
Home/ History Readings/ Group items tagged kurzweil


Javier E

AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045' ...

  • American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer, in which those predictions no longer seem so wacky.
  • Your 2029 and 2045 projections haven’t changed…I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time.
  • Why write this book? The Singularity Is Near talked about the future, but that was 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.
  • It is hard to imagine what this would be like, but it doesn’t sound very appealing… Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!
  • The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation.
  • What’s missing currently to bring AI to where you are predicting it will be in 2029? One is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain
  • LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.
  • What exactly is the Singularity? Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.
  • Why should we believe your dates? I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
  • I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]
  • All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.
  • Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.
  • The book looks in detail at AI’s job-killing potential. Should we be worried? Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense even 10 years ago. Today we have more jobs than we’ve ever had, and US average personal income per hour worked is 10 times what it was 100 years ago, adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.
  • Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years.
  • What is your own plan for immortality? My first plan is to stay alive, therefore reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s
  • I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.
  • What should we be doing now to best prepare for the future? It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
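A quick back-of-envelope check on the numbers in the interview above: if price-performance doubles every 15 months, a millionfold gain takes roughly 20 doublings, or about 25 years. A minimal sketch of that arithmetic (the 15-month doubling figure is Kurzweil's; extrapolating it at a constant rate is an assumption):

```python
import math

# How long does a millionfold gain take if price-performance
# doubles every 15 months, as Kurzweil claims?
MONTHS_PER_DOUBLING = 15
TARGET_FACTOR = 1_000_000

doublings = math.log2(TARGET_FACTOR)          # ~19.9 doublings
years = doublings * MONTHS_PER_DOUBLING / 12  # ~24.9 years

print(f"{doublings:.1f} doublings, ~{years:.1f} years")
# → 19.9 doublings, ~24.9 years
```

On these assumptions, a millionfold expansion starting around 2020 lands close to Kurzweil's 2045 date, which is presumably why the two figures travel together in his argument.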
Javier E

The super-rich 'preppers' planning to save themselves from the apocalypse

  • at least as far as these gentlemen were concerned, this was a talk about the future of technology.
  • Taking their cue from Tesla founder Elon Musk colonising Mars, Palantir’s Peter Thiel reversing the ageing process, or artificial intelligence developers Sam Altman and Ray Kurzweil uploading their minds into supercomputers, they were preparing for a digital future that had less to do with making the world a better place than it did with transcending the human condition altogether. Their extreme wealth and privilege served only to make them obsessed with insulating themselves from the very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic and resource depletion. For them, the future of technology is about only one thing: escape from the rest of us.
  • These people once showered the world with madly optimistic business plans for how technology might benefit human society. Now they’ve reduced technological progress to a video game that one of them wins by finding the escape hatch.
  • these catastrophising billionaires are the presumptive winners of the digital economy – the supposed champions of the survival-of-the-fittest business landscape that’s fuelling most of this speculation to begin with.
  • What I came to realise was that these men are actually the losers. The billionaires who called me out to the desert to evaluate their bunker strategies are not the victors of the economic game so much as the victims of its perversely limited rules. More than anything, they have succumbed to a mindset where “winning” means earning enough money to insulate themselves from the damage they are creating by earning money in that way.
  • Never before have our society’s most powerful players assumed that the primary impact of their own conquests would be to render the world itself unliveable for everyone else
  • Nor have they ever before had the technologies through which to programme their sensibilities into the very fabric of our society. The landscape is alive with algorithms and intelligences actively encouraging these selfish and isolationist outlooks. Those sociopathic enough to embrace them are rewarded with cash and control over the rest of us. It’s a self-reinforcing feedback loop. This is new.
  • JC Cole is no hippy environmentalist, but his business model is based on the same communitarian spirit I tried to convey to the billionaires: the way to keep the hungry hordes from storming the gates is by getting them food security now. So for $3m, investors not only get a maximum-security compound in which to ride out the coming plague, solar storm, or electric-grid collapse. They also get a stake in a potentially profitable network of local farm franchises that could reduce the probability of a catastrophic event in the first place. His business would do its best to ensure there are as few hungry children at the gate as possible when the time comes to lock down.
  • So far, JC Cole has been unable to convince anyone to invest in American Heritage Farms. That doesn’t mean no one is investing in such schemes. It’s just that the ones that attract more attention and cash don’t generally have these cooperative components. They’re more for people who want to go it alone
  • Most billionaire preppers don’t want to have to learn to get along with a community of farmers or, worse, spend their winnings funding a national food resilience programme. The mindset that requires safe havens is less concerned with preventing moral dilemmas than simply keeping them out of sight.
  • Rising S Company in Texas builds and installs bunkers and tornado shelters for as little as $40,000 for an 8ft by 12ft emergency hideout all the way up to the $8.3m luxury series “Aristocrat”, complete with pool and bowling lane. The enterprise originally catered to families seeking temporary storm shelters, before it went into the long-term apocalypse business. The company logo, complete with three crucifixes, suggests their services are geared more toward Christian evangelist preppers in red-state America than billionaire tech bros playing out sci-fi scenarios.
  • Ultra-elite shelters such as the Oppidum in the Czech Republic claim to cater to the billionaire class, and pay more attention to the long-term psychological health of residents. They provide an imitation of natural light, such as a pool with a simulated sunlit garden area, a wine vault, and other amenities to make the wealthy feel at home.
  • On closer analysis, however, the probability of a fortified bunker actually protecting its occupants from the reality of, well, reality, is very slim. For one, the closed ecosystems of underground facilities are preposterously brittle. For example, an indoor, sealed hydroponic garden is vulnerable to contamination. Vertical farms with moisture sensors and computer-controlled irrigation systems look great in business plans and on the rooftops of Bay Area startups; when a pallet of topsoil or a row of crops goes wrong, it can simply be pulled and replaced. The hermetically sealed apocalypse “grow room” doesn’t allow for such do-overs.
  • while a private island may be a good place to wait out a temporary plague, turning it into a self-sufficient, defensible ocean fortress is harder than it sounds. Small islands are utterly dependent on air and sea deliveries for basic staples. Solar panels and water filtration equipment need to be replaced and serviced at regular intervals. The billionaires who reside in such locales are more, not less, dependent on complex supply chains than those of us embedded in industrial civilisation.
  • If they wanted to test their bunker plans, they’d have hired a security expert from Blackwater or the Pentagon. They seemed to want something more. Their language went far beyond questions of disaster preparedness and verged on politics and philosophy: words such as individuality, sovereignty, governance and autonomy.
  • it wasn’t their actual bunker strategies I had been brought out to evaluate so much as the philosophy and mathematics they were using to justify their commitment to escape. They were working out what I’ve come to call the insulation equation: could they earn enough money to insulate themselves from the reality they were creating by earning money in this way? Was there any valid justification for striving to be so successful that they could simply leave the rest of us behind – apocalypse or not?
Javier E

Netanyahu's Dark Worldview - The Atlantic

  • as Netanyahu soon made clear, when it comes to AI, he believes that bad outcomes are the likely outcomes. The Israeli leader interrogated OpenAI’s Brockman about the impact of his company’s creations on the job market. By replacing more and more workers, Netanyahu argued, AI threatens to “cannibalize a lot more jobs than you create,” leaving many people adrift and unable to contribute to the economy. When Brockman suggested that AI could usher in a world where people would not have to work, Netanyahu countered that the benefits of the technology were unlikely to accrue to most people, because the data, computational power, and engineering talent required for AI are concentrated in a few countries.
  • Netanyahu was a naysayer about the Arab Spring, unwilling to join the rapturous ranks of hopeful politicians, activists, and democracy advocates. But he was also right.
  • The other panelists did not. Brockman briefly pivoted to talk about OpenAI’s Israeli employees before saying, “The world we should shoot for is one where all the boats are rising.” But other than mentioning the possibility of a universal basic income for people living in an AI-saturated society, Brockman agreed that “creative solutions” to this problem were needed—without providing any.
  • The AI boosters emphasized the incredible potential of their innovation, and Netanyahu raised practical objections to their enthusiasm. They cited futurists such as Ray Kurzweil to paint a bright picture of a post-AI world; Netanyahu cited the Bible and the medieval Jewish philosopher Maimonides to caution against upending human institutions and subordinating our existence to machines.
  • Musk matter-of-factly explained that the “very positive scenario of AI” is “actually in a lot of ways a description of heaven,” where “you can have whatever you want, you don’t need to work, you have no obligations, any illness you have can be cured,” and death is “a choice.” Netanyahu incredulously retorted, “You want this world?”
  • By the time the panel began to wind down, the Israeli leader had seemingly made up his mind. “This is like having nuclear technology in the Stone Age,” he said. “The pace of development [is] outpacing what solutions we need to put in place to maximize the benefits and limit the risks.”
  • “You have these trillion-dollar [AI] companies that are produced overnight, and they concentrate enormous wealth and power with a smaller and smaller number of people,” the Israeli leader said, noting that even a free-market evangelist like himself was unsettled by such monopolization. “That will create a bigger and bigger distance between the haves and the have-nots, and that’s another thing that causes tremendous instability in our world. And I don’t know if you have an idea of how you overcome that?”
  • This was less because he is a prophet and more because he is a pessimist. When it comes to grandiose predictions about a better tomorrow—whether through peace with the Palestinians, a nuclear deal with Iran, or the advent of artificial intelligence—Netanyahu always bets against. Informed by a dark reading of Jewish history, he is a cynic about human nature and a skeptic of human progress.
  • After all, no matter how far civilization has advanced, it has always found ways to persecute the powerless, most notably, in his mind, the Jews. For Netanyahu, the arc of history is long, and it bends toward whoever is bending it.
  • This is why the Israeli leader puts little stock in utopian promises, whether they are made by progressive internationalists or Silicon Valley futurists, and places his trust in hard power instead
  • “The weak crumble, are slaughtered and are erased from history while the strong, for good or for ill, survive. The strong are respected, and alliances are made with the strong, and in the end peace is made with the strong.”
  • To his many critics, myself included, Netanyahu’s refusal to envision a different future makes him a “creature of the bunker”, perpetually governed by fear. Although his pessimism may sometimes be vindicated, it also holds his country hostage.
  • In other words, the same cynicism that drives Netanyahu’s reactionary politics is the thing that makes him an astute interrogator of AI and its promoters. Just as he doesn’t trust others not to use their power to endanger Jews, he doesn’t trust AI companies or AI itself to police its rapidly growing capabilities.