


Javier E

AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045' ...

  • American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer
  • no longer seem so wacky.
  • Your 2029 and 2045 projections haven’t changed…I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time.
  • Why write this book? The Singularity Is Near talked about the future, but 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.
  • It is hard to imagine what this would be like, but it doesn’t sound very appealing… Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!
  • The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation.
  • What’s missing currently to bring AI to where you are predicting it will be in 2029? One is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain
  • LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.
  • What exactly is the Singularity? Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.
  • Why should we believe your dates? I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
  • I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]
  • All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.
  • Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.
  • The book looks in detail at AI’s job-killing potential. Should we be worried? Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had, and US average personal income per hour worked is 10 times what it was 100 years ago, adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.
  • Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years.
  • What is your own plan for immortality? My first plan is to stay alive, therefore reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s
  • I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.
  • What should we be doing now to best prepare for the future? It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
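Kurzweil’s headline numbers above can be sanity-checked with simple arithmetic: if computing price-performance doubles every 15 months, as he claims, a millionfold expansion takes roughly 20 doublings. A minimal sketch of that compounding (the 15-month rate and the millionfold target are his figures; everything else is plain arithmetic, not an endorsement of the forecast):

```python
import math

# Kurzweil's claim: price-performance doubles every 15 months.
# How many doublings does a millionfold expansion require,
# and how long would that take at the stated rate?
DOUBLING_MONTHS = 15
TARGET_FACTOR = 1_000_000

doublings = math.log2(TARGET_FACTOR)   # ~19.93 doublings
months = doublings * DOUBLING_MONTHS   # ~299 months
years = months / 12                    # ~25 years

print(f"{doublings:.2f} doublings -> {years:.1f} years")
```

Starting the clock in the mid-2020s, about 25 years of sustained doubling lands in the vicinity of his 2045 date, which appears to be the basis of the claim; the sketch only illustrates the compounding, not whether the doubling rate will hold.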

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company.
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Mr. Kokotajlo said he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist.
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”

Opinion | Civil Liberties Make for Strange Bedfellows - The New York Times

  • where is the line between government persuasion and government coercion?
  • When the government can pick sides in an ideological debate and wield its power to suppress opposing views, then you’ve laid the foundation for authoritarianism. If free speech is the “dread of tyrants,” then censorship is one of the tyrant’s greatest weapons.
  • In Justice Sotomayor’s words, “At the heart of the First Amendment’s Free Speech Clause is the recognition that viewpoint discrimination is uniquely harmful to a free and democratic society.”
  • As Douglass argued, “To suppress free speech is a double wrong. It violates the rights of the hearer as well as those of the speaker.”
  • Acts of intimidation are as grave a threat to free speech as restrictive government policies. Again, Douglass said it well: “There can be no right of speech where any man, however lifted up, or however humble, however young, or however old, is overawed by force, and compelled to suppress his honest sentiments.”

What does giving up open up? - by Isabelle Drury

  • A friend of mine recently ran a climate education session with a local university. The workshop guided the students through the science of the changing climate and the findings of the IPCC reports and, apparently, empowered them to take action.
  • Empowerment was not how the students responded.
  • Instead, they pushed back with arguments blaming corporations for causing climate change, asking why they had to give up stuff when big businesses are allowed to freely fuck it all up, and declaring their lives are hard enough already!
  • Don’t get me wrong; it’s not that I don’t hold these beliefs, or that I never buy fast fashion or a piece of plastic. I hold these beliefs whilst sometimes buying new clothes or out-of-season strawberries in a plastic container. It’s the justification I have a problem with.
  • Because I see these arguments constantly. Why should *I* have to do XYZ if a large corporation is doing ABC? Why can’t *I* go on holiday when an insert celebrity is flying their private jet 10x a week? Why should *I* care about this thing when no one cares about this other thing?! 
  • I’ve become a staunch believer there are few excuses when it comes to actions which directly harm our planet. 
  • We live in a society where unless you’re very wealthy and VERY time-rich you cannot exist without impacting the planet. Let’s not kid ourselves into believing there is any other reason we live in this way. 
  • the real question we should be posing is what does giving this stuff up open up for us? How does living in this different way enrich and improve our lives and our wider community’s lives? 
  • You don’t overconsume because your life is hard and big corporations exist, you overconsume because you live in a society built in a way to funnel you into doing exactly that.1
  • can you blame ‘em? With the narrative younger people are fed these days: you’ll never own a house; the job market is atrocious; good luck building any kind of safety net; another oil and gas line has been approved; one war is brewing and one has broken out; oh look! another recession.
  • I wrote ‘the narrative younger people are fed’ because whilst these things are true, I often feel they’re used as a way to keep us down, to keep us depressed and complacent so we don’t rebel.
  • I think these students believe these narratives to be true; doing so keeps them safe in their current way of living and allows them to get through the day without as much mental turmoil.
  • I’ve been there. I tried to do it all. I tried to be zero-waste-thrift-store-girly, but it drove me crazy. One person can’t live in a completely ‘sustainable’ way, without ever leaving a footprint on this planet; it’s impossible and will only leave you feeling extremely exhausted and extremely guilty.
  • The truth is sometimes I buy new socks and plastic contact lenses. Sometimes I want to buy a nice bag and a new pair of shoes and fit in with the wider society and others in my age group. Yes, it plays directly into capitalism’s hands; yes, I am doing what the man wants me to do; I still feel guilty, and I still question all my life choices, but god damn, you gotta live.
  • Rather than saying we have to give things up to Save The Earth!, that we have to stop consuming to Live Sustainably!, we need to tell people why living in this alternative way is so rich, so nourishing, so plentiful, so beautiful. 
  • I’ve quoted before and will quote again from Donella Meadows: “People don’t need enormous cars; they need respect. They don’t need closets full of clothes; they need to feel attractive and they need excitement, variety and beauty. [...] People need identity, community, challenge, acknowledgement, love, joy. To try and fill these needs with material things is to set up an unquenchable appetite for false solutions to real and never-satisfied problems. The resulting psychological emptiness is one of the major forces behind the desire for material growth.”2
  • Our climate conversations–our climate education–cannot just focus on what we need to give up, instead it must focus on what we get to build and welcome into our lives when we’re not wasting our money, time, and energy on buying or not buying new clothes or plastic-wrapped food.
  • The majority of my friends growing up did not have hobbies, we found joy and community and connection in consumption. Yes, consuming less is an essential piece of the climate puzzle, but telling people they can no longer consume will not get us there, it will only be taking away many people’s only sense of joy and satisfaction in life. 
  • We will not empower people by telling them to Be More Sustainable!, we will empower people by inviting them to create a world that finds value and beauty and satisfaction in more human ways, without the dark tint capitalist society has clouded our view with. 
  • Those of us in the global north are some of the biggest individual contributors to climate change, if we all lived like the average American, we would need 5.1 Earths to sustain us all (sorry, we can’t fob it all off to corporations). 
  • But, in a way, we are often the ones who are most cut off from any possibility of reactivating older institutions, ones that know how to live in harmony with the environment and the local land and could guide us to a better future.
  • We’re so dependent on existing systems we don’t even notice they exist–until they break down. Just take away one piece of our modern lifestyles and we are suddenly unable to function. A power cut? No cooking, no heating, no warm showers, not even the ability to boil the kettle for a lukewarm bath.
  • A food supply chain issue? I don’t know a single person in my local area who grows any type of fruit or vegetables
  • Westerners are often cast as both the villain and the hero of climate change. We’re the villain because we’ve created so many of these problems with our unquenchable thirst to pillage, develop, and create more and more crap.
  • But we also see ourselves as the hero: we’re going to save the world with our unrealistic techno-fixes (that don’t yet exist). We have the self-important sense that if we just had the right technology we could fix all of the world’s problems and then everyone would be happy.
  • I don’t know how we can turn back the wheels of modernised helplessness, but whilst we figure this one out, we need to consider what we want to bring into the new world.
  • learning new skills for a new future is an act of resilience. Creating a community of individuals who can look after each other is an act of resilience. Building a better way of life for yourself–and those around you–is an act of resilience. 
  • I can’t yet name the plants I meet on my walks, nor can I name the bird calls I hear outside of my window, but I can learn how to feed my family with food grown in my community garden, support builders and creators using reclaimed materials, and connect with people who live a stone’s throw away from my front door
  • Anything we learn to do for ourselves–actions that can be taken out of the hands of large corporations in an act of helplessness–is a way of helping our Earth. This is my act of resilience. 