
TOK Friends / Group items tagged superintelligence


sissij

Elon Musk's New Company to Merge Human Brains with Machines | Big Think - 1 views

  • His new company Neuralink will work on linking human brains with computers, utilizing “neural lace” technology.
  • Musk talked recently about this kind of technology, seeing it as a way for humans to interact with machines and superintelligences.
  • What's next? We'll wait for the details. Elon Musk's influence on our modern life and aura certainly continue to grow, especially if he delivers on the promises of his various enterprises.
  • My mom had a little research project on Tesla and assigned part of it to me, so I know some of Tesla's strategies and ideas, though not in depth. I found that Tesla and Elon Musk have very innovative ideas for their products. The electric car is Tesla's signature, and the design of the car and the idea of being green are really friendly to the Earth's environment. Now they are talking about the new idea of merging human intelligence with machines. --Sissi (4/2/2017)
Javier E

But What Would the End of Humanity Mean for Me? - James Hamblin - The Atlantic - 0 views

  • Tegmark is more worried about much more immediate threats, which he calls existential risks. That’s a term borrowed from physicist Nick Bostrom, director of Oxford University’s Future of Humanity Institute, a research collective modeling the potential range of human expansion into the cosmos
  • "I am finding it increasingly plausible that existential risk is the biggest moral issue in the world, even if it hasn’t gone mainstream yet,"
  • Existential risks, as Tegmark describes them, are things that are “not just a little bit bad, like a parking ticket, but really bad. Things that could really mess up or wipe out human civilization.”
  • The single existential risk that Tegmark worries about most is unfriendly artificial intelligence. That is, when computers are able to start improving themselves, there will be a rapid increase in their capacities, and then, Tegmark says, it’s very difficult to predict what will happen.
  • Tegmark told Lex Berko at Motherboard earlier this year, "I would guess there’s about a 60 percent chance that I’m not going to die of old age, but from some kind of human-caused calamity. Which would suggest that I should spend a significant portion of my time actually worrying about this. We should in society, too."
  • "Longer term—and this might mean 10 years, it might mean 50 or 100 years, depending on who you ask—when computers can do everything we can do," Tegmark said, “after that they will probably very rapidly get vastly better than us at everything, and we’ll face this question we talked about in the Huffington Post article: whether there’s really a place for us after that, or not.”
  • "This is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”
  • Tegmark and his op-ed co-author Frank Wilczek, the Nobel laureate, draw examples of cold-war automated systems that assessed threats and resulted in false alarms and near misses. “In those instances some human intervened at the last moment and saved us from horrible consequences,” Wilczek told me earlier that day. “That might not happen in the future.”
  • there are still enough nuclear weapons in existence to incinerate all of Earth’s dense population centers, but that wouldn't kill everyone immediately. The smoldering cities would send sun-blocking soot into the stratosphere that would trigger a crop-killing climate shift, and that’s what would kill us all
  • “We are very reckless with this planet, with civilization,” Tegmark said. “We basically play Russian roulette.” The key is to think more long term, “not just about the next election cycle or the next Justin Bieber album.”
  • “There are several issues that arise, ranging from climate change to artificial intelligence to biological warfare to asteroids that might collide with the earth,” Wilczek said of the group’s launch. “They are very serious risks that don’t get much attention.”
  • a widely perceived issue is when intelligent entities start to take on a life of their own. They revolutionized the way we understand chess, for instance. That’s pretty harmless. But one can imagine if they revolutionized the way we think about warfare or finance, either those entities themselves or the people that control them. It could pose some disquieting perturbations on the rest of our lives.”
  • Wilczek’s particularly concerned about a subset of artificial intelligence: drone warriors. “Not necessarily robots,” Wilczek told me, “although robot warriors could be a big issue, too. It could just be superintelligence that’s in a cloud. It doesn’t have to be embodied in the usual sense.”
  • it’s important not to anthropomorphize artificial intelligence. It's best to think of it as a primordial force of nature—strong and indifferent. In the case of chess, an A.I. models chess moves, predicts outcomes, and moves accordingly. If winning at chess meant destroying humanity, it might do that.
  • Even if programmers tried to program an A.I. to be benevolent, it could destroy us inadvertently. Andersen’s example in Aeon is that an A.I. designed to try and maximize human happiness might think that flooding your bloodstream with heroin is the best way to do that.
  • “It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.”
  • Even within A.I. research, Tegmark admits, “There is absolutely not a consensus that we should be concerned about this.” But there is a lot of concern, and a sense of powerlessness. Because, concretely, what can you do? “The thing we should worry about is that we’re not worried.”
  • Tegmark brings it to Earth with a case-example about purchasing a stroller: If you could spend more for a good one or less for one that “sometimes collapses and crushes the baby, but nobody’s been able to prove that it is caused by any design flaw. But it’s 10 percent off! So which one are you going to buy?”
  • “There are seven billion of us on this little spinning ball in space. And we have so much opportunity," Tegmark said. "We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out.”
  • Ninety-nine percent of the species that have lived on Earth have gone extinct; why should we not? Seeing the biggest picture of humanity and the planet is the heart of this. It’s not meant to be about inspiring terror or doom. Sometimes that is what it takes to draw us out of the little things, where in the day-to-day we lose sight of enormous potentials.
Javier E

On the Controllability of Artificial Intelligence: An Analysis of Limitations | Journal... - 0 views

  • In order to reap the benefits and avoid the pitfalls of such a powerful technology it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established
  • In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI cannot be fully controlled
Javier E

GPT-4 has arrived. It will blow ChatGPT out of the water. - The Washington Post - 0 views

  • GPT-4, in contrast, is a state-of-the-art system capable of not just creating words but also describing images in response to a person’s simple written commands.
  • When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up.
  • an AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things
  • Those promises have also fueled anxiety over how people will be able to compete for jobs outsourced to eerily refined machines or trust the accuracy of what they see online.
  • Officials with the San Francisco lab said GPT-4’s “multimodal” training across text and images would allow it to escape the chat box and more fully emulate a world of color and imagery, surpassing ChatGPT in its “advanced reasoning capabilities.”
  • A person could upload an image and GPT-4 could caption it for them, describing the objects and scene.
  • AI language models often confidently offer wrong answers because they are designed to spit out cogent phrases, not actual facts. And because they have been trained on internet text and imagery, they have also learned to emulate human biases of race, gender, religion and class.
  • GPT-4 still makes many of the errors of previous versions, including “hallucinating” nonsense, perpetuating social biases and offering bad advice. It also lacks knowledge of events that happened after about September 2021, when its training data was finalized, and “does not learn from its experience,” limiting people’s ability to teach it new things.
  • Microsoft has invested billions of dollars in OpenAI in the hope its technology will become a secret weapon for its workplace software, search engine and other online ambitions. It has marketed the technology as a super-efficient companion that can handle mindless work and free people for creative pursuits, helping one software developer to do the work of an entire team or allowing a mom-and-pop shop to design a professional advertising campaign without outside help.
  • it could lead to business models and creative ventures no one can predict.
  • sparked criticism that the companies are rushing to exploit an untested, unregulated and unpredictable technology that could deceive people, undermine artists’ work and lead to real-world harm.
  • the company held back the feature to better understand potential risks. As one example, she said, the model might be able to look at an image of a big group of people and offer up known information about them, including their identities — a possible facial recognition use case that could be used for mass surveillance.
  • OpenAI researchers wrote, “As GPT-4 and AI systems like it are adopted more widely,” they “will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in.”
  • “We can agree as a society broadly on some harms that a model should not contribute to,” such as building a nuclear bomb or generating child sexual abuse material, she said. “But many harms are nuanced and primarily affect marginalized groups,” she added, and those harmful biases, especially across other languages, “cannot be a secondary consideration in performance.”
  • OpenAI said its new model would be able to handle more than 25,000 words of text, a leap forward that could facilitate longer conversations and allow for the searching and analysis of long documents.
  • OpenAI developers said GPT-4 was more likely to provide factual responses and less likely to refuse harmless requests
  • Duolingo, the language learning app, has already used GPT-4 to introduce new features, such as an AI conversation partner and a tool that tells users why an answer was incorrect.
  • The company did not share evaluations around bias that have become increasingly common after pressure from AI ethicists.
  • GPT-4 will have competition in the growing field of multisensory AI. DeepMind, an AI firm owned by Google’s parent company Alphabet, last year released a “generalist” model named Gato that can describe images and play video games. And Google this month released a multimodal system, PaLM-E, that folded AI vision and language expertise into a one-armed robot on wheels: If someone told it to go fetch some chips, for instance, it could comprehend the request, wheel over to a drawer and choose the right bag.
  • The systems, though — as critics and AI researchers are quick to point out — are merely repeating patterns and associations found in their training data without a clear understanding of what they are saying or when they are wrong.
  • GPT-4, the fourth “generative pre-trained transformer” since OpenAI’s first release in 2018, relies on a breakthrough neural-network technique invented in 2017, known as the transformer, that rapidly advanced how AI systems can analyze patterns in human speech and imagery.
  • The systems are “pre-trained” by analyzing trillions of words and images taken from across the internet: news articles, restaurant reviews and message-board arguments; memes, family photos and works of art.
  • Giant supercomputer clusters of graphics processing chips map out their statistical patterns — learning which words tended to follow each other in phrases, for instance — so that the AI can mimic those patterns, automatically crafting long passages of text or detailed images, one word or pixel at a time. (A minimal illustrative sketch of this next-word idea appears after this list.)
  • In 2019, the company refused to publicly release GPT-2, saying it was so good they were concerned about the “malicious applications” of its use, from automated spam avalanches to mass impersonation and disinformation campaigns.
  • Altman has also marketed OpenAI’s vision with the aura of science fiction come to life. In a blog post last month, he said the company was planning for ways to ensure that “all of humanity” benefits from “artificial general intelligence,” or AGI — an industry term for the still-fantastical idea of an AI superintelligence that is generally as smart as, or smarter than, humans themselves.
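
The annotations above describe generation as predicting one word at a time from learned statistics. The Python below is a hypothetical, deliberately tiny sketch of that idea only: it builds a bigram word-count table from an invented toy corpus and samples each next word in proportion to those counts. It is far simpler than a transformer and is not OpenAI's code; the names corpus, follows, and generate are made up for this illustration.

    # Toy illustration only: a bigram word-count model, vastly simpler than a
    # transformer, showing "learn which words follow which, then generate one
    # word at a time."
    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # "Pre-training": count how often each word follows each other word.
    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def generate(start, length=8):
        """Generate text one word at a time from the learned counts."""
        word, output = start, [start]
        for _ in range(length):
            counts = follows.get(word)
            if not counts:
                break
            # Sample the next word in proportion to how often it followed this one.
            choices, weights = zip(*counts.items())
            word = random.choices(choices, weights=weights, k=1)[0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. "the cat sat on the mat and the dog"

Real systems replace the count table with a neural network trained on trillions of words and images, but the generation loop (predict, sample, append, repeat) has the same basic shape.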
Javier E

The Only Way to Deal With the Threat From AI? Shut It Down | Time - 0 views

  • ELIEZER YUDKOWSKY