Regular Old Intelligence is Sufficient--Even Lovely - 0 views
billmckibben.substack.com/...old-intelligence-is-sufficient
ai control why benefits evaluation human crisis
shared by Javier E on 02 Apr 23
-
Ezra Klein has done some of the most dedicated reporting on the topic since he moved to the Bay Area a few years ago, talking with many of the people creating this new technology.
-
one is that the people building these systems have only a limited sense of what’s actually happening inside the black box—the bot is doing endless calculations instantaneously, but not in a way even its inventors can actually follow
-
second, the people inventing them think they are potentially incredibly dangerous: ten percent of them, in fact, think they might extinguish the human species. They don’t know exactly how, but think Sorcerer’s Apprentice (or Google “paper clip maximizer”)
-
One pundit after another explains that an AI program from DeepMind worked far faster than scientists doing experiments to uncover the basic structure of all the different proteins, which will allow quicker drug development. It’s regarded as ipso facto better because it’s faster, and hence—implicitly—worth taking the risks that come with AI.
-
That is, it seems to me, a dumb answer from smart people—the answer not of people who have thought hard about ethics or even outcomes, but the answer that would be supplied by a kind of cultist.
-
it does go, fairly neatly, with the default modern assumption that if we can do something we should do it, which is what I want to talk about. The question that I think very few have bothered to answer is, why?
-
But why? The sun won’t blow up for a few billion years, meaning that if we don’t manage to drive ourselves to extinction, we’ve got all the time in the world. If it takes a generation or two for normal intelligence to come up with the structure of all the proteins, some people may die because a drug isn’t developed in time for their particular disease, but erring on the side of avoiding extinction seems mathematically sound
-
Allowing that we’re already good enough—indeed that our limitations are intrinsic to us, define us, and make us human—should guide us towards trying to shut down this technology before it does deep damage.
-
The other challenge that people cite, over and over again, to justify running the risks of AI is to “combat climate change”
-
As it happens, regular old intelligence has already given us most of what we need: engineers have cut the cost of solar power, wind power, and the batteries to store the energy they produce so dramatically that they’re now the cheapest power on earth
-
We don’t actually need artificial intelligence in this case; we need natural compassion, so that we work with the necessary speed to deploy these technologies.
-
All of this is a way of saying something we don’t say as often as we should: humans are good enough. We don’t require improvement. We can solve the challenges we face, as humans.
-
It may take us longer than if we can employ some “new form of intelligence,” but slow and steady is the whole point of the race.
-
“I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.”
-
here’s the thing: pausing, slowing down, stopping calls on the one human gift shared by no other creature, and perhaps by no machine. We are the animal that can, if we want to, decide not to do something we’re capable of doing.
-
In individual terms, that ability forms the core of our ethical and religious systems; in societal terms it’s been crucial as technology has developed over the last century. We’ve, so far, reined in nuclear and biological weapons, designer babies, and a few other maximally dangerous new inventions