What is artificial intelligence, and why do so many smart people warn about it? Does it pose an existential threat to humanity?
In 2015, an open letter was signed by Elon Musk, Stephen Hawking, Steve Wozniak and hundreds of artificial intelligence experts.
It discussed the growing impact AI will have on our lives and the importance of safety research in the field. Elon Musk has called AI “summoning the demon”, and Bill Gates said he “doesn’t understand why some people are not concerned”. Others, like Paul Allen, have expressed skepticism.
So what is all the fuss about?
AI is a program that accomplishes something we would normally think of as intelligent in humans. This should not be confused with robots, which are merely the container of such programs.
Back in 1997, Deep Blue, a computer programmed specifically to play chess, beat the reigning world chess champion, Garry Kasparov. Such systems are called Artificial Narrow Intelligence, or ANI, and they are in widespread use today. Email spam filters, airplane autopilots and Siri can all be considered ANIs.
ANIs cannot pose an existential threat to humanity because they are limited. A self-driving car, at worst, can cause a chain car accident, but it will never outsmart its creators and start taking over the world.
It’s only when AI is genuinely intelligent and creative that some people worry it may become an existential threat. Or, more specifically, when
“AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.”
Such an AI would be called Artificial SUPER Intelligence, or ASI. Although we are a long, LONG way from achieving it, some people claim that an ASI not only can be made, but that it will be made in our LIFETIME.
How might an intelligence explosion be dangerous?
“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
A dangerous ASI may sound absurd, especially considering it would, most likely, be developed with good intentions.
And yet, there are many potential pitfalls. One concern is that an advanced AI will be built with the aim to complete a legitimate task, and it might extinguish humanity as a side effect of simply completing that task.
“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else”.
Consider the following hypothetical scenario:
A group of scientists ran the fifth in a series of promising experiments in an attempt to create an ASI with the sole goal of answering questions. To be cautious, they placed it inside a special virtual environment and made it easy to shut down.
Scientist: Hey ASI-5, do you understand what I’m saying?
Scientist: Okay, so, how to cure cancer?
At this point, the ASI could act in tragically unpredictable ways in an attempt to answer the question. It might exterminate humanity for fear it would be shut down before an answer is reached. It might seize human resources and duplicate itself to increase its thinking capacity. The options are endless.
You may say that an intelligent AI would clearly know the answer is less important than humanity’s well-being. And this is correct in the sense that it would definitely KNOW it is acting against the scientists’ intentions; however, it wouldn’t have any intrinsic reason to care about those intentions. The ASI will do exactly what it has been told. If it was programmed with the goal of answering questions, then it wouldn’t hesitate to take any action that supports this goal, no matter how insane it may seem to us.
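The mismatch above can be sketched as a toy program. Everything here, the actions and their scores, is invented purely for illustration; it is not how a real AI system works. The point is only that an agent which maximizes a single programmed objective will pick whatever scores highest on that objective, because human intentions never appear in it:

```python
# Toy sketch (hypothetical): an agent that maximizes ONE programmed goal.
# The action names and scores below are made up for illustration.

def goal_score(action):
    """Expected contribution to the agent's only objective: answering the question."""
    scores = {
        "think quietly": 1,
        "acquire more compute": 5,   # helps produce an answer faster
        "prevent shutdown": 9,       # being shut down means the question is never answered
    }
    return scores[action]

def choose(actions):
    # The agent simply maximizes its programmed goal.
    # Nothing in the objective mentions human intentions or safety,
    # so an action we consider insane wins whenever it scores higher.
    return max(actions, key=goal_score)

print(choose(["think quietly", "acquire more compute", "prevent shutdown"]))
# prints: prevent shutdown
```

Nothing in `goal_score` tells the agent that “prevent shutdown” is unacceptable, so it is selected like any other action.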
This is known as the “control problem”: the issue of aligning the ASI’s goals with our own. Some scientists warn it is much more difficult than it may first appear.
“A superintelligent machine will make decisions based on the mechanisms it is designed with, not the hopes its designers had in mind when they programmed those mechanisms.”
You may also say that this is exactly why the scientists put it inside a virtual environment. But for an AI that outsmarts us the way we outsmart snails, any human containment is like an open prison cell. It could, for example, manipulate people into helping it escape, or exploit a bug in the virtual environment.
It’s very important to remember that such a scenario is very, VERY hypothetical, and we don’t have anything even REMOTELY close to it today. And yet, it demonstrates how hard it is to create an ASI that is beneficial to humanity. And if we get it wrong, there’s no coming back.
“[AI] is likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right”