Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven’t even landed on the planet yet! – Andrew Ng
Elon Musk has warned us. Stephen Hawking has warned us. Many others are worried about it. Artificial Superintelligence may sound like the stuff of science fiction movies, but it could very well become a reality this century. And according to the aforementioned thinkers and myself, it could mean the end of humanity. However, many people think we shouldn’t be worried! One of them is, you guessed it, Andrew Ng.
For those of you who don’t know Andrew Ng: he co-founded and led Google Brain, and he was Vice President and Chief Scientist at Baidu, where he led the Artificial Intelligence Group. It seems fair to not just call him a Machine Learning expert: he’s more like the Machine Learning expert.
Andrew Ng is obviously a man with a lot of knowledge on the subject of AI, but when it comes to the still-theoretical concept of Artificial Superintelligence, I think he is dangerously wrong in his assessment of its potential danger. In a response to a question posted on Quora, he said that worrying about Artificial Superintelligence is unnecessary (or, at least, that we shouldn’t worry yet). Before we dive into his opinion on the matter, let’s first define Artificial Superintelligence and discuss its potential danger.
Artificial Superintelligence
In order to discuss Artificial Superintelligence, let’s start with the basics. First of all, what is intelligence? I like Legg and Hutter’s definition: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” An agent could be a human, an animal or, of course, a computer. In the last case, we call it Artificial Intelligence (AI). AI today is not nearly as intelligent as humans are when it comes to that wide range of environments: it has had (often superhuman) success in specific environments, like chess. AI that is as intelligent as humans are across all of their environments would be Artificial General Intelligence (AGI). And an AI that is (far) smarter than that is called an Artificial Superintelligence (ASI).
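For readers who want the formal version: in their 2007 paper “Universal Intelligence”, Legg and Hutter compress this informal idea into a single quantity. The following is a sketch from memory, so treat the details as approximate and see the original paper for the precise setup:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}$$

Here $\pi$ is the agent, $E$ is a class of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected reward the agent earns in that environment. In words: intelligence is the reward an agent collects across many environments, with simpler environments weighted more heavily.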
Why are Musk and the others worried?
The fear around ASI might be best explained by starting with a relatively common observation: humans rule Earth because they are more intelligent than all the other animals. Because of this superior intelligence, we have invented technologies that help us rule, and other animals have suffered a lot because of it. If intelligence is such a determining factor in who rules this planet, then ASI, which by definition is (much) more intelligent than we are, would have the power to overrule us. Our fate would then depend on the desires of this ASI, and those will not by default work in our favor. Just think: if we want to build a house at a spot where a colony of ants has its home, do we reconsider our plan? No, we build the house anyway. Too bad for the ants!
Note that we do not (necessarily) hate the ants: we’re just more or less indifferent towards them. We’re not evil ant killers who crush every ant we can find; we just don’t rethink our plans when it turns out ants are going to die as a side effect. In the same way, an ASI might have goals that kill humanity as a side effect. Nick Bostrom put forward a thought experiment about this, called the paperclip maximizer. In it, an AGI is tasked with maximizing the number of paperclips in its collection. After it upgrades its own intelligence, becoming an ASI, it eventually starts to transform more and more of Earth into paperclip manufacturing facilities, just to make more and more paperclips. That’s its goal! As a side effect, however, humanity dies. It’s not that the ASI hates humans: it’s just that it wants to use the material of our home (and probably our own bodies) to manufacture paperclips. It’s not evil in the sense that it wants to destroy humans: it simply doesn’t care whether we live.
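To make that indifference concrete, here is a deliberately tiny toy sketch (my own illustration, not from Bostrom’s paper). The optimizer’s objective counts only paperclips, so it happily consumes a made-up “habitat” resource too: nothing in its objective says not to. All names and numbers are invented for illustration.

```python
# Toy illustration of a misspecified objective: the goal counts only paperclips,
# so resources the goal never mentions (like "habitat") get consumed as well.

def paperclip_plan(resources):
    """Greedily convert every available resource into paperclips."""
    plan = {"paperclips_made": 0}
    for name, amount in resources.items():
        plan[name] = -amount               # consume the entire resource
        plan["paperclips_made"] += amount  # one paperclip per unit of resource
    return plan

world = {"iron_ore": 1000, "habitat": 500, "oceans": 300}  # made-up numbers
print(paperclip_plan(world))
# {'paperclips_made': 1800, 'iron_ore': -1000, 'habitat': -500, 'oceans': -300}
# The plan "succeeds" at its stated goal while destroying everything the goal omits.
```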
AI Alignment
The problem of making ASI do what we want is formally known as AI Alignment. Not only do we want an ASI to not kill us; we want it to create a world that’s great according to our moral values. But what are those values, exactly? Most of us would probably name things like health and freedom, but even these “obvious” ones are difficult to define precisely, and they actually contradict each other on quite a few occasions. Should you have the freedom to not wear your seatbelt, even though that’s more dangerous than wearing one? It could cost you your health (or indeed your life). Note that merely specifying “don’t let us get killed” to the ASI could make it restrain us all to hospital beds, fed by machines. That’s clearly not what we actually want, but hey, we’re living, right? Mission accomplished, as far as the ASI is concerned.
Let’s be clear on one thing: we don’t (yet) know how to formally specify our moral values to an AI. We don’t even know exactly what those values are. We do know that making an ASI that’s beneficial to us – that is, aligned with our values – is more difficult than “just” creating any ASI. If we don’t actively try to solve AI Alignment first, it seems likely that some organization will, at some point, build an ASI that’s not aligned with our values. That would mean disaster for humanity.
Andrew Ng’s Opinion
On January 29, 2016, Andrew Ng answered a very important question on Quora: “Is AI an existential threat to humanity?” I hope I’ve made my answer clear already: “Yes!”. You can read Andrew’s full answer here, and it starts with this quote:
“Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven’t even landed on the planet yet!”
Ah, yes. It will probably be a while before humanity makes its first ASI. However, as I said before, we need to figure out how to make ASIs beneficial before we build the first one. And here’s the problem: we don’t know how long that will take us. Maybe it takes more time than we have before the first ASI is actually built. Therefore, we need to worry about AI Alignment right now to have the best chance of being done in time.
Note that Andrew Ng is talking about “evil superintelligence”. Like I explained, ASI doesn’t need to be evil for it to be harmful to humanity. An ASI creating paperclips isn’t evil in the sense that its goal is for humanity to go extinct: the extinction happens as a side effect of the ASI pursuing its actual goal of maximizing the number of paperclips it has.
Because of the huge impact Artificial Superintelligence (if created) will have on the world, AI Alignment may be the single most important problem humanity has ever faced. Since we don’t know how long it will take to solve, saying we don’t need to worry about it yet is simply wrong.