Superhuman AI Is Not a Myth


Most readers of Data Driven Investor will, I suspect, have heard of Artificial Superintelligence: a future Artificial Intelligence that is (much) more intelligent than even Albert Einstein or whoever your favorite genius is. Prominent thinkers like billionaire entrepreneur Elon Musk and the late physicist Stephen Hawking have warned us that such an AI might doom humanity. Although I am optimistic, I share these worries: by definition, a superintelligent AI will be very good at pursuing its goal, assuming it is programmed with one. Unless it is also programmed to be friendly towards humans (so-called Friendly AI), an Artificial Superintelligence might, in order to reach its goal, perform actions that hurt humanity, and those actions could even result in human extinction. On the other hand, an Artificial Superintelligence could also invent human immortality; it really depends on the circumstances. It is because of this huge potential impact, positive or negative, that I am concerned with the topic.

In an article called “The Myth of a Superhuman AI”, published in Wired, Kevin Kelly argues that such a takeover scenario by superhuman AI rests on assumptions for which there is no evidence. As I think Kelly is mistaken, I decided to write this post in response. Kelly lists a number of assumptions that, according to him, need to be true in order for a superintelligence to arise “soon”. Let’s discuss them one by one.

Artificial Intelligence is already getting smarter than us, at an exponential rate

Kevin Kelly states that the first assumption needed for the rise of superhuman AI is that AI is already getting smarter than us, and does so at an exponential rate. I’m not here to argue whether this assumption is true or false. What I can say is that it would be very hard to determine its truth, because it depends on many things: the exact definition of intelligence (domain-specific, like Chess, or general), the number of researchers in the past and present and the projected number in the future, and so on. Most people will agree that AI is getting smarter, though, and unexpected breakthroughs do happen. For example, AI in the form of AlphaZero is now far better than humans at Go, while in the past, humans could easily beat computers at this game. I can’t determine whether the progress is exponential; but given previous breakthroughs, it wouldn’t surprise me if AI makes another leap to human-level intelligence in the next ten years, and to superintelligence soon after.

We’ll make AIs into a general purpose intelligence, like our own.


I obviously agree with Kelly that this assumption is needed for the rise of superintelligence. I also think the assumption is true: we will build Artificial General Intelligence, AI that is as smart as humans across all intellectual domains. While it is true that the AIs built today are mostly Narrow AIs, designed for specific tasks only, the earlier example of AlphaZero (https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/) is already relatively general: it taught itself to play Go, Chess and Shogi. Continued progress in making AIs more general will lead to Artificial General Intelligence. Whether this will be “soon” depends, of course, on the definition of “soon”. The (monetary) rewards for building general AI will be huge, though, and tech companies like Google certainly realize this. I suspect that the more progress we make towards Artificial General Intelligence, the more money will be spent on its development, speeding up progress further.

We can make human intelligence in silicon.

Well, this is a controversial subject. We have certainly modeled or imitated a (large) number of parts of our intelligence, and I see no reason to assume this won’t happen for the rest: (most of) our intelligence happens in the brain, which consists entirely of neurons, groups of which have already been modeled. Whether intelligence in silicon will ever be exactly human is difficult to say, but it seems quite certain that our level of intelligence can be created in silicon. To say this can’t happen is to say there is something special about our brains, relative to silicon, that makes them impossible to model, and current evidence from neuroscience doesn’t suggest this is the case.
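To make “modeling neurons” concrete: below is a minimal sketch of the classic leaky integrate-and-fire model, one of the simplest ways a neuron’s behavior is simulated in software. It is a toy illustration with made-up parameter values, not a claim about how a brain would actually be emulated.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron: a deliberately simple model in
# which the membrane voltage leaks toward a resting value, is driven up
# by input current, and "spikes" when it crosses a threshold.
# All parameter values are illustrative, not biological fits.

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-70e-3,
                 v_reset=-70e-3, v_threshold=-50e-3, resistance=1e7):
    """Return the voltage trace and spike times for one LIF neuron."""
    v = v_rest
    voltages, spike_times = [], []
    for step, current in enumerate(input_current):
        # Voltage decays toward rest and is pushed up by the input current.
        dv = (-(v - v_rest) + resistance * current) / tau
        v += dv * dt
        if v >= v_threshold:            # threshold crossed: the neuron fires
            spike_times.append(step * dt)
            v = v_reset                 # voltage resets after each spike
        voltages.append(v)
    return np.array(voltages), spike_times

# Drive the neuron with a constant 2.5 nA current for 200 ms.
trace, spikes = simulate_lif(np.full(200, 2.5e-9))
print(f"{len(spikes)} spikes at t = {spikes}")
```

Simulating one such neuron is trivial; the open question is scale and fidelity, not whether neurons can be modeled at all.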

Intelligence can be expanded without limit.

This certainly doesn’t need to be true in order for a superintelligence to arise. All that is necessary is that it is possible to be more intelligent than humans. Given the small size and low signaling speed of our brains (signals travel along axons at roughly 100 m/s and neurons fire at most a few hundred times per second, while electronic signals travel at an appreciable fraction of the speed of light and transistors switch billions of times per second), it seems silly to think we humans have the highest possible level of intelligence.

Once we have exploding superintelligence it can solve most of our problems.


Well, it’s called superintelligence for a reason. Intelligence refers, at least in part, to one’s ability to solve problems, so a superintelligence will, by definition, be better at that than humans. Kelly defends his position by saying that solving problems takes more than intelligence: experiments need to be done. While this is certainly true, and will slow down the progress the AI makes, it doesn’t mean these problems won’t be solved. If a problem can in principle be solved, there is a level of intelligence high enough to solve it.

Kevin Kelly goes on to discuss five “heresies” that he thinks have more evidence in their support. Most of these I more or less agree with, or have discussed above; the first one, however, is still worth addressing.

Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

I agree that intelligence can be measured along multiple dimensions. This, however, in no way means “smarter than humans” is meaningless. If one AI plays only Chess and another plays only Go, it might be impossible to say which is smarter in general. However, a third AI that plays both Chess and Go better than either of the first two is definitely more intelligent than both of them. So even though intelligence is measured along multiple dimensions here (Chess and Go), it is still possible to compare levels of intelligence. One can even imagine a fourth AI that is better at Chess than the third AI and just as good at Go; this AI would be more intelligent still.
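This is just the familiar idea of dominance across multiple dimensions: an agent is strictly more intelligent than another if it is at least as good on every dimension and strictly better on at least one. A minimal sketch of that comparison (the agent names and skill scores below are made up for illustration):

```python
# Comparing intelligence along multiple dimensions via Pareto dominance.
# Profile `a` dominates profile `b` if it scores at least as high on
# every dimension and strictly higher on at least one.
# All names and scores are made up for this example.

def dominates(a: dict, b: dict) -> bool:
    dims = a.keys() | b.keys()
    at_least_as_good = all(a.get(d, 0) >= b.get(d, 0) for d in dims)
    strictly_better = any(a.get(d, 0) > b.get(d, 0) for d in dims)
    return at_least_as_good and strictly_better

ai_1 = {"chess": 90, "go": 0}    # plays only Chess
ai_2 = {"chess": 0, "go": 90}    # plays only Go
ai_3 = {"chess": 95, "go": 95}   # beats both at both games
ai_4 = {"chess": 99, "go": 95}   # better Chess than ai_3, equal Go

print(dominates(ai_1, ai_2))  # False: incomparable, neither dominates
print(dominates(ai_3, ai_1))  # True
print(dominates(ai_3, ai_2))  # True
print(dominates(ai_4, ai_3))  # True: strictly better on one dimension
```

“Smarter than humans” in this sense simply means dominating the human skill profile on every dimension that matters, which is perfectly meaningful even without a single scalar measure of intelligence.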

Conclusion

Superintelligent AI is coming. Whether it arrives in the next ten or fifty years (or even further in the future), it will change our society profoundly and permanently. Whether the impact will be positive or negative is up to us. The outcome could be anything from human extinction to human immortality, so we need to think about this carefully; and since we don’t know the timeframe, we should start thinking about it now.

Hein de Haan: As an Artificial Intelligence expert, I am concerned with the future of humanity, and I try to learn as much as possible about many different topics in order to have a positive impact on society.
