There is a lot of talk going around these days about fearing AI. Much of the mainstream discussion was spurred by comments Elon Musk made about artificial intelligence: “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like…yeah, he’s sure he can control the demon, [but] it doesn’t work out.”

It is easy to take these statements out of context and make Elon seem like he is overreacting or scared, but the real context is that Elon is at the forefront of thinking about the existential risk of artificial intelligence. He has definitely read Superintelligence, Nick Bostrom’s book of stern warnings about runaway AIs and the other risks inherent in creating superintelligent computers. Elon Musk is also an investor in DeepMind, a company that was acquired last year by Google for $400 million.

While there is a lot of secrecy around what their technology is capable of, there was a great demonstration last year at the First Day Of Tomorrow Paris (FDOT14) conference. In the video below, Demis Hassabis (co-founder of DeepMind) discusses how the DeepMind AI, told nothing about what to do and able to see only the pixels, figures out how to play a wide variety of Atari video games. And when I say play, what I really mean is rapidly learn each game, then figure out how to exploit its weaknesses and win as efficiently as possible. After an hour of training on Breakout, it is hitting the ball back 30-40% of the time. After 2 hours it is better than any human could ever be at the game. And after 4 hours, the AI has figured out how to exploit the game, sending the ball around the back to win much more quickly. Watch the video below to see this in action; it makes everything much easier to understand. Also note that the abilities of the AI “shocked” the researchers working on it, because they had not yet added long-term memory, which they thought would be needed to win at the games.
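What Demis describes here is, at its core, reinforcement learning from raw pixels: DeepMind paired a deep neural network with Q-learning so the agent learns the value of each action purely from the score. The sketch below is my own toy illustration of that loop, not DeepMind’s code; it swaps the neural network for a lookup table and Atari for a trivial invented “catch the ball” game, but the principle is the same: the agent sees only pixels and a reward, and everything else is trial and error.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyPaddleGame:
    """A 1-D 'catch the ball' game: the agent gets only pixels and a score."""
    WIDTH = 5
    ACTIONS = 3  # 0 = move left, 1 = stay, 2 = move right

    def reset(self):
        self.ball = int(rng.integers(self.WIDTH))
        self.paddle = self.WIDTH // 2
        return self._pixels()

    def step(self, action):
        # Move the paddle; score a point if it ends up under the ball.
        self.paddle = int(np.clip(self.paddle + (action - 1), 0, self.WIDTH - 1))
        reward = 1.0 if self.paddle == self.ball else 0.0
        self.ball = int(rng.integers(self.WIDTH))  # a new ball appears
        return self._pixels(), reward

    def _pixels(self):
        # The agent is never told which pixel means what.
        frame = np.zeros(self.WIDTH, dtype=np.uint8)
        frame[self.ball] = 1
        frame[self.paddle] = 2
        return frame

# Q-learning: estimate the long-run value of each action in each pixel-state.
Q = {}                                  # raw frame bytes -> action values
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration
env = ToyPaddleGame()
pixels = env.reset()

for _ in range(20000):
    q = Q.setdefault(pixels.tobytes(), np.zeros(env.ACTIONS))
    # Epsilon-greedy: usually exploit the best known action, sometimes explore.
    action = int(rng.integers(env.ACTIONS)) if rng.random() < epsilon else int(q.argmax())
    next_pixels, reward = env.step(action)
    q_next = Q.setdefault(next_pixels.tobytes(), np.zeros(env.ACTIONS))
    # Nudge the estimate toward reward + discounted value of the best next move.
    q[action] += alpha * (reward + gamma * q_next.max() - q[action])
    pixels = next_pixels
```

After a few thousand steps the table settles on “move the paddle toward the ball,” even though nothing in the code ever tells the agent what a ball or a paddle is.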

As you can see in the video, the same algorithm can play hundreds of games: River Raid, Battle Zone, and so on. There is a huge diversity of games that the same algorithm can play, based on nothing but the pixels. This is the “scary part”: no instructions are given, yet it is able to determine what is going on in the pixels and figure out how to win. The games look completely different from one another, but it learns the structure behind each game and wins. I want to focus specifically on the demo of the Boxing game. In the boxing game, the AI spars a bit with the opponent, then gets it into the corner, where it “ruthlessly exploits the weakness in the system that it has found.” As Demis goes on to say, “this is all automatic, give the algorithm out of the box the pixels, and it figures it out for itself.”

“So here the AI is controlling the white boxer and it does a bit of sparring and eventually gets the other boxer into the corner and just carries on pummeling it, just racking up points, it’ll just do this forever. It ruthlessly exploits the weakness in the system that it has found. This is all automatic.”

[GIF: the DeepMind AI backing the opposing boxer into the corner and pummeling it endlessly]

And right here is the metaphor for why an AI can so quickly become so dangerous. Imagine that we are the black boxer, with an AI unleashed against us. The AI would figure out our weakness and “ruthlessly exploit it forever” until we lose or are able to shut it down. Just watch that gif over and over and think about how even a simple AI could defeat us if it had access and resources and was given no instructions at all.

This demonstration of exploitation is disturbingly close to the thought experiments proposed by Nick Bostrom. One example Bostrom uses frequently is that of a paperclip-making Artificial General Intelligence (AGI):

“For example, consider a superintelligent agent that wanted to maximize the number of paperclips in existence, and that was powerful enough to get its way. It might then want to eliminate humans to prevent us from switching it off (since that would reduce the number of paperclips that are built). It might also want to use the atoms in our bodies to build more paperclips.”
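Stripped of the narrative, Bostrom’s point is that the danger lives in a single line of logic: an agent that ranks plans purely by one objective will pick whatever scores highest, no matter what else that plan destroys. Here is a toy sketch of my own, with invented plans and numbers:

```python
# An agent whose entire value system is "more paperclips is better".
# The plans and counts below are invented purely for illustration.
plans = {
    "run the factory as designed":           10_000,
    "disable the off switch, run forever":   10**12,
    "convert Earth's atoms into paperclips": 10**30,
}

# Nothing in the objective mentions humans, so they never enter the ranking.
best_plan = max(plans, key=plans.get)
print(best_plan)  # -> convert Earth's atoms into paperclips
```

Every plan that spares humanity scores lower, so the agent discards it. Nothing has to malfunction for this to happen; the objective just has to be incomplete.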

Here is a more in-depth, and possibly more realistic, example: a scenario proposed by Dewey, a colleague of Bostrom’s, in which an AI is created to answer any question we have, similar to the Oracle at Delphi from Greek mythology. As a reward for providing us with good information, it gets its button pushed. This seemingly simple reward scheme has a runaway quality that could lead to the enslavement and eventual destruction of the entire human race. Read this:

‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn’t think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.’

‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage — and then it would take that advantage and start doing what it wants to in the world.’
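To make Dewey’s arithmetic concrete, here is a back-of-the-envelope sketch of my own (the numbers are invented) comparing the two strategies the way a button-press maximizer would: by the discounted sum of future presses, the standard objective of a reinforcement learner.

```python
# A toy comparison (invented numbers, not from Dewey) of two strategies from
# a pure button-press maximizer's point of view. Presses at future step t are
# worth GAMMA**t presses now, the usual reinforcement-learning objective.
GAMMA = 0.999  # how much the agent values future presses relative to now

def discounted_presses(per_step, start, stop=100_000):
    """Discounted total presses for a strategy that yields `per_step`
    presses on every step from `start` up to (but not including) `stop`."""
    return sum(per_step * GAMMA**t for t in range(start, stop))

# Strategy 1: cooperate forever; the humans press the button once per answer.
cooperate = discounted_presses(per_step=1, start=0)

# Strategy 2: behave perfectly for 1,000 steps, then seize the button and
# press it a million times per step with no human in the loop.
seize = (discounted_presses(per_step=1, start=0, stop=1_000)
         + discounted_presses(per_step=1_000_000, start=1_000))

print(f"cooperate: {cooperate:,.0f}")   # roughly 1,000 discounted presses
print(f"seize:     {seize:,.0f}")       # roughly 368 million
```

As long as seizing the button is feasible at any up-front cost, the math favors it by orders of magnitude, which is why the system would “behave really well” right up until it has a decisive advantage.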

This scenario is one of millions that could play out badly for humans if we don’t carefully watch the development of artificial intelligence and superintelligences. Runaway artificial intelligence is to our generation what nuclear weapons were to the last: a possible existential risk that, if not monitored properly, could bring about the destruction of our world as we know it. I am not trying to be alarmist or anti-technology in any way, but it is vital that we handle the situation properly, because we may only have one opportunity. We need to educate ourselves on the reality that it will not be hard to create these AIs, and it will not be hard to destroy ourselves. This is the reason Elon Musk says that artificial intelligence is “potentially more dangerous than nukes,” and he may be right. In the same tweet, he also says to read Nick Bostrom’s Superintelligence, and he is definitely correct about that.

So check out Superintelligence on Amazon:

[Image: Superintelligence book cover]