The AI Stop Button Problem

A Twitter war rages between Facebook founder Mark Zuckerberg and entrepreneur Elon Musk over the future impact of AI on human civilization. Which one is right when it comes to super-intelligent AI?

Elon Musk has previously voiced his fears of AI becoming too intelligent, and one of his supporters is none other than the eminent Professor Stephen Hawking. Zuckerberg, on the other hand, said in a recent Facebook Live event that Musk is crying wolf over the whole situation, accusing Musk of being a “naysayer” who holds back the whole industry.

Non-tech media outlets have likened the quarrel to celebrity feuds in the vein of Kanye West’s, without showing much understanding of the matter at hand. So let us break down why someone like Elon Musk would stand before the National Governors Association and ask them to thoroughly regulate AI.

AI is already all around us in one form or another: in self-driving cars, in airplane autopilots, in the worst nightmares of Go champions, and so on. However, this is not the type of AI Elon Musk is referring to. He means the prospect of a self-thinking machine, able to come up with its own solutions to problems; a machine that becomes self-aware, capable of language and of understanding abstract concepts. The systems mentioned above are fed massive amounts of data and can draw parallels from it, but they are not self-aware as of yet. Many believe, however, that this kind of AI will inevitably become a reality.

The problems with intelligent machines are illustrated in a truly pedagogical way in the video below by Computerphile.

In summary

A self-aware AI needs to have a stop button in case it is doing something we don’t want it to. This creates a series of problems that we must overcome.

If the robot is about to kill a baby on its way to the goal you set for it, you want to push the button. However, the robot doesn’t want you to push the button, as the reward for the button being pushed is zero, whereas the original goal of, say, fetching a cup of tea is valued higher. In this scenario, the robot will try to stop you from pushing the button, since there would be no reward if it were shut down.

Should you instead assign an equal reward to the stop button being pushed, the robot might push the button itself instead of performing its assigned goal. You might then place the button where the robot cannot reach it, but it could try to trick you into pushing it, by running over a baby or doing other things it knows you don’t want it to, and this is where it becomes really dangerous.
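To make the incentive problem concrete, here is a minimal Python sketch of a reward-maximizing agent weighing its options. The plan names, rewards, and effort costs are all invented for illustration; no real agent scores plans this crudely.

```python
# Toy model of the stop-button dilemma: the agent picks whichever plan
# has the highest reward net of effort. All numbers are made up.

def preferred_plan(tea_reward, button_reward, tea_effort=3, button_effort=0):
    """Return the plan that scores highest: reward minus effort cost."""
    plans = {
        "fetch the tea (and block anyone reaching for the button)":
            tea_reward - tea_effort,
        "press the stop button (it is right there)":
            button_reward - button_effort,
    }
    return max(plans, key=plans.get)

# Button worth nothing: the agent resists shutdown to protect its reward.
print(preferred_plan(tea_reward=10, button_reward=0))

# Button worth as much as the tea: the effortless button press now wins,
# so the agent shuts itself down instead of running the errand.
print(preferred_plan(tea_reward=10, button_reward=10))
```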

Even if you made the robot unaware of the stop button’s existence, it would eventually put two and two together; it’s a super-intelligent AI, remember? Once it knows about the stop button, the problems above would simply repeat themselves. Finally, you could try to assign a slightly lower reward to the stop button than to the goal itself, but if you’ve programmed the robot to take the shortest route to solving a problem, it would still press the button.
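The same toy arithmetic shows why the slightly-lower-reward fix fails: once effort is accounted for, the cheap button press can still net more than the errand. The numbers below are again purely illustrative.

```python
# Net value = reward minus effort cost (illustrative numbers only).
tea_net = 10 - 3      # full reward, but the errand costs effort
button_net = 9 - 0    # slightly lower reward, but pressing is free
print("press the button" if button_net > tea_net else "fetch the tea")
# -> "press the button": the shortest route to reward wins again
```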

Stephen Hawking & Elon Musk
Stephen Hawking (left) & Elon Musk are both warning us about the implications of super-intelligent AI

The greatest minds on the planet are trying to find solutions to these highly philosophical, yet relevant, problems before someone is actually hurt by an AI. As Stephen Hawking put it, “It would take off on its own, and re-design itself at an ever-increasing rate,” implying that we humans, hindered by our slow biological evolution, couldn’t hope to compete.

Elon Musk proposes that we begin regulating AI research as soon as possible, and perhaps build a way around the stop-button problem directly into AI before it’s too late. One such approach is to tell the machine that a human is the only source of knowledge about what constitutes a reward, and that the machine itself cannot know the answer. This is a novel way to approach the problem, but it is not a perfect solution. Musk is also reportedly working on a human-machine interface system called Neuralink, arguing that instead of making machines smarter, we should empower humans. That would dissolve the problem of an all-powerful general AI altogether, not unlike the theories of Empowerment put forward by University of Hertfordshire researchers that we wrote about earlier.
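One way to picture the “only the human knows the reward” idea is an agent that stays uncertain about whether its goal is actually good, and treats a human reaching for the button as evidence that it is not. The sketch below is a deliberately crude caricature with invented numbers, not a faithful implementation of any published method.

```python
# Hedged sketch: the agent's willingness to continue depends on its belief
# that the goal is good. A human going for the stop button lowers that
# belief, so shutting down starts to look like the better option.

def expected_value(p_goal_good, value_if_good=10, value_if_bad=-10):
    """Expected value of continuing, given belief the goal is good."""
    return p_goal_good * value_if_good + (1 - p_goal_good) * value_if_bad

belief = 0.9
print(expected_value(belief))   # 8.0 -> confident, keep working

belief = 0.2                    # human reaches for the button
print(expected_value(belief))   # -6.0 -> allowing shutdown now looks better
```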

What do you guys think about AI? Will humans inevitably be destroyed by our own creations? Let us know with a comment below!
