Researchers Want Robots To Empower Both Humanity and Themselves

Further developments in artificial intelligence open up new avenues for designing the artificial mind. One group of researchers is focusing on empowerment.

Recently, a group of researchers from the University of Hertfordshire presented a paper on how empowerment could be used as a guide for artificial intelligence behaviour. Their approach differs from others in how broad a design space it aims to cover. The main purpose of the research is to help create a model in which artificial minds primarily strive to benefit humanity, rather than the other way around.

Artificial intelligence could help everyone get along; it could help us understand and communicate with other humans more effectively. But it could also bring humanity's chapter in this universe to an abrupt end. That is why scientists across the world are trying to come up with a set of rules that keeps humans firmly in control. Even prominent scientific figures such as Stephen Hawking and the founder of Tesla, Elon Musk, have publicly expressed concern about the impact that artificial intelligence can and will have on human life.

Much of this criticism, however, is aimed at the currently established theories about how rules should be laid down for general artificial intelligence. For those not in the know, general artificial intelligence is the machine equivalent of a human mind: a self-aware system, which is quite different from the systems we have today and still hold in high regard when they win a game of Go. No disrespect to the game, which is highly complex, and the achievement is impressive by modern standards, but it is a far cry from making intelligent decisions while fully aware of one's surroundings. Still, we seem to be moving towards general intelligence at an alarming rate.

The Three Laws of Robotics

In their presentation, the scientists argue that having artificial intelligence base its decisions on empowerment principles could work in our favour. In essence, a robot guided by empowerment would always look for ways to empower itself, but it would also always maintain a primary directive of empowering the human beings around it.

Stephen Hawking has spoken several times about his concerns regarding AI. PHOTO CREDIT: LWP KOMMUNIKÁCIÓ

Empowerment is rather different from many other theories, many of which build on the Three Laws of Robotics, first laid out by science-fiction author Isaac Asimov in his 1942 short story "Runaround" and developed throughout his later fiction. The Three Laws are probably the most established paradigm for how artificial intelligence should relate to its surroundings, and even if you don't know exactly how they are worded, you'll probably find the three rules below pretty familiar.

“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

“We don’t want to be oppressively protected by robots to minimize any chance of harm, we want to live in a world where robots maintain our Empowerment” – Christoph Salge, University of Hertfordshire

Even if you haven't seen them written exactly like this, you've probably come across them in some other form. They've appeared in many works of fiction, in both literature and film. You may remember the 1986 science-fiction horror movie Aliens (if you don't, it's worth reading up on), in which an android aboard the ship adheres to similar rules. They've also appeared on The Simpsons. Though the rules were laid down with the best of intentions, much criticism has been raised about how easily they could be misinterpreted, especially since robots, thus far, lack the ability to interpret their meaning broadly enough to implement them properly.

Empowerment Is Complementary

To delve deeper into empowerment, and paint a clearer picture of how it differs from previously established rules, we will finish this piece with some details provided by the researchers. Put simply, the robots would keep their options open and seek out the situations in which they have the greatest potential influence on the world they can perceive. Speaking to Science Daily, Christoph Salge, one of the researchers, says, "So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement." He adds, "For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives." Don't worry, the last part is probably less worrisome than it sounds. We think.
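
The article doesn't spell out the underlying mathematics, but in the research literature empowerment is formalised as the channel capacity between an agent's actions and its later sensor states. In a deterministic toy world that boils down to counting how many distinct situations the agent can still reach, which captures the "keep your options open" idea above. The Python sketch below is purely illustrative; the gridworld, the function names, and the three-step horizon are assumptions for the example, not details from the paper.

```python
# Illustrative sketch only (not the researchers' code): in a deterministic
# gridworld, n-step empowerment reduces to log2 of the number of distinct
# states reachable within n steps.
from itertools import product
from math import log2

ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0), "stay": (0, 0)}

def step(state, action, walls, size):
    """Deterministic transition: move unless blocked by a wall or the border."""
    x, y = state
    dx, dy = ACTIONS[action]
    nxt = (x + dx, y + dy)
    if nxt in walls or not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
        return state  # bump: stay where you are
    return nxt

def empowerment(state, walls, size, n=3):
    """n-step empowerment in bits: log2 of the number of distinct end states
    over all n-step action sequences (valid for deterministic dynamics)."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a, walls, size)
        reachable.add(s)
    return log2(len(reachable))

walls = {(2, 2)}
print(empowerment((3, 3), walls, size=7))  # open area   -> more bits
print(empowerment((0, 0), walls, size=7))  # boxed corner -> fewer bits
```

The two print lines show the intuition Salge describes: an agent in the open has more reachable futures (higher empowerment) than one wedged into a corner, just as a robot stuck away from its power station has fewer options.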

Conceptually, the group developed empowerment back in 2005. But in a recent milestone, they expanded upon the concept to include the empowerment of a human being. Essentially, they consider it necessary for a robot to see the world through the eyes of humans, to understand us, and in turn know how to help us. “We don’t want to be oppressively protected by robots to minimize any chance of harm, we want to live in a world where robots maintain our Empowerment,” says Salge. Empowerment as a principle isn’t meant to replace other directives, according to Salge, “Ultimately, I think that Empowerment might form an important part of the overall ethical behaviour of robots,” he says.
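
One plausible way to picture "robots maintain our Empowerment" is a robot that scores its own candidate actions by how much empowerment they leave the human with, alongside its own. The sketch below reuses the `empowerment()` and `step()` helpers from the earlier example; the weighting scheme and the idea of treating the robot's position as an obstacle from the human's point of view are illustrative assumptions, not the formulation from the Hertfordshire paper.

```python
# Illustrative only: pick the robot action that best preserves the human's
# options, with the robot's own empowerment as a secondary concern.
def choose_action(robot_state, human_state, walls, size,
                  w_human=1.0, w_robot=0.5, n=3):
    """Return the robot action whose outcome maximises a weighted sum of the
    human's and the robot's n-step empowerment."""
    best_action, best_score = None, float("-inf")
    for action in ACTIONS:
        next_robot = step(robot_state, action, walls, size)
        # Assumption for the example: the robot's new position blocks the
        # human, so standing in a doorway visibly lowers the human's score.
        score = (w_human * empowerment(human_state, walls | {next_robot}, size, n)
                 + w_robot * empowerment(next_robot, walls, size, n))
        if score > best_score:
            best_action, best_score = action, score
    return best_action

print(choose_action((1, 1), (1, 2), walls={(2, 2)}, size=7))
```

Under this toy objective the robot avoids boxing the human in even when doing so would be convenient for itself, which is the gist of Salge's remark about protection without oppression.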
