Pandora's box

Industrial Robot

ISSN: 0143-991x

Article publication date: 28 August 2007


Citation

Loughlin, C. (2007), "Pandora's box", Industrial Robot, Vol. 34 No. 5. https://doi.org/10.1108/ir.2007.04934eaa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2007, Emerald Group Publishing Limited


Pandora's box

There has recently been a lot of press coverage associated with the drafting of “Robot Laws”. The original laws are generally attributed to Isaac Asimov, although he himself credited John W. Campbell with their formalisation in 1940:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;

  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law; and

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov did, however, later add a fourth law (1985) which, because it is placed above the other three, is known as the Zeroth Law:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

He also appended “unless this would violate the Zeroth Law of Robotics” to the end of the First Law.

The recent interest in these laws has been sparked by the proclaimed intent of the South Korean Government, and separately the European Robotics Research Network (www.euron.org), to draw up ethical guidelines to prevent people from abusing robots and vice versa.

Asimov's books imaginatively test the various laws by placing robots in conflicting situations, but his robots all benefit from positronic brains that give them powers of thought well beyond our current capabilities.

Rather than get into a discussion about the detail of the laws themselves, I would like to throw in a few lateral thoughts of my own.

Notwithstanding our own rather doubtful moral high ground, given that human beings have repeatedly shown themselves to have little regard for the protection of human life, we first need to decide whether or not the robot will have the thought processes necessary to actually understand the laws. If it does not, then the laws apply not to the robot itself but to whoever controls it, which means us humans: the people who built and programmed it.

Anything else is nonsense. If a UniRobo 2000 that is welding up a car suddenly goes nuts and injures a person, then what do you do? Reprimand it severely and slam it in jail? It will not care and, what's more, it will not even know that it does not care.

On the other hand, suppose we have given the robot autonomy and a sufficiently advanced intelligence to know what it is doing, and to understand the laws to the extent that it would know it had broken them, and it then harms a human being. It is all still nonsense.

Various military robots such as the Foster-Miller Talon and the IAI/Lahav Guardium are capable of carrying and firing lethal weapons. At the moment they are teleoperated, which basically means that it is a person pulling the trigger. This in itself has opened up the possibility of real war being reduced to the moral involvement of a video game.

Taking this to the next stage, governments are currently and openly working on the development of unmanned autonomous vehicles, ostensibly for the delivery of supplies or medical aid. If these robots are given the power to defend themselves (anything else would seem illogical), then they would break Asimov's laws. They would not know they were breaking the laws, but to paraphrase existing human law, “ignorance of the law is no defence”.

So we might have a situation where an autonomous munitions delivery vehicle has opened fire while under attack and killed somebody. What do you do then? Have the UN impound the vehicle and subject it to a disciplinary tribunal and five years in a state penitentiary?

All of these crazy punishments have no effect whatsoever unless the robot also has feelings and desires the same freedoms and bill of rights that (most of) the rest of us take for granted. If it were to have this level of intelligence, then it would be hard to argue that we should not treat it as a human being. If we were to treat these robots as second-class citizens, then they would be nothing more than slaves, and in the UK we abolished the slave trade in 1807.

In my view we are nowhere near to creating a robot that has feelings and sufficiently developed thought processes to be able to make a conscious decision to follow the Laws, let alone decide to break them. Until then the buck stops with us: we alone are responsible for how robots interact with people. We cannot hide behind a set of laws and start blaming the technology we have created. And if we do ever have the means and the will to give robots this level of consciousness, then the buck still stops with us. You can delegate authority, but not ultimate responsibility.

Ethical guidelines are all well and good, but we must remember that in the final analysis we will be responsible for these robots' actions. In 1925 the Geneva Protocol laid down a ban on bacteriological warfare, and this was ratified by the UN in 1989. Nothing short of a UN resolution banning the creation of autonomous weapons is going to have any effect.

At the moment we do not have the technological key to open the Pandora's box of machine intelligence – we certainly have a lot to sort out before we do.

Clive Loughlin
