Robot ethics

Industrial Robot

ISSN: 0143-991X

Article publication date: 27 April 2012


Citation

Virk, G.S. (2012), "Robot ethics", Industrial Robot, Vol. 39 No. 3. https://doi.org/10.1108/ir.2012.04939caa.002

Publisher: Emerald Group Publishing Limited

Copyright © 2012, Emerald Group Publishing Limited


Robot ethics

Article Type: Viewpoint. From: Industrial Robot: An International Journal, Volume 39, Issue 3

The development of technology has always been saddled with ethical issues, as most innovations can be used in a myriad of ways by individuals, some of which may be viewed as “inappropriate” by others. The ethical use of robots is becoming increasingly relevant as robotics evolves from its traditional, tightly controlled industrial manufacturing roots to the much wider provision of “services”. Questions are beginning to be asked as to which “services” are acceptable for robots to provide and which are not. In principle, such “robot ethics” have been with us since the Czech playwright Karel Čapek introduced the term “robot” in his 1921 play Rossum’s Universal Robots (R.U.R.), in which mass-produced robot androids used for labour in a seemingly utopian society eventually rebel and kill all humans. Isaac Asimov published his short story “Runaround” in 1942, in which he formulated the Three Laws of Robotics, with the primary objective that robots should not cause harm to humans, as a means of defining a “moral code” for robots. Such rules of morality have been used in many works of science fiction to explore a variety of futuristic utopian scenarios, but most end up with dystopian conclusions.

Although such works of fiction are interesting, they are quite far from reality. Real robots have been in use for manufacturing applications only since the early 1960s, when General Motors started to use Unimation’s Unimate robots on its production lines; this led to the development of a small but stable robot industry which has been running ever since. Not much changed over the following decades until the recent emergence of service robots, which opens the possibility of mass-market robot products; these are expected to be launched in 2013, when the ISO 13482 safety standard for personal care robots is published. As the market launch of these new types of robots approaches, interest in robot ethics is beginning to grow. Although a large variety of ethical questions are arising, the main issues are the following:

  • How can human safety be ensured in all robot applications?

  • How can the use of robots in “undesirable” applications and scenarios (to replace human labour, as sex devices, for military uses, and for creating super-humans) be prevented?

  • How can the dignity of the user (and of the robot?) be ensured in personal care robot applications? The rights of robots are receiving more and more attention, especially as robots improve in their cognitive capabilities and biological “components” are used to build them.

  • Should self-learning robots be able to develop their own (unlimited) capabilities as well as have self-evolving personalities?

Some aspects of these issues are clearly premature: robot capabilities are still quite primitive, and guaranteeing robots the same respect, rights and dignity as humans does not yet seem appropriate. Korea and Japan have already started activities on robot ethics, using Asimov’s Three Laws as the starting point; Korea is formulating a robot ethics charter and Japan has produced its ten principles of robot law along the lines of moral and legal rules and regulations. In a similar way, professional organisations such as the IEEE have created robot ethics committees and are running regular international events to debate the issues. Within Europe, the EC has funded a few projects on robot ethics (Ethicbots and ETICA), in which a roadmap for robot ethics has been formulated by focussing on the human ethics of robots’ designers, manufacturers and users.

The issues are also becoming important in the UK, where it has recently been decided to set up a UK Robot Ethics Group within the BSI national committee on robot standardisation so that a UK forum for this area can be created. The Group is chaired by Dr M.O. Tokhi (University of Sheffield, UK; o.tokhi@sheffield.ac.uk) and involves experts from engineering and the Consumer & Public Interest Network. The Group will soon start its detailed work within the UK and will link to the various international activities in this area. It is clear that the ethical issues will grow; although the initial task is to agree the rules and regulations needed for ensuring human safety in the new emerging applications, the focus will then turn to formulating socially acceptable criteria.

The problems of ageing societies are driving the international community to investigate deeply and develop the robotic assistive technologies needed; this is appropriate, since the demand for new mobility and cognitive aids is expected to be huge as elderly persons seek to maintain a good quality of life. However, as already stated, the same technologies can be used in ways that some people would regard as unethical; for example, using a robotic system to replace human contact will be widely seen as a negative step, and one that we should not allow. The question here is, “Where does the responsibility for the ethical decisions lie?” Is it with the robot designer, the robot manufacturer, the robot user, or society as a whole? Even in this simple example, at an individual level, an elderly person living alone may see a robotic companion as the only social interaction available to him or her, and may be willing to buy such a device if it were available; is it fair, then, for society to stop such innovations being developed and made available to individuals? On the other hand, we have to ask whether all innovations should be driven purely by such market and supply forces. They should not, since society has a variety of rules and regulations with which robot products must comply. It would therefore not be appropriate, for example, to develop criminal robots designed to carry out illegal activities on people’s behalf.

Clearly there are no simple answers to these complex cases; only society as a whole can set the boundaries, regulated by law, that define what can and cannot ethically be done in designing and using the new emerging robots. For this to happen, much discussion and debate have to take place so that consensus guidelines can emerge; we are only at the starting line, and the key capabilities in robot artificial intelligence and autonomy are likely to arise over the coming decades. As this happens, the ethical issues will grow and become even more complex. We must therefore keep a close eye on this area and make sure that robot advances are not hindered by society’s normal precautionary principles but, at the same time, that truly unacceptable applications of robots are not allowed to spread.

Gurvinder S. Virk, University of Gävle, Sweden, and CLAWAR Association Limited, UK

About the author

Gurvinder S. Virk is Professor of Robotics and the Built Environment at the University of Gävle, Sweden (gurvinder.virk@hig.se) and Chairman of CLAWAR Association Limited, UK (gsvirk@clawar.org).
