Moral Machines: Teaching Robots Right from Wrong

Tom P. Abeles (Editor, On the Horizon)

On the Horizon

ISSN: 1074-8121

Article publication date: 2 February 2010

Citation

Abeles, T.P. (2010), "Moral Machines: Teaching Robots Right from Wrong", On the Horizon, Vol. 18 No. 1, pp. 99-100. https://doi.org/10.1108/10748121011021065

Publisher

Emerald Group Publishing Limited

Copyright © 2010, Emerald Group Publishing Limited


Thirty years ago, the computing capacity of today's laptops would have filled a large room and still offered only limited capabilities. With ever‐increasing computing power, decreasing power consumption and shrinking size, ubiquitous computing is increasingly embraced by humans: it makes life‐determining decisions and operates a variety of autonomous mobile devices, from simple carts moving materials down office corridors to "battle bots" defusing mines and seeking enemy agents on the fields of war. Racks of computers create virtual worlds such as "Second Life," and they also inhabit the spaces linking the physical and virtual worlds, playing "international" banker, cop, companion or adversary.

Wallach and Allen focus on those devices that operate in the physical environment. These can be stationary, such as the sophisticated monitors in a hospital intensive care unit, or mobile, such as a weapons platform with varying degrees of independence. To approach the issue, the authors have created a surrogate: the "artificial moral agent" (AMA). In today's world, most of the capabilities of these AMAs do not yet exist, although some applications, such as mobility in complex situations, are already fairly common. Automated systems can, theoretically, take a plane off the runway of an international airport and land it across an ocean in zero visibility. They can also, like humans, make mistakes leading to fatal crashes, whether caused by failed sensors feeding false information or by conflicting inputs.

One of the conclusions reached by Wallach and Allen is that, even with all of their current and projected capabilities, computing systems are not fast enough to run through every logic tree when making a complex decision. Unlike IBM's chess‐playing "Deep Blue," the options are too many to trace every path and arrive at a "logical" conclusion. The issue is made clear by comparing the logic‐driven Vulcan, Mr Spock, of the series "Star Trek" with his human commander, Captain Kirk, whose decisions are based on "intuition," or what might be termed "emotional heuristics":

These emotional heuristics are “rules of thumb” that enable people to cut through complexity, frame issues, and make choices. Emotions thus play a central role in what Herbert Simon, one of the founding fathers of AI, called bounded rationality (p. 148).

Herein resides the authors' principal argument: the development of AMAs will require a multi‐ and cross‐disciplinary effort between engineers and those in the humanities and sciences concerned with issues of ethics and human values. While approximately half of Moral Machines: Teaching Robots Right from Wrong works carefully through the technical design problems that give AMAs functionality within the biophysical world, the core of the book is devoted to how these units can and should respond to the human condition.

In many ways, we are already close to the imagined capabilities of AMAs, including autonomous transport and life support systems, as well as the standard notions of "battle bots" and android servants. Some research has yielded primitive AMAs that can read human emotions and even display them in their own "facial" expressions and body movements. Many of these basic functions can now be seen in toys available on the market.

Relationships between humans and computers have become complicated. We see this in the early AI program ELIZA (which played the role of a psychotherapist), in soldiers becoming "attached" to their battle bots, and in computers controlling power grids. The question of whether humans should over‐ride the decisions made by computers highlights the issue: nuclear power plant control systems, airline autopilots and life support systems in hospitals are here today, acting autonomously to our benefit. Smart computers embedded in humans monitor and respond to changes in heart function and body chemistry, for example by dispensing insulin.

Within the next few years, we predict there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight … (this), we argue, requires the systems themselves to make moral decisions – to be programmed with "ethical subroutines" (p. 4).

Because the authors see these issues through “Western” eyes, their conclusions and concerns focus on the idea that AMAs will have “intention” and the equivalent of “free will,” which implies a form of consciousness. Thus, the issues raised by many science fiction writers and the speculation of philosophers and social scientists converge and move from the speculative into the possible.

What this "consciousness" might be, and whether it is identical to that of humans, is not addressed in the book. Neither do the authors tackle the argument of the "Singularity," where networks of computing clouds would be called upon to function autonomously from humans, even given the emotional heuristics imposed by the limits of bounded rationality. Rather, they argue that an interdisciplinary approach to computing will enable us to accomplish what has never been done before: the anticipation of, and positive intervention in, outcomes before they are actualized.

Emerging technologies are always easier to modify before they become entrenched. However, it is not often possible to predict accurately the impact … until well after it has been widely adopted (sic) (p. 6).

The authors carefully document the current and emerging state of AI technology and the issues that this emergence presents. What quietly gnaws in the background is the question of what all this will mean for humans. In many ways, the authors treat humans and human culture as constants – the variables held fixed in an experiment or analysis – while the AMAs are the object of study. Unfortunately, complexity theory says that all variables are in play, and that we need to consider not only the AMAs but also humans and the larger environment, which are changing at the same time. This calls for considerably more than either the enthusiastic and rapturous embrace of the "coming" of this AI or the neo‐luddite reaction against its emergence. Perhaps one of the more interesting speculations in this arena resides in the science fiction novels of Neal Stephenson, The Diamond Age and Snow Crash, whose insights have drawn creative individuals into teams working on applications of these AMAs.

We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time (T.S. Eliot).

The authors point out that for an AMA to be a fully capable ethical agent, it must have three elements: consciousness, intentionality and free will. They also carefully point out that we do not have answers either to the ontological question (what can these AMAs know?) or to the epistemological question of what we can know about these agents. The book is carefully framed to address their concerns, complete with a strong bibliography ranging from the technical to the philosophical and from theory to praxis.
