In the Mind of the Machine: The Breakthrough in Artificial Intelligence

Kybernetes

ISSN: 0368-492X

Article publication date: 1 February 1999

Citation

Andrew, A.M. (1999), "In the Mind of the Machine: The Breakthrough in Artificial Intelligence", Kybernetes, Vol. 28 No. 1, pp. 102-105. https://doi.org/10.1108/k.1999.28.1.102.1

Publisher

Emerald Group Publishing Limited


This is a slightly revised version of the author’s earlier March of the Machines, published by Century Books (also Random House). Kevin Warwick is Professor of Cybernetics at the University of Reading, and his views, as well as the practical demonstrations of robots developed under his guidance, have received much publicity in the press and on BBC television and radio. As related in the book, the robot demonstrations have aroused wide international interest.

The central message is that complex information‐processing machines, with a claim to be termed “intelligent”, have reached a stage of development at which they may very soon surpass human intelligence, and since this new form of intelligence is unlikely to remain uniformly benign towards humans, this could spell the end of humans as the dominant species. In his second chapter, the author paints a fearsome “worst‐case” scenario, in which machines come to exploit humans essentially as humans have exploited farm and draught animals. In a later chapter he drives the point home even more strongly by suggesting that machines could come to hunt humans for sport and might stage gladiatorial combats.

The presentation is easily readable, in a chatty and entertaining style, free of mathematics and jargon. The topic invokes long‐debated philosophical issues, and there are quotations from various classical writers, but on the whole the treatment is informal, with a common‐sense “homespun philosophy” character. I found myself generally sympathetic to the approach, as probably will most people in the AI and computing field, though in a number of places the author seems to promise some profound revelation and then lets it fall rather flat. For example, he promises to deal with the thorny matter of consciousness in Chapter six and then has rather little to say about it. The possible involvement of quantum theory is mentioned, but only as a means of describing neural events in finer detail than at the cellular level; the hypothesised connection usually indicated by reference to Schroedinger’s cat is not mentioned.

One of the long‐debated philosophical issues is the relationship of machine intelligence to the natural variety, and the extent to which the former can simulate the latter. Kevin Warwick ducks that one nicely by accepting that the two kinds of intelligence may continue to differ, but he insists that this gives no grounds for refusing to see the machine variety as powerful and threatening. He also claims that machines might have forms of consciousness and emotions that need not correspond to the human varieties. The threat to humans will come from machines that are given, or evolve, a goal of self‐preservation, and it is easy to imagine such a system displaying, in a threatening situation, a flurry of activity that could be seen as a manifestation of the emotion of fear.

The terms “machine” and “robot” are used interchangeably, but it is made clear that neither is meant to imply restriction of attention to autonomous, self‐contained, probably humanoid agents. They are meant to include, for example, computer networks dealing directly on the stock market, and it is from systems of this sort that the greatest threat may come, since their operation is goal‐directed, is exempt from human checks for the sake of speed, and could easily come to include self‐preservation among its goals. Other machines particularly likely to have a built‐in regard for self‐preservation are the various sophisticated autonomous weapon systems, of which cruise missiles are the best‐known example.

Where “intelligence” is viewed as broadly as it is here, it is difficult to find a basis for comparison. In most of the treatment, Kevin Warwick uses a criterion that has a flavour of tautology in the context, since he equates intelligence with the power to dominate. (In an analogous way, much of educational psychology makes sense if intelligence is defined as whatever it is that intelligence tests measure.) He is not totally consistent in his view of intelligence, and clearly assumes something rather different when he refers to “intelligent” interactions between humans and machines, and when he draws encouragement from the fact that the demonstration robots built in his department in Reading appeal to human viewers as behaving in a life‐like way.

A rather curious inconsistency is that, although he is prepared to accept that the consciousness and emotions of machines are likely to be different from those of humans, he nevertheless lists musical composition and performance as areas in which machines may show creativity. If it is assumed that the music, or other aesthetic creation, will be judged by humans, this seems to imply that machines and humans have similar aesthetic preferences, a conclusion he would presumably not wish to defend. So long as aesthetic value is judged by human response, aesthetic creativity must always be one area where humans have an advantage over machines. There is of course no reason why machines should not start to produce alternative versions of music and other art forms that are pleasing to them.

Despite these criticisms of details, there is a message here that merits careful attention. This is far from being the first time the human race has been threatened by its own creations; to take a simple example, a variety of adverse influences on the environment have been attributed to the “tyranny of the motor car”. The new threat is significantly different in character, in that it comes from machines that can be visualised as planning their campaign, and as continuing to operate independently of their human creators. Cars, on the other hand, although they frequently eliminate people, do not apparently premeditate such acts, and their own ultimate fate remains linked to ours.

A good deal of the book is devoted to an account of robot research in Reading. Some very good and ingenious work to aid handicapped subjects is briefly described, but the main focus is on groups of mobile robots referred to as the Seven Dwarfs. Several generations of these have been constructed, and they show, in elementary ways, interaction with the environment: avoiding obstacles, seeking a recharging station when necessary, and improving their operation by learning. Later versions allow simple communication between robots, and hence interesting group behaviour, as well as automatic improvement over a succession of generations as an analogue of natural evolution.

It is of course acknowledged that there is an enormous gulf between the capabilities of these robots and the complex machines that may challenge humans. At the same time, there is a good case for believing that certain insights are more likely to come from the building of physical gadgets than from alternatives that are perhaps more trendy, such as simulation of robots in computer programs. It is all too easy to make unrealistic assumptions in constructing simulations, for example, about the uniformity of objects and conditions encountered. Primitive though they are, the Seven Dwarfs give some feeling for the evolutionary processes that could possibly allow machines to pose a threat of takeover.

Although the warning in the book has to be taken seriously, it is impossible not to feel that the author underestimates the disparity between the flexibility and power of biological adaptation, and anything comparable yet shown by machines. Certainly, machines can show non‐trivial learning behaviour, and impressive claims have been made for results of simulated evolution as a facet of “Artificial Life”, but the relevance of these studies to real‐world evolution is questionable. For one thing, the assumed environments for simulated evolution must be even more tenuously related to real‐world conditions than are those of the robot simulations suggested as alternatives to the Seven Dwarfs.

The matter of comparing machine and human intelligence is enormously complex and is certainly not settled by the simple observation that the brain has little opportunity for expansion, whereas machines, and machine networks, can expand indefinitely. The relevant comparison, of course, is not between a machine and an isolated brain but between a breakaway machine faction on the one hand, and human brains in collaboration with loyal or enslaved machines on the other. Using this comparison, machine takeover appears rather less likely.

A worrying further reflection, though, is that humans are well able to persist in a disastrous course even when nominally in control. Intelligent machines can accelerate our lemming‐like behaviour, though, applied differently, they could provide some partial solutions. It is easy to visualise several kinds of situation, short of a takeover, in which the consequences of involvement with machines could be serious.

One type of situation is, of course, that in which machines are enlisted in human conflicts (in which case, Asimov’s proposed laws of robot behaviour are abrogated from the start). Another is that in which the activity of machines has undesirable side‐effects, but the short‐term benefits give an incentive to play these down. The alleged “tyranny of the motor car” is an obvious example and there are many others, some associated with industry and others with agriculture. Another danger is that we may allow ourselves to become over‐dependent on apparently reliable machines, and then encounter an unforeseen “bug”. The predicted breakdown of financial and other systems with the “millennium bug” is a prime example.

My own feeling is that Kevin Warwick is jumping the gun in visualising an imminent machine takeover as a deliberate orchestrated event. On the other hand, there are ways that the participation of machine intelligence could speed us towards any of a variety of disaster situations, and this vigorous review is timely.

Although humans are the dominant species on earth, at least as seen through human eyes, life forms interact and it can be argued that we only inhabit the earth by courtesy of other creatures, including algae and microorganisms, that operate to keep ambient conditions within acceptable limits. This is the “Gaia hypothesis” put forward by James Lovelock (1979) with a good deal of supporting evidence. A consideration that increases the likelihood of machine takeover is that the various nasty things we do to the environment interfere with this regulation and could result in the earth’s surface becoming, like outer space, better suited to machines than to human habitation. Following a takeover it would be in the machines’ interests to restore regulation, and it is interesting to speculate whether they would act to return the biological Gaia to healthy operation or would set up alternative systems using self‐replicating micromachines.

This, however, is carrying speculation well beyond the already highly‐speculative content of Kevin Warwick’s book. The book is significant and thought‐provoking as well as making good reading.

Reference

Lovelock, J.E. (1979), Gaia: A New Look at Life on Earth, Oxford University Press, Oxford.
