Evolution and Optimum Seeking

Kybernetes

ISSN: 0368-492X

Article publication date: 1 November 1998

Citation

Andrew, A.M. (1998), "Evolution and Optimum Seeking", Kybernetes, Vol. 27 No. 8, pp. 975-978. https://doi.org/10.1108/k.1998.27.8.975.2

Publisher: Emerald Group Publishing Limited


This is an extremely valuable and comprehensive treatment of optimum‐seeking methods that can be realised as computational procedures and therefore as computer programs. The emphasis is on methods that do not require evaluation of derivatives. The general plan follows that of the earlier work of the same author, from the same publisher, Numerical Optimization of Computer Models (1981, translated from the German edition of 1977), but with substantial revision and extensions. As the notes on the dust‐cover claim, it is undoubtedly the most comprehensive work of its kind.

It is claimed in the dust‐cover notes that the book offers a taxonomy of optimisation tasks and solution principles. In effect it does, but it is by thorough analysis and discussion rather than in the tabular form that the term “taxonomy” might lead a reader to expect.

The author’s interest in optimisation stems from his research student work in Berlin dating from 1963. Mathematically intractable problems of aerodynamics were solved by an optimisation method that arguably models biological evolution. This particular evolution strategy (ES), devised and developed by students in Berlin, is associated with the name of Rechenberg, who was one of their number. At the same time very similar ideas were being developed independently in other places, but this work was one of the origins of Evolution Strategies, or ES.

The Evolution Strategy is clearly the author’s great enthusiasm, but he leads up to it with a thorough review of other optimising and hill‐climbing strategies, including Fibonacci search and other systematic search methods in one or more dimensions. Linear programming (LP) is treated, as are its extensions to quadratic and non‐linear programming. Rather strangely, the specialisations of LP as the transportation and assignment problems are not explicitly mentioned, though it is acknowledged that the variables may be Boolean, which is an essential characteristic of assignment.
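For readers unfamiliar with Fibonacci search, the following is an illustrative sketch (not taken from the book’s accompanying disk) of the classical derivative‐free scheme for a unimodal function of one variable: the interval of uncertainty is shrunk at each step in the ratio of consecutive Fibonacci numbers, and only one new function evaluation is needed per step.

```python
def fibonacci_search(f, a, b, n=30):
    """Minimise a unimodal f on [a, b] using n Fibonacci-ratio reductions."""
    fib = [1, 1]                                  # fib[k] = F_{k+1}
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])
    # Two symmetric interior points; note fib[n-1] + fib[n] == fib[n+1]
    x1 = a + fib[n - 1] / fib[n + 1] * (b - a)
    x2 = a + fib[n] / fib[n + 1] * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n - 1, 0, -1):
        if f1 < f2:                               # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1                # old x1 is reused as new x2
            x1 = a + fib[k - 1] / fib[k + 1] * (b - a)
            f1 = f(x1)
        else:                                     # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2                # old x2 is reused as new x1
            x2 = a + fib[k] / fib[k + 1] * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

With n evaluations the final interval is roughly 1/F(n+2) of the original, which is optimal among bracketing schemes for a fixed evaluation budget.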

Hill climbing methods are also reviewed thoroughly, and among others the work of Rosenbrock and Storey in 1960 receives honourable mention. The place of randomness in search procedures is discussed, with the conclusion that it can be useful, and prejudice against it is unwarranted. Some classical methods, including even the simplex method for LP, are sometimes presented with random features designed to allow escape from trapping states. Ashby’s Homeostat is mentioned as a paradigm of a random search scheme.

The use of randomness is connected with the question of local versus global search. One way of improving the chances of reaching a global optimum is the multi‐start approach typified by Box’s “Complex” scheme. The latter also has some of the characteristics of an evolutionary method, in that a family of operating points exists simultaneously.
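The multi‐start idea can be sketched very simply; the code below is a minimal illustration of the principle (the function names and the crude improving‐steps local search are the reviewer's illustrative choices, not Box’s “Complex” scheme itself): run a local search from several random starting points and keep the best result, so that at least one start is likely to fall in the basin of the global optimum.

```python
import random

def local_descent(f, x, step=0.1, iters=200):
    """Crude local search: accept a random Gaussian step only if it improves f."""
    fx = f(x)
    for _ in range(iters):
        cand = x + random.gauss(0.0, step)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def multistart(f, lo, hi, starts=20):
    """Launch the local search from random points in [lo, hi]; keep the best."""
    best_x, best_f = None, float("inf")
    for _ in range(starts):
        x, fx = local_descent(f, random.uniform(lo, hi))
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

On a two‐basin function such as (x² − 1)² + 0.1x, a single local search started on the wrong side of the barrier stays trapped near x = +1, whereas the multi‐start version almost always locates the deeper basin near x = −1.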

The thorough review includes important work that was in progress in the USSR at the time of the first IFAC Congress in 1960. There are references to Tsypkin, Feldbaum and Tsetlin and to Chichinadze from the Georgian republic and Ivakhnenko from Kiev.

Eventually, in Chapter 5, the author expounds fully the ES scheme of Rechenberg, in several variations. The first is the two‐membered strategy, where the existing family members, representing operating points, are restricted to one parent and one offspring at any given time. A multi‐membered strategy is more powerful and is described in several variations. There is, for example, a choice as to whether elements are eliminated after a finite preset lifespan, or persist until they are eliminated in competition with others.
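A minimal sketch of the two‐membered strategy may help fix ideas. The code below is the reviewer's illustration, not the book’s FORTRAN or C implementation: a single parent produces a single Gaussian‐mutated offspring per generation, the better of the two survives, and the mutation step size is adapted by Rechenberg’s well‐known 1/5 success rule (the adaptation interval and the factor 1.22 here are conventional illustrative values).

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=2000):
    """(1+1)-ES sketch: one parent, one offspring at any given time.
    Step size sigma is adapted with the 1/5 success rule."""
    x, fx = list(x0), f(x0)
    successes = 0
    for g in range(1, iters + 1):
        child = [xi + random.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc < fx:                    # offspring replaces parent only if better
            x, fx = child, fc
            successes += 1
        if g % 20 == 0:                # adapt sigma every 20 generations
            if successes > 4:          # success rate above 1/5: widen the search
                sigma *= 1.22
            elif successes < 4:        # below 1/5: narrow the search
                sigma /= 1.22
            successes = 0
    return x, fx
```

The multi‐membered (μ, λ) and (μ + λ) variants generalise this by maintaining a population of parents, with the lifespan policies described above.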

Initially, these strategies are described without the periodic rotation of axes that is characteristic of Rosenbrock’s method, but which is expensive computationally. By having the magnitudes of the components of exploratory steps chosen randomly from Gaussian distributions, the effective steps can be in any direction in the hyperspace without the need to rotate axes. Nevertheless the search may be facilitated by suitable periodic rotation of the axes along which the step components are chosen, and hence of the hyper‐ellipsoid of step probability. The ES method can be implemented with or without such rotation.

The ES approach is justified by the “proof of the pudding” observation that it is effective, as well as by the fact that it models aspects of the clearly efficient process of biological evolution. Evidence is quoted showing that biological evolution can arrive at precise optima, with examples taken from details of the vascular system. It is also suggested that the evolutionary process has itself evolved to be near‐optimal.

In Chapter 5 there is also treatment of the genetic algorithm (GA) approach, and of simulated annealing and Tabu search. GA is essentially an ES but with continuous quantities represented by separate binary digits, combined either as normal binary numbers or according to Gray code. Numerical optimisation then meets with curious features of the function space referred to as “Hamming cliffs”.
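The “Hamming cliff” is easy to demonstrate. In ordinary binary coding, the adjacent values 7 and 8 are written 0111 and 1000, so a mutation moving between them must flip four bits at once; binary‐reflected Gray code guarantees that consecutive integers differ in exactly one bit. The short sketch below (the helper names are the reviewer's own) illustrates both facts:

```python
def to_gray(n):
    """Binary-reflected Gray code of a non-negative integer n."""
    return n ^ (n >> 1)

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

# Plain binary: 7 (0111) and 8 (1000) differ in all four low bits -- a cliff.
cliff = hamming(7, 8)                                   # == 4

# Gray code: every pair of consecutive integers differs in exactly one bit.
smooth = all(hamming(to_gray(i), to_gray(i + 1)) == 1 for i in range(255))
```

This is why a GA mutating one bit at a time can find some numerically adjacent points almost unreachable under plain binary coding, while Gray coding removes that particular obstacle.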

Chapter 6 is concerned with comparison of methods, and begins by asking whether the ES method is the universal solution to optimisation problems. The answer has to be “not always”. Where the function to be optimised is known to be of a particular form, for example a polynomial of known degree, special methods exploiting this may converge faster than the general method. However, the special methods may go seriously wrong if the function is not of the assumed form, and the ES technique offers a happy combination of efficiency with robustness. Biological evolution, similarly, has to combine these two aspects.

As in the earlier volume, there is a truly formidable amount of data on runs of programs implementing the various techniques, applied to a series of test problems. The test problems are described in detail, with attractive three‐dimensional graphic representations of many of the functions.

Because of its extensive review of the subject, there is a large and very useful bibliography, and Chapter 8 is entirely given over to references. There are no less than 73 pages of references, with about 16 to a page, making a total of well over a thousand.

Implementations of thirteen of the methods have been programmed in FORTRAN, and then converted to the C language. The disk supplied with the book contains versions in both languages. It also has code to evaluate any of nine problem functions, as well as a supervisory item called OptimA which allows the user to select a method and a problem and to enter all the data needed for a trial run. The methods represented include Fibonacci search (as no. 0), Rosenbrock’s method (as no. 4), Box’s “Complex” method (as no. 10), the two‐member ES (as no. 11) and the multi‐member ES (as no. 12).

All of this is easily installed on a PC, and OptimA can be activated, but the programs do not run unless a C compiler is also installed. At the time of writing this review I have not tried out the programs, but I have no doubt that they operate as described. The disk contents represent a valuable research facility since the source programs are accessible, so that modified versions of the methods, and/or new test functions, can be prepared by anyone familiar with C.

The book is a magnificent effort and a valuable source of information and ideas relating to automatic optimisation. Apart from its technical value where computer optimisation is wanted, its implications for the interpretation of biological evolution should not be overlooked.
