
Artificial neural nets and BCL

Alex M. Andrew (Reading University, Earley, Reading, UK)

Kybernetes

ISSN: 0368-492X

Article publication date: 1 January 2005


Abstract

Purpose

Attention is drawn to a principle of “significance feedback” in neural nets that was devised in the encouraging ambience of the Biological Computer Laboratory and is arguably fundamental to much of the subsequent practical application of artificial neural nets.

Design/methodology/approach

The background against which the innovation was made is reviewed, as well as subsequent developments. It is emphasised that Heinz von Foerster and BCL made important contributions prior to their focus on second‐order cybernetics.

Findings

The version of “significance feedback” denoted by “backpropagation of error” has found numerous applications, but in a restricted field, and the relevance to biology is uncertain.
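As a brief illustration of what "backpropagation of error" refers to here, the following is a minimal sketch (not the author's original formulation of significance feedback) of error backpropagation in a small two-layer network. The network size, activation function, learning rate, and data are illustrative assumptions.

```python
# Minimal sketch of "backpropagation of error" in a two-layer network.
# Network shape, learning rate, and data are illustrative assumptions,
# not taken from the article.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 inputs with 3 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # arbitrary binary target

W1 = rng.normal(scale=0.5, size=(3, 5))       # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(5, 1))       # hidden -> output weights
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(1000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Error computed at the output is fed back to assign "significance"
    # to the earlier connections
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ err_out
    W1 -= lr * X.T @ err_hid
```

The only point of the sketch is that the error signal formed at the output is propagated backwards to apportion credit to earlier connections, which is the sense in which backpropagation realises significance feedback.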

Practical implications

Ways in which the principle might be extended are discussed, including attention to structural changes in networks, and extension of the field of application to include conceptual processing.

Originality/value

The original work was carried out 40 years ago, but indications are given of questions that remain unanswered and avenues yet to be explored, some of them suggested by regarding intelligence as "fractal".


Citation

Andrew, A.M. (2005), "Artificial neural nets and BCL", Kybernetes, Vol. 34 No. 1/2, pp. 33-39. https://doi.org/10.1108/03684920510575726

Publisher: Emerald Group Publishing Limited

Copyright © 2005, Emerald Group Publishing Limited
