Overview of Our Approach


  This brief historical overview provides a useful context for describing the basic characteristics of the approach we have taken in this book. Our core mechanistic principles include backpropagation-based error-driven learning and Hebbian learning; the interactive, constraint-satisfaction style of processing central to the Hopfield network; distributed representations; and inhibitory competition. The neural units in our simulations use equations based directly on the ion channels that govern the behavior of real neurons (as described in chapter 2), and our neural networks incorporate a number of well-established anatomical and physiological properties of the neocortex (as described in chapter 3). Thus, we strive to establish detailed connections between biology and cognition, in a way that is consistent with many well-established computational principles.
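To make the ion-channel basis of the neural units concrete, the following is a minimal sketch of a single Euler step of a point-neuron membrane potential driven by excitatory, inhibitory, and leak conductances. The specific parameter values (reversal potentials, conductances, step size) are illustrative placeholders, not the book's actual values from chapter 2.

```python
# Hypothetical sketch of a point-neuron membrane-potential update driven
# by ion-channel conductances. Parameter values are illustrative only.

def update_vm(vm, g_e, g_i, g_l=0.1, dt=0.2,
              e_e=1.0, e_i=0.25, e_l=0.3):
    """One Euler step of the membrane potential vm.

    Each channel c contributes a current g_c * (e_c - vm), pulling vm
    toward that channel's reversal potential e_c.
    """
    i_net = (g_e * (e_e - vm) +   # excitatory (e.g., Na+) channels
             g_i * (e_i - vm) +   # inhibitory (e.g., Cl-) channels
             g_l * (e_l - vm))    # leak (e.g., K+) channels
    return vm + dt * i_net

# With only the leak channel active, vm settles toward the leak
# reversal potential e_l:
vm = 0.15
for _ in range(300):
    vm = update_vm(vm, g_e=0.0, g_i=0.0)
```

The equilibrium potential is the conductance-weighted average of the reversal potentials, so stronger excitation pulls the settled value toward the excitatory reversal potential.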

    Our approach can be seen as an integration of a number of different themes, trends, and developments [O'Reilly, 1998]. Perhaps the most relevant such development was the integration of a coherent set of neural network principles into the GRAIN framework of McClelland (1993). GRAIN stands for graded, random, adaptive, interactive, (nonlinear) network. This framework was primarily motivated by (and applied to) issues surrounding the dynamics of activation flow through a neural network. The framework we adopt in this book incorporates and extends these GRAIN principles by emphasizing learning mechanisms and the architectural properties that support them.

    For example, there has been a long-standing desire to understand how more biologically realistic mechanisms could give rise to error-driven learning [e.g., Hinton & McClelland, 1988; Mazzoni, Andersen, & Jordan, 1991]. Recently, a number of different frameworks for achieving this goal have been shown to be variants of a common underlying error propagation mechanism [O'Reilly, 1996a]. The resulting algorithm, called GeneRec, is consistent with known biological mechanisms of learning, makes use of other biological properties of the brain (including interactivity), and allows for realistic neural activation functions to be used. Thus, this algorithm plays an important role in our integrated framework by allowing us to use the principle of backpropagation learning without conflicting with the desire to take the biology seriously.
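The flavor of a GeneRec-style update can be sketched as follows. This is an assumed simplified form (see O'Reilly, 1996a, for the actual derivation): activations come from two settling phases, a "minus" phase reflecting the network's own expectation and a "plus" phase with the target outcome present, and their difference serves as a locally available error signal.

```python
# Minimal sketch of a GeneRec-style weight update (assumed simplified
# form, not the book's exact equations).

def generec_dw(x_minus, y_minus, y_plus, lrate=0.1):
    """Weight change for a connection from sender x to receiver y.

    The plus-minus phase difference in the receiver's activity acts as
    a locally computable error signal, so no explicit backpropagated
    derivative terms are needed.
    """
    return lrate * x_minus * (y_plus - y_minus)

# If the outcome exceeds the expectation, the weight increases:
dw = generec_dw(x_minus=0.8, y_minus=0.2, y_plus=0.9)
```

When the plus-phase (outcome) activity is lower than the minus-phase (expectation) activity, the same rule yields a negative weight change, weakening connections that drove the incorrect expectation.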

    Another long-standing theme in neural network models is the development of inhibitory competition mechanisms [e.g., Kohonen, 1984; McClelland & Rumelhart, 1981; Rumelhart & Zipser, 1986; Grossberg, 1976]. Competition has a number of important functional benefits emphasized in the GRAIN framework (which we will explore in chapter 3) and is generally required for the use of Hebbian learning mechanisms. It is technically challenging, however, to combine competition with distributed representations in an effective manner, because the two tend to work at cross purposes. Nevertheless, there are good reasons to believe that the kinds of sparse distributed representations that should in principle result from competition provide a particularly efficient means for representing the structure of the natural environment [e.g., Barlow, 1989; Field, 1994; Olshausen & Field, 1996]. Thus, an important part of our framework is a mechanism of neural competition that is compatible with powerful distributed representations and can be combined with interactivity and learning in a way that was not generally possible before [O'Reilly, 1998; O'Reilly, 1996b].
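One simple way to see how competition can coexist with distributed representations is a toy k-winners-take-all scheme: instead of a single winner, a shared inhibition threshold lets the k strongest units remain active, producing a sparse distributed pattern. The function and threshold rule below are illustrative, not the book's exact mechanism.

```python
# Toy illustration of inhibitory competition via k-winners-take-all
# (an illustrative scheme, not the book's exact mechanism).

def kwta(acts, k):
    """Return activations with all but the k strongest units suppressed.

    A single inhibition threshold is placed between the k-th and
    (k+1)-th strongest activations, so exactly k units stay active --
    a sparse *distributed* pattern rather than a single winner.
    """
    order = sorted(acts, reverse=True)
    thresh = (order[k - 1] + order[k]) / 2.0
    return [a if a > thresh else 0.0 for a in acts]

acts = [0.2, 0.9, 0.5, 0.7, 0.1, 0.6]
sparse = kwta(acts, k=2)   # only the two strongest units survive
```

Because k can be larger than one, multiple units participate in representing each input, which is what allows this style of competition to remain compatible with distributed representations.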

      The emphasis throughout the book is on the facts of the biology, the core computational principles just described (which underlie most of the cognitive neural network models that have been developed to date), and their interrelationship in the context of a range of well-studied cognitive phenomena. To facilitate and simplify the hands-on exploration of these ideas by the student, we take advantage of a particular implementational framework, called Leabra (local, error-driven and associative, biologically realistic algorithm), that incorporates all of the core mechanistic principles. Leabra is pronounced like the astrological sign Libra, which emphasizes the balance between many different objectives that is achieved by the algorithm.
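The balance Leabra strikes can be sketched at the level of its weight update: each weight change mixes a Hebbian (associative) term with an error-driven term. The mixing-parameter name (hebb_mix) and the exact term definitions below are illustrative assumptions, not the algorithm's precise equations.

```python
# Sketch of how Leabra-style learning might balance its two objectives:
# a weighted mix of a Hebbian term and an error-driven term. Names and
# exact forms here are illustrative assumptions.

def leabra_dw(x_minus, x_plus, y_minus, y_plus, w,
              lrate=0.1, hebb_mix=0.01):
    """Combined Hebbian + error-driven weight change."""
    # Hebbian term: move w toward the sender's plus-phase activity,
    # gated by the receiver's activity.
    hebb = y_plus * (x_plus - w)
    # Error-driven term: a GeneRec/CHL-style plus-minus phase difference.
    err = x_plus * y_plus - x_minus * y_minus
    return lrate * (hebb_mix * hebb + (1.0 - hebb_mix) * err)
```

With hebb_mix near zero the rule is dominated by the error-driven term, while larger values let the Hebbian term shape weights toward the statistics of the inputs; the "balance" alluded to by the Libra pun is exactly this kind of trade-off among objectives.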

To the extent that we are able to understand a wide range of cognitive phenomena using a consistent set of biological and computational principles, one could consider the framework presented in this book to be a "first draft" of a coherent framework for computational cognitive neuroscience. This framework provides a useful consolidation of existing ideas, and should help to identify the limitations and problems that will need to be solved in the future.

    Newell (1990) provided a number of arguments in favor of developing unified theories of cognition, many of which apply to our approach of developing a coherent framework for computational cognitive neuroscience. Newell argued that it is relatively easy (and thus relatively uninformative) to construct specialized theories of specific phenomena. In contrast, one encounters many more constraints by taking on a wider range of data, and a theory that can account for these data is thus much more likely to be true. Given that our framework bears little resemblance to Newell's SOAR architecture, it is clear that the mere process of making a unified architecture does not guarantee convergence on some common set of principles. However, it is equally clear that casting a wider net imposes many more constraints on the modeling process, and the fact that a single set of principles can be used to model the wide range of phenomena covered in this book lends some measure of validity to the undertaking.

      Chomsky (1965) and Seidenberg (1993) also discussed the value of developing explanatory theories, which explain phenomena in terms of a small set of independently motivated principles, in contrast with descriptive theories, which essentially restate phenomena.

Randall C. O'Reilly
Fri Apr 28 14:15:16 MDT 2000