
Historical Context

 

Although the field of computational cognitive neuroscience is relatively young, its boundaries blur into a large number of related disciplines, some of which have been around for quite some time. Indeed, research in any aspect of cognition, neuroscience, or computation has the potential to make an important contribution to this field, and an adequate account of its relevant history could fill this entire book. This section instead provides a brief overview of some of the particularly relevant historical context and motivation behind our approach. Specifically, we focus on advances in understanding how networks of simulated neurons can give rise to interesting cognitive phenomena, which occurred initially in the 1960s and then again from the late '70s to the present day. These advances form the main heritage of our approach because, as should be clear from the preceding discussion, neural network modeling provides a crucial link between networks of neurons and human cognition.

The field of cognitive psychology began in the late 1950s and early '60s, as the long dominance of behaviorism waned. Key advances associated with this new field included its emphasis on internal mechanisms mediating cognition, and in particular the use of explicit computational models to simulate cognition on computers (e.g., problem solving and mathematical reasoning; Newell & Simon, 1972). The dominant approach was based on the computer metaphor, which held that human cognition is much like processing in a standard serial computer.

In such systems, which we will refer to as ``traditional'' or ``symbolic,'' the basic operations involve symbol manipulation (e.g., manipulating logical statements expressed using dynamically bound variables and operators), and processing consists of a sequence of serial, rule-governed steps. Production systems became the dominant framework for cognitive modeling within this approach. Productions are essentially elaborate if-then constructs that fire when their if-conditions are met, producing actions that in turn enable the firing of subsequent productions. Thus, productions control the sequential flow of processing. As we will see, these traditional, symbolic models serve as an important contrast to the neural network framework, and the two have been in competition from the earliest days of their existence.
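To illustrate the flavor of such systems, here is a minimal production-system sketch in Python. The rules and working-memory contents are invented for illustration and do not come from any particular cognitive model:

def run_productions(memory, productions, max_cycles=10):
    """Repeatedly fire the first production whose if-condition matches."""
    for _ in range(max_cycles):
        for condition, action in productions:
            if condition(memory):
                action(memory)
                break  # serial processing: one production fires per cycle
        else:
            break  # no condition matched; processing halts
    return memory

# Toy example: two chained rules, where firing the first enables the second.
productions = [
    (lambda m: "goal" in m and "fact" not in m,
     lambda m: m.update(fact="retrieved")),
    (lambda m: m.get("fact") == "retrieved" and "answer" not in m,
     lambda m: m.update(answer=42)),
]

print(run_productions({"goal": "solve"}, productions))
# {'goal': 'solve', 'fact': 'retrieved', 'answer': 42}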

Even though the computer metaphor was dominant, there was also considerable interest in neuronlike processing during this time, with advances including: (a) the McCulloch and Pitts (1943) model of neural processing in terms of basic logical operations; (b) Hebb's (1949) theory of Hebbian learning and the cell assembly, which holds that connections between coactive neurons should be strengthened, joining them together; and (c) Rosenblatt's (1958) work on the perceptron learning algorithm, which could learn from error signals. These computational approaches built on fundamental advances in neurobiology, where the idea that the neuron is the primary information-processing unit of the brain had become established (the ``neuron doctrine''; Shepherd, 1992), and the basic principles of neural communication and processing (action potentials, synapses, neurotransmitters, ion channels, etc.) were being worked out. The dominance of the computer metaphor in cognitive psychology was nevertheless sealed with the publication of the book Perceptrons (Minsky & Papert, 1969), which proved that some of these simple neuronlike models had significant computational limitations -- they were unable to learn to solve a large class of basic problems.
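To make these two learning ideas concrete, here is a minimal sketch in Python with NumPy of (b) a Hebbian weight update and (c) Rosenblatt's error-driven perceptron rule. The learning rate, epoch count, and binary input coding are illustrative assumptions, not details from the original papers. The demo also shows the kind of limitation Minsky and Papert proved: the perceptron learns the linearly separable AND function but not XOR:

import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """Hebb's rule: strengthen weights between coactive input x and output y."""
    return w + lr * y * x

def perceptron_train(X, t, epochs=20, lr=0.1):
    """Rosenblatt's perceptron: adjust weights in proportion to the error."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = float(np.dot(w, x) + b > 0)  # binary threshold unit
            w += lr * (target - y) * x       # error-driven correction
            b += lr * (target - y)
    return w, b

print("Hebbian update:", hebbian_update(np.zeros(2), np.array([1., 1.]), 1.0))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
for name, t in [("AND", np.array([0., 0., 0., 1.])),
                ("XOR", np.array([0., 1., 1., 0.]))]:
    w, b = perceptron_train(X, t)
    preds = (X @ w + b > 0).astype(float)
    print(name, "learned" if np.array_equal(preds, t) else "not learned")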

While a few hardy researchers (e.g., Grossberg, Kohonen, Anderson, Amari, Arbib, Willshaw) continued studying these neural network models through the '70s (Grossberg, 1978; Kohonen, 1977; Anderson, 1995; Amari & Maginu, 1988; Willshaw, 1981), it was not until the '80s that a few critical advances brought the field back into real popularity. In the early '80s, psychological (e.g., McClelland & Rumelhart, 1981) and computational (Hopfield, 1982, 1984) advances were made based on the activation dynamics of networks. Then, the backpropagation learning algorithm was rediscovered by Rumelhart, Hinton, and Williams (1986), having been independently discovered several times before (Bryson & Ho, 1969; Werbos, 1974; Parker, 1985), and the Parallel Distributed Processing (PDP) books (Rumelhart, McClelland, & PDP Research Group, 1986; McClelland, Rumelhart, & PDP Research Group, 1986) were published, firmly establishing the credibility of neural network models. Critically, the backpropagation algorithm overcame the limitations of the earlier models, enabling essentially any function to be learned by a neural network. Another important advance represented in the PDP books was a strong appreciation for the importance of distributed representations (Hinton, McClelland, & Rumelhart, 1986), which have a number of computational advantages over symbolic or localist representations.
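To give a concrete sense of why backpropagation mattered, here is a minimal sketch in Python with NumPy of a two-layer sigmoid network trained by backpropagation on XOR, the kind of problem a single-layer perceptron provably cannot learn. The network size, learning rate, and epoch count are arbitrary illustrative choices, and results may vary slightly with the random initialization:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 1.0
for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error back through both layers
    dy = (y - t) * y * (1 - y)        # error signal at the output units
    dh = (dy @ W2.T) * h * (1 - h)    # error back-propagated to hidden units
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)

print(np.round(y.ravel(), 2))  # approaches [0, 1, 1, 0]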

Backpropagation led to a new wave of cognitive modeling, often under the name connectionism. Although it represented a step forward computationally, backpropagation was viewed by many as a step backward from a biological perspective, because it was not at all clear how it could be implemented by biological mechanisms (Crick, 1989; Zipser & Andersen, 1988). Thus, backpropagation-based cognitive modeling carried on without a clear biological basis, leading many such researchers to justify their approach with the same kinds of arguments used by supporters of the computer metaphor (i.e., the ``computational level'' arguments discussed previously). Some would argue that this deemphasis of biological issues made the field essentially a reinvented computational cognitive psychology based on ``neuronlike'' processing principles, rather than a true computational cognitive neuroscience.

In parallel with the expanded influence of neural network models in understanding cognition, there was rapid growth in more biologically oriented modeling. Several categories of this research can usefully be identified. First, the biological models can be divided into those that emphasize learning and those that do not. The models that do not emphasize learning include detailed biophysical models of individual neurons (Traub & Miles, 1991; Bower, 1992), information-theoretic approaches to processing in neurons and networks of neurons (e.g., Abbott & LeMasson, 1993; Atick & Redlich, 1990; Amit, Gutfreund, & Sompolinsky, 1987; Amari & Maginu, 1988), and refinements and extensions of the original Hopfield (1982, 1984) models, which hold considerable appeal due to their underlying mathematical formulation in terms of concepts from statistical physics. Although this research has led to many important insights, it tends to make less direct contact with cognitively relevant issues (though the Hopfield network itself provides some centrally important principles, as we will see in chapter 3, and has been used as a framework for some kinds of learning).
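Since the Hopfield network recurs in this discussion (and in chapter 3), here is a minimal sketch in Python with NumPy: patterns are stored with a Hebbian outer-product rule, and asynchronous unit updates descend the energy function E = -1/2 x^T W x, so a corrupted input settles into a stored attractor. The pattern size and contents are arbitrary illustrations:

import numpy as np

rng = np.random.default_rng(1)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])

# Hebbian storage: strengthen connections between coactive units.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def energy(x):
    """E = -1/2 x^T W x; asynchronous updates never increase it."""
    return -0.5 * x @ W @ x

def settle(x, steps=100):
    x = x.copy()
    for _ in range(steps):
        i = rng.integers(len(x))            # asynchronous: one unit at a time
        x[i] = 1 if W[i] @ x >= 0 else -1   # update toward lower energy
    return x

corrupted = patterns[0].copy()
corrupted[:2] *= -1                         # flip two bits
recovered = settle(corrupted)
print("energy before:", energy(corrupted), "after:", energy(recovered))
print("recovered pattern 0:", np.array_equal(recovered, patterns[0]))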

The biologically based learning models have tended to focus on learning in the early visual system, with an emphasis on Hebbian learning (Linsker, 1986; Miller, Keller, & Stryker, 1989; Miller, 1994; Kohonen, 1984; Hebb, 1949). Importantly, a large body of basic neuroscience research supports the idea that Hebbian-like mechanisms operate in neurons throughout most cognitively important areas of the brain (Bear, 1996; Brown, Kairiss, & Keenan, 1990; Collingridge & Bliss, 1987). However, Hebbian learning is generally fairly weak computationally (as we will see in chapter 5), suffering from limitations similar to those of the 1960s generation of learning mechanisms. Thus, it has not been as widely used as backpropagation for cognitive modeling, because it often cannot learn the relevant tasks.

In addition to the cognitive (connectionist) and biological branches of neural network research, considerable work has been done on the computational end. It has long been apparent that the mathematical basis of neural networks has much in common with statistics, and computational advances have tended to push this connection further. Recently, the Bayesian framework for statistical inference has been applied both to develop new learning algorithms (e.g., Dayan, Hinton, Neal, & Zemel, 1995; Saul, Jaakkola, & Jordan, 1996) and, more generally, to understand existing ones. However, none of these models has yet been developed to the point of providing a framework for learning that works reliably on a wide range of cognitive tasks while also being implementable by a reasonable biological mechanism. Indeed, most (but not all) of the principal researchers on the computational end of the field are more concerned with theoretical, statistical, and machine learning issues than with cognitive or biological ones.

  In short, from the perspective of the computational cognitive neuroscience endeavor, the field is in a somewhat fragmented state, with modelers in computational cognitive psychology primarily focused on understanding human cognition without close contact with the underlying neurobiology, biological modelers focused on information-theoretic constructs or computationally weak learning mechanisms without close contact with cognition, and learning theorists focused at a more computational level of analysis involving statistical constructs without close contact with biology or cognition. Nevertheless, we think that a strong set of cognitively relevant computational and biological principles has emerged over the years, and that the time is ripe for an attempt to consolidate and integrate these principles.

 

