Next: Interactivity Up: Motivating Cognitive Phenomena and Previous: Parallelism

Figure: Example of graded nature of categorical representations: Is the middle item a cup or a bowl? It could be either, and lies in between these two categories.

In contrast with the discrete boolean logic and binary memory representations of standard computers, the brain is more graded and analog in nature. We will see in the next chapter that neurons integrate information from a large number of different input sources, producing essentially a continuous, real-valued number that represents something like the relative strength of these inputs (compared to other inputs it could have received). The neuron then communicates another graded signal (its rate of firing, or activation) to other neurons as a function of this relative strength value. These graded signals can convey something like the extent or degree to which something is true. In the example in figure 1.4, a neuron could convey that the first object pictured is almost definitely a cup, whereas the second one is maybe or sort-of a cup, and the last one is not very likely to be a cup. Similarly, people tend to classify things (e.g., cup and bowl) in a graded manner according to how close the item is to a prototypical example from a category (Rosch, 1975).
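This integrate-and-communicate scheme can be sketched in a few lines of Python. The specific weights, bias, and sigmoid squashing function below are illustrative assumptions for a single hypothetical "cup detector" unit, not the detailed neural equations developed in the next chapter:

```python
import math

def unit_activation(inputs, weights, bias=-1.4):
    """Integrate weighted inputs into a graded activation in (0, 1).

    The sigmoid is one common choice for squashing the summed input
    into a bounded, continuous, firing-rate-like signal.
    """
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Three graded feature inputs (e.g., handle-ness, shape, size evidence).
weights = [1.0, 1.0, 1.0]
print(unit_activation([1.0, 0.9, 0.8], weights))  # clearly a cup: high
print(unit_activation([0.5, 0.4, 0.5], weights))  # cup/bowl boundary: middling
print(unit_activation([0.1, 0.0, 0.2], weights))  # clearly not a cup: low
```

The point is only that the output varies continuously with the strength of the evidence, rather than flipping between two discrete states.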

Figure: Graded activation values are important for representing continuous dimensions (e.g., position, angle, force, color) by coarse coding or basis-function representations as shown here. Each of the four units shown gives a graded activation signal roughly proportional to how close a point along the continuous dimension is to the unit's preferred point, which is defined as the point where it gives its maximal response.

Gradedness is critical for all kinds of perceptual and motor phenomena, which deal with continuous underlying values like position, angle, force, and color (wavelength). The brain tends to deal with these continua in much the same way as the continuum between a cup and a bowl. Different neurons represent different ``prototypical'' values along the continuum (in many cases, these are essentially arbitrarily placed points), and respond with graded signals reflecting how close the current exemplar is to their preferred value (see figure 1.5). This type of representation, also known as coarse coding or a basis function representation, can actually give a precise indication of a particular location along a continuum, by forming a weighted estimate based on the graded signal associated with each of the ``prototypical'' or basis values.
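A minimal coarse-coding sketch in Python makes the decoding step concrete. The four preferred points, the Gaussian tuning curves, and the tuning width are all illustrative assumptions (as the text notes, the preferred points can be essentially arbitrary):

```python
import math

# Four units with preferred points spaced along a [0, 1] dimension.
PREFERRED = [0.0, 1 / 3, 2 / 3, 1.0]
WIDTH = 0.25  # illustrative tuning width

def encode(x):
    """Graded response of each unit: maximal at its preferred point."""
    return [math.exp(-((x - p) ** 2) / (2 * WIDTH ** 2)) for p in PREFERRED]

def decode(acts):
    """Weighted estimate of the encoded value from the graded signals."""
    return sum(a * p for a, p in zip(acts, PREFERRED)) / sum(acts)

acts = encode(0.4)   # no unit prefers 0.4 exactly...
print(decode(acts))  # ...yet the weighted estimate recovers it closely
```

Even though no single unit is tuned to the value 0.4, the weighted combination of their graded responses pins it down precisely, which is the essential trick of basis-function representations.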

Another important aspect of gradedness has to do with the fact that each neuron in the brain receives inputs from many thousands of other neurons. Thus, each individual neuron is not critical to the functioning of any other -- instead, neurons contribute as part of a graded overall signal that reflects the number of other neurons contributing (as well as the strength of their individual contributions). This fact gives rise to the phenomenon of graceful degradation, where function degrades ``gracefully'' with increasing amounts of damage to neural tissue. Simplistically, we can explain this by saying that removing more neurons reduces the strength of the signals, but does not eliminate performance entirely. In contrast, the CPU in a standard computer will tend to fail catastrophically when even one logic gate malfunctions.
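Graceful degradation can be demonstrated with a toy simulation. The population size, contribution strengths, and damage levels below are arbitrary choices made for illustration:

```python
import random

random.seed(0)

# A hypothetical population of 1000 units, each contributing a small graded signal.
N = 1000
contributions = [random.uniform(0.5, 1.0) for _ in range(N)]
baseline = sum(contributions)

def signal(damage_fraction):
    """Total signal after removing a random fraction of the units."""
    survivors = random.sample(range(N), int(N * (1 - damage_fraction)))
    return sum(contributions[i] for i in survivors)

for frac in (0.0, 0.1, 0.3, 0.5):
    print(f"{frac:.0%} damage -> {signal(frac) / baseline:.2f} of baseline signal")
```

The signal weakens roughly in proportion to the damage, rather than failing outright, because no single unit is critical to the total.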

A less obvious but equally important aspect of gradedness has to do with the way that processing happens in the brain. Phenomenologically, all of us are probably familiar with the process of trying to remember something that does not come to mind immediately -- there is this fuzzy sloshing around and trying out of different ideas until you either hit upon the right thing or give up in frustration. Psychologists speak of this in terms of the ``tip-of-the-tongue'' phenomenon, as in, ``it's just at the tip of my tongue, but I can't quite spit it out!'' Gradedness is critical here because it allows your brain to float a bunch of relatively weak ideas around and see which ones get stronger (i.e., resonate with each other and other things), and which ones get weaker and fade away. Intuition has a similar flavor -- a bunch of relatively weak factors add up to support one idea over another, but there is no single clear, discrete reason behind it.

Computationally, these phenomena are all examples of bootstrapping and multiple constraint satisfaction. Bootstrapping is the ability of a system to ``pull itself up by its bootstraps'' by taking some weak, incomplete information and eventually producing a solid result. Multiple constraint satisfaction refers to the ability of parallel, graded systems to find good solutions to problems that involve a number of constraints. The basic idea is that each factor or constraint pushes on the solution in rough proportion to its (graded) strength or importance. The resulting solution thus represents some kind of compromise that capitalizes on the convergence of constraints that all push in roughly the same direction, while minimizing the number of constraints that remain unsatisfied. If this sounds too vague and fuzzy to you, don't worry -- we will write equations that express how it all works, and run simulations showing it in action.
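The flavor of multiple constraint satisfaction can be conveyed with a tiny iterative-settling sketch. The constraints, their strengths, and the update rule here are toy assumptions, not the actual equations developed later in the book:

```python
# Hypothetical soft constraints: each prefers a value for x and pushes
# on the solution with a graded strength: (strength, preferred_value).
constraints = [(0.9, 2.0), (1.0, 1.5), (0.8, 1.8), (0.2, -1.0)]

x = 0.0
for _ in range(100):
    # Each constraint pushes x toward its preference, in proportion
    # to its strength; the system settles on a weighted compromise.
    force = sum(s * (pref - x) for s, pref in constraints)
    x += 0.1 * force
print(x)
```

The system settles near the weighted average of the preferred values, so the three strong, roughly converging constraints dominate while the single weak dissenting one is largely left unsatisfied -- a compromise of exactly the kind described above.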


Randall C. O'Reilly
Fri Apr 28 14:15:16 MDT 2000