
Problems:

 

Models are too simple.

  Models, by necessity, involve a number of simplifications in their implementation. These simplifications may not capture all of the relevant details of the biology, the environment, the task, and so on, calling into question the validity of the model.

Inevitably, this issue ends up being an empirical one that depends on how wrong the simplifying assumptions are and how much they influence the results. It is often possible for a model to make a perfectly valid point while using a simplified implementation because the missing details are simply not relevant -- the real system will exhibit the same behavior for any reasonable range of detailed parameters. Furthermore, simplification can actually be an important benefit of a model -- a simple explanation is easier to understand and can reveal important truths that might otherwise be obscured by details.

Models are too complex.

  On the flip side, other critics complain that models are so complex that one cannot understand why they behave the way they do, and that they therefore contribute nothing to our understanding of human behavior. This criticism is particularly relevant if a modeler treats a computational model as a theory and points to the mere fact that the model reproduces a set of data as an explanation of those data.

  However, this criticism is less relevant if the modeler instead identifies and articulates the critical principles that underlie the model's behavior, and demonstrates the relative irrelevance of other factors. Thus, a model should be viewed as a concrete instantiation of broader principles, not as an end unto itself, and the way in which the model ``uses'' these principles to account for the data must be made clear. Unfortunately, this essential step of making the principles clear and demonstrating their generality is often not taken. It can be a difficult step for complex models (grappling with such complexity is, after all, one of the advantages of modeling in the first place!), but it is made increasingly manageable by advances in techniques for analyzing models.

Models can do anything.

    This criticism is inevitably leveled at successful models. Neural network models do have a very large number of parameters in the form of the adaptable weights between units. Also, there are many degrees of freedom in the architecture of the model, and in other parameters that determine the behavior of the units. Thus, it might seem that there are so many parameters available that fitting any given set of behavioral phenomena is uninteresting. Relatedly, because of the large number of parameters, sometimes multiple different models can provide a reasonable account of a given phenomenon. How can one address this indeterminacy problem to determine which is the ``correct'' model?

  The general issues of adopting a principled, explanatory approach are relevant here -- to the extent that the model's behavior can be understood in terms of more general principles, the success of the model can be attributed to these principles, and not just to arbitrary parameter fitting. Also, unlike many other kinds of models, many of the parameters in the network (i.e., the weights) are determined by principled learning mechanisms, and are thus not ``free'' for the modeler to set. In this book, most of the models use the same basic parameters for the network equations, and the cases where different parameters are used are strongly motivated.
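To make this point concrete, here is a minimal sketch (not taken from the book's simulations, and using a made-up training set) of how a simple delta-rule learning mechanism, rather than the modeler, ends up determining the values of the weights:

```python
import numpy as np

# Illustrative sketch only: the weights start out random and are then
# shaped by a simple delta-rule learning mechanism applied to training
# data, rather than being hand-tuned to fit behavioral results.

rng = np.random.default_rng(0)

# Hypothetical training set: 4 input patterns (3 units each) and targets (2 units each).
inputs = rng.integers(0, 2, size=(4, 3)).astype(float)
targets = rng.integers(0, 2, size=(4, 2)).astype(float)

weights = rng.normal(0.0, 0.1, size=(3, 2))  # initial weights are random
lrate = 0.2                                  # learning rate

for epoch in range(100):
    for x, t in zip(inputs, targets):
        y = 1.0 / (1.0 + np.exp(-x @ weights))   # sigmoidal unit activations
        weights += lrate * np.outer(x, t - y)    # delta-rule weight update

# The final weights are a product of the learning rule and the data,
# not free parameters chosen by the modeler.
print(weights)
```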

  The general answer to the indeterminacy problem is that as you apply a model to a wider range of data (e.g., different tasks, newly discovered biological constraints), and in greater detail on each task (e.g., detailed properties of the learning process), the model will be much more strenuously tested. It thus becomes much less likely that two different models can fit all the data (unless they are actually isomorphic in some way).

Models are reductionistic.

      One common concern is that mechanistic, reductionistic models can never tell us about the real essence of human cognition. Although this will probably remain a philosophical issue until very large-scale models can be constructed that actually demonstrate realistic, humanlike cognition (e.g., by passing the Turing test), we note that reconstructionism is a cornerstone of our approach. Reconstructionism complements reductionism by trying to reconstruct complex phenomena in terms of the reduced components.

Modeling lacks cumulative research.

  There seems to be a general perception that modeling is somehow less cumulative than other types of research. This perception may be due in part to the relative youth and expansive growth of modeling -- there has been a lot of territory to cover, and a breadth-first search strategy has some obvious pragmatic benefits for researchers (e.g., ``claiming territory''). As the field begins to mature, cumulative work is starting to appear (e.g., Plaut, McClelland, Seidenberg, and Patterson (1996) built on earlier work by Seidenberg and McClelland (1989), which in turn built on other models), and this book certainly represents a very cumulative and integrative approach.

The final chapter in the book will revisit some of these issues with the benefit of what comes in between.

 


