I've been using the APBp algorithm in PDP++ in a number of different simulations.
In all of them, the network's learning seems a little odd. After learning the task reasonably well (but never perfectly), the MSE suddenly and, at least to me, inexplicably shoots up to a level greater than the initial error. This has been the case in several very different simulations, but all of them use very large training sets.
Has anyone experienced anything similar? Does anyone know why it might be doing this?
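For concreteness, here is a tiny sketch (plain Python, nothing to do with PDP++ or APBp itself, and only one hypothesis about the cause) of how a fixed learning rate that is stable along one direction of the error surface but unstable along another can produce exactly this shape of curve: the error falls almost to zero, then suddenly shoots past its starting value.

```python
# Toy illustration: gradient descent on a 2-D quadratic
#   f(x, y) = 0.5 * (x**2 + 20 * y**2)
# The step size lr = 0.11 is stable along x (curvature 1, needs lr < 2)
# but unstable along y (curvature 20, needs lr < 2/20 = 0.1).
# The easy direction is learned first, so the error drops; the tiny
# unstable component along y then grows geometrically and eventually
# blows the error past its initial value.

lr = 0.11          # stable for x, unstable for y
x, y = 10.0, 0.01  # start far out along the easy direction

def error(x, y):
    return 0.5 * (x**2 + 20 * y**2)

initial = error(x, y)
history = []
for step in range(60):
    x -= lr * x          # gradient w.r.t. x is x
    y -= lr * 20 * y     # gradient w.r.t. y is 20*y
    history.append(error(x, y))

print(min(history))   # falls to a small fraction of the initial error...
print(history[-1])    # ...then ends up far above it
```

Whether something analogous is happening inside APBp with a large training set is of course exactly my question.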
--
Padraic Monaghan
Institute for Adaptive and Neural Computation
Division of Informatics, University of Edinburgh
http://www.iccs.informatics.ed.ac.uk/~pmon