Re: Almeida-Pineda algorithm
I have seen this in a number of cases -- in my analysis it is due to
the weights becoming large and asymmetric, which happens when the
problem is too difficult. In particular, these problems occur any
time the network really needs to use the recurrent weights. The APBp
gradient is only valid once the activations have settled to a fixed
point, and symmetric weights guarantee that the settling dynamics
descend an energy function and converge, while large asymmetric
weights carry no such guarantee.
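To illustrate the point, here is a toy numpy sketch of the settling
iteration (just my own illustration, nothing from the PDP++ sources);
with the symmetrized matrix the per-step change typically shrinks
toward zero, while the raw asymmetric one often keeps bouncing
around -- the exact behavior depends on the seed and the weight
scale:

import numpy as np

def settle(W, inp, steps=300, dt=0.1):
    # Euler-iterate the usual additive settling dynamics:
    #   x <- x + dt * (-x + tanh(W @ x + inp))
    # and return the last per-step activation change.
    x = np.zeros(len(inp))
    delta = 0.0
    for _ in range(steps):
        x_new = x + dt * (-x + np.tanh(W @ x + inp))
        delta = np.max(np.abs(x_new - x))
        x = x_new
    return delta

rng = np.random.default_rng(0)
inp = rng.normal(size=10)
A = rng.normal(scale=3.0, size=(10, 10))  # large and asymmetric
S = 0.5 * (A + A.T)                       # same weights, symmetrized

print("residual change, asymmetric:", settle(A, inp))
print("residual change, symmetric: ", settle(S, inp))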
I don't think it has anything to do with the implementation, but on
the other hand, I haven't heard anyone else's experiences, similar
or different, on other platforms... I've been thinking about
implementing a symmetrized version of APBp to correct this (a rough
sketch of what I mean is below). I also tried capping the weight
values by setting wt_range.max and wt_range.min, but it didn't help
much, which makes me think the real problem is the asymmetry. Hope
this helps -- I'll let you know when I get around to doing the
symmetry thing, if you don't get to it first.
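By "symmetrization" I just mean re-averaging each pair of reciprocal
weights after every weight update, so that w_ij == w_ji always
holds. A rough numpy sketch of the idea (made-up names, not the
actual PDP++ calls):

import numpy as np

def symmetrize(W):
    # Replace each reciprocal pair (w_ij, w_ji) with its average,
    # keeping the recurrent weight matrix exactly symmetric.
    return 0.5 * (W + W.T)

# After each weight step, something like:
#   W += lrate * dW        # dW from the usual APBp gradient
#   W = symmetrize(W)      # enforce symmetry
# and optionally hard-cap the values too, which is roughly what I
# was trying with wt_range.max/min:
#   W = np.clip(W, wt_min, wt_max)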
Padraic Monaghan <email@example.com> writes:
> I've been using the APBp algorithm in PDP++ in a number of
> different simulations.
> In all of them, the learning of the network seems a little
> odd. I wonder if anyone else has similar experience? After
> learning the task reasonably well, but never perfectly, the
> MSE suddenly and inexplicably (at least to me) shoots off to
> a level greater than the initial error. This has been the
> case for several very different simulations, but all using
> very large training sets.
> Has anyone experienced anything similar? Does anyone know
> why it might be doing this?