Re: unable to replicate bp results with leabra
Octavio Lopez <firstname.lastname@example.org> writes:
> The problem is, the neurons just seem to freeze on
> constant 1 activation and no learning takes place.
> I've tried playing around with the K and pct K values
> and this doesn't seem to do much. I have one input
try reducing the dt.vm parameter to .15 or .1 instead of .2 in the
leabra unit spec -- this is the time constant for updating the
membrane potential, which is computed recursively, so if it happens
too fast it goes all out of whack. the default of .2 is at the high
end of tolerable values (so that settling happens as fast as
possible), but this means it can be too high depending on your
architecture, activation patterns, and other things that affect the
size of the conductances in the units.
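to see why a large dt can blow up, here's a toy sketch (not the actual
leabra code) of a leaky integrator updated recursively with Euler
steps, vm <- vm + dt * g * (drive - vm); the recursion is stable only
while dt * g < 2, so a big total conductance g can push the default
dt = .2 past the edge while dt = .1 still settles:

```python
# Toy sketch of a recursively-updated membrane potential (assumed
# form; the real Leabra equations have several conductance terms).
#   vm <- vm + dt * g * (drive - vm)
# The error shrinks by a factor of (1 - dt*g) per step, so the update
# is stable only while dt * g < 2.

def settle(dt, g=12.0, drive=1.0, steps=50):
    vm = 0.0
    for _ in range(steps):
        vm += dt * g * (drive - vm)  # recursive Euler update
    return vm

print(settle(0.1))  # dt*g = 1.2 < 2: settles near the drive value
print(settle(0.2))  # dt*g = 2.4 > 2: oscillates and diverges
```

the same conductances that are harmless at dt = .1 can make dt = .2
diverge, which matches the "constant 1 activation" symptom if the
activation function then clips the runaway vm.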
> layer with feedforward connectivity to a hidden layer
> to an output layer with bi-directional connectivity.
> it goes like this 10->50->1. Does anyone have some
> tips on how to get leabra to work properly? Thanks.
Leabra nets with 1 output are a bit strange -- leabra is designed to
activate k out of N units in a generally binary manner, so 1 out of 1
means you'll just have the one output unit active all the time. to
represent linear activation values, you might want to use a
linearunitspec, or, better suited to leabra's strengths, a 1-out-of-N
way of representing values.
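the 1-out-of-N idea can be sketched like this (a hypothetical helper,
not part of leabra itself): a scalar in [0, 1] is coded by which one
of N binary units is active, rather than by the graded activation of
a single unit:

```python
# Hypothetical sketch of 1-out-of-N coding for a scalar in [0, 1]:
# the value is carried by WHICH unit is active, not by a graded
# activation level, which suits leabra's k-out-of-N binary dynamics.

def encode_1_of_n(value, n=10):
    """Return an n-unit binary pattern with exactly one active unit."""
    idx = min(int(value * n), n - 1)  # bin index for this value
    return [1.0 if i == idx else 0.0 for i in range(n)]

def decode_1_of_n(pattern):
    """Recover the (binned) scalar as the active unit's bin center."""
    n = len(pattern)
    idx = max(range(n), key=lambda i: pattern[i])
    return (idx + 0.5) / n

print(encode_1_of_n(0.37))  # unit 3 of 10 is the active one
```

with 10 output units and k = 1, the network learns which unit to
activate -- a binary, competition-friendly target -- instead of
trying to hold one unit at an intermediate activation.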