Re: Seeding the random number generator
On Mon, 7 Oct 2002, Randall C. O'Reilly wrote:
> firstname.lastname@example.org writes:
> > To put this in context, I'm training a 5x5 SOM, for 400 iterations,
> > with a gradually decreasing circular neighbourhood and learning
> > rate. 5 runs from different initialisations end up with the same
> > weights within 100 iterations. However, the networks' weights still
> > change from iteration to iteration...
> > The initial learning rate is 0.9 and it reduces by 1/400th of the
> > initial value each iteration (so it actually reaches 0 at the end of
> > the final iteration). The neighbourhood radius starts at 5 and decreases
> > similarly, though never goes below 1.
> > Anyway, should this *ever* happen?! It seems very strange to me...
> Could happen, especially if you don't have the wrap flag set -- in
> this case there could be a unique mapping in the hidden layer of your
> data, and the network will learn it! If you do have wrap set then
> there shouldn't be a unique configuration and I'd be puzzled..
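
(For reference, the decay schedule I described above amounts to roughly
the following -- a quick Python sketch, not the code I'm actually
running, and the variable names are just mine:)

    n_iters = 400
    alpha0, radius0 = 0.9, 5.0          # initial learning rate and radius

    for t in range(n_iters):
        # both fall linearly by 1/400th of their initial value per
        # iteration; the learning rate hits 0 right at the end of the
        # final iteration, the radius is never allowed below 1
        alpha = alpha0 * (1.0 - t / n_iters)
        radius = max(1.0, radius0 * (1.0 - t / n_iters))
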
This is a standard SOM -- there is no hidden layer. I'm presenting
over 220,000 input patterns (distributed over 8936 sequences) to
it. The input layer is a set of leaky integrators that get reset at
the start of each sequence. ISTM impossible that there is 1 unique
attractor -- after all shouldn't the mirror images of a solution exist
as well? I.e. just reversing the order of the rows or columns in the
SOM should work just as well as the original order. And that's before
going into how improbable it is to reliably reach exactly the same
weights from different starting points, even with such a strong
attractor. ISTM there's a serious bug somewhere -- hopefully only in
the way I've
set things up...
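
(To illustrate the mirror-image point: flipping a trained weight grid
left-right leaves its fit to the data unchanged, and grid neighbours
stay neighbours, so the mirrored map is just as good a solution. A
rough NumPy sketch -- the shapes and names are made up, not taken from
my actual setup:)

    import numpy as np

    def quantization_error(weights, data):
        # mean distance from each input to its best-matching unit
        flat = weights.reshape(-1, weights.shape[-1])
        dists = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
        return dists.min(axis=1).mean()

    rng = np.random.default_rng(0)
    weights = rng.random((5, 5, 3))      # stand-in for a trained 5x5 map
    data = rng.random((100, 3))          # stand-in for some input patterns

    mirrored = weights[:, ::-1, :]       # reverse the column order
    print(quantization_error(weights, data))
    print(quantization_error(mirrored, data))   # identical by symmetry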