I have a neural network running in PyBrain and I'm happy enough with its correctness; now I just want to improve the accuracy. Before I begin experimenting with the various parameters, however, I want to be sure I am starting this exploration from the same point each time.
If I understand correctly, PyBrain randomly initialises the network weights. How can I keep this randomness consistent, i.e. if nothing changes, I should get the same output each time I run the network? Then I can be certain that any improvement gained is a direct result of the parameter I altered.
I looked at this answer which recommends using NetworkWriter but I don't think this is really what I want.
I thought there would be some way of just seeding the network the same way each time I run it, but perhaps I'm mistaken.
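What I've tried so far is seeding the global random generators before building the network, on the assumption that PyBrain's weight initialisation draws from Python's `random` and/or NumPy's global RNG (I haven't confirmed this in the PyBrain source, so treat it as an assumption). A minimal sketch of the idea, using a NumPy draw to stand in for the network's initial weights:

```python
import random
import numpy as np

def seed_everything(seed=42):
    # Seed both global generators that PyBrain's weight
    # initialisation might draw from (assumption).
    random.seed(seed)
    np.random.seed(seed)

# First "run": seed, then draw some stand-in initial weights.
seed_everything(42)
first = np.random.randn(5)

# Second "run": reseeding with the same value should reproduce them.
seed_everything(42)
second = np.random.randn(5)

print(np.array_equal(first, second))  # -> True
```

If this is the right approach, I would presumably call `seed_everything()` immediately before `buildNetwork(...)` on every run, but I'm not sure whether PyBrain uses these global generators or its own internal one.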