
I have a neural network running in PyBrain and I'm happy enough with its correctness; now I just want to improve its accuracy. Before I begin experimenting with the various parameters, however, I want to be sure I am starting this exploration from the same point each time.

If I understand correctly, PyBrain randomly initialises the network weights. How can I keep this randomness consistent, i.e. if nothing changes then I should get the same output each time I run the network? Then I can be certain that any improvement gained is a direct result of the parameter I altered.

I looked at this answer, which recommends using NetworkWriter, but I don't think this is really what I want.

I thought there would be some way of just seeding the network the same way each time I run it, but perhaps I'm mistaken.

Philip O'Brien

1 Answer


You can initialize the network weights by passing it an array of weight values. This is best shown in this answer:

https://stackoverflow.com/a/14206213/5288735

If you want the weights to have "consistent randomness", seed the random number generator (e.g. with random.seed()) before generating your array of weight values. The same seed produces the same sequence of random numbers on every run, so the network starts from identical "random" weights each time, and any change in accuracy can be attributed to the parameter you altered.
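A minimal sketch of the idea, using only the standard library. The network size (13 weights) and the final `net._setParameters(...)` call are assumptions based on the linked answer, not code from this post; adjust `n_weights` to match `len(net.params)` for your topology.

```python
import random

# Hypothetical example: a 2-3-1 fully connected network with biases has
# (2*3 + 3) + (3*1 + 1) = 13 weights. Match this to your own network.
n_weights = 13

def seeded_weights(seed, n):
    """Return a reproducible list of n initial weights in [-1, 1)."""
    rng = random.Random(seed)  # local RNG, so global random state is untouched
    return [rng.uniform(-1, 1) for _ in range(n)]

weights = seeded_weights(42, n_weights)

# With a PyBrain network `net` of matching size, the linked answer's
# approach would then be (not run here):
#   net._setParameters(weights)
```

Because `seeded_weights` builds its values from a fixed seed, calling it again with the same arguments yields the exact same list, which is the "consistent randomness" described above.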

A. Dev