I got the results by running the code provided in this link: Neural Network – Predicting Values of Multiple Variables. I was able to compute losses, accuracy, etc. However, every time I run this code I get a new result. Is it possible to get the same (consistent) result?
3 Answers
The code is full of random.randint() calls everywhere! Furthermore, the weights are usually initialized randomly as well, and the batch_size also has a (fairly minor) influence on the result.
- Y_train, X_test, X_train are generated randomly.
- Using adam as the optimizer means you'll be performing stochastic gradient descent, with a random starting point for the iterations to converge from.
- A batch_size of 8 means you will run batches consisting of 8 randomly selected samples.
Solution:
- Set a random seed in your code so the same random values are generated every time, with np.random.seed().
- Doesn't cause much of an issue, apart from minor deviations.
- Same as 2.
If I find a way to have consistent sampling methods for the batch_size/epoch issue, I will edit my answer.
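On the batch-sampling point above: if the shuffle that builds the batches is itself seeded, the batch composition is identical across runs. A minimal sketch in plain NumPy (not tied to any particular framework; make_batches is a hypothetical helper for illustration):

```python
import numpy as np

def make_batches(n_samples, batch_size, seed):
    """Shuffle sample indices with a fixed seed and slice them into batches."""
    rng = np.random.default_rng(seed)       # seeded generator
    order = rng.permutation(n_samples)      # reproducible shuffle
    return [order[i:i + batch_size] for i in range(0, n_samples, batch_size)]

run1 = make_batches(32, 8, seed=1234)
run2 = make_batches(32, 8, seed=1234)

# Same seed -> identical batch composition on every run
assert all((a == b).all() for a, b in zip(run1, run2))
```

Keras shuffles batches internally, so this only illustrates the principle; in practice you seed the framework's own generator instead.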

- Hey, thanks for responding. Firstly, I am sorry that I posted the reference code and did not provide complete information. Actually, I made some modifications to the reference code by using my own data, which is not random. X_train, y_train, X_test, y_test are all generated from another source. Despite using my own data, I am still getting different results every time. – Mitra Lanka Oct 04 '19 at 18:07
- Please read my answer thoroughly, since it's not the same as where you placed your other comment. – Celius Stingher Oct 04 '19 at 18:07
- Sorry about that! I am trying to understand your explanation but could not figure it out. In order to have a sampling method, do I have to implement different combinations of batch size and number of epochs? – Mitra Lanka Oct 04 '19 at 18:30
- The thing is, batch_size and epochs are made by using random sampling techniques and this can affect the overall performance. Here you can find more information: https://stackoverflow.com/questions/4752626/epoch-vs-iteration-when-training-neural-networks – Celius Stingher Oct 04 '19 at 18:34
- Thank you for the explanation and the link. – Mitra Lanka Oct 06 '19 at 03:13
There are lots of random arrays in there. Use np.random.seed()
to get the same ones each time. For example:
import numpy as np

np.random.seed(42)
for _ in range(3):
    print(np.random.random(3))
Every time you run this code, you'll get the same result. On my machine:
[0.37454012 0.95071431 0.73199394]
[0.59865848 0.15601864 0.15599452]
[0.05808361 0.86617615 0.60111501]
Note that lots of other bits of the machine learning pipeline use randomization too. For example:
- Splitting into train, validation and test datasets with train_test_split().
- Setting initial weights in a neural network.
- Optimization pathways.
Most ML functions allow you to pass a seed as an argument. Have a look in the documentation. Depending on what you are doing, and which libraries you're using, you may or may not be able to make the entire pipeline reproducible.
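For instance, scikit-learn's train_test_split() takes a random_state argument; pinning it makes the split identical across runs (a quick sketch, assuming scikit-learn is installed):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# random_state pins the shuffle, so both calls produce the same split
X_tr1, X_te1, y_tr1, y_te1 = train_test_split(X, y, test_size=0.3, random_state=42)
X_tr2, X_te2, y_tr2, y_te2 = train_test_split(X, y, test_size=0.3, random_state=42)

assert (X_te1 == X_te2).all() and (y_tr1 == y_tr2).all()
```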
You might also like this article or this one about getting reproducible results with Keras.

- Hey, thanks for responding. Firstly, I am sorry that I posted the reference code and did not provide complete information. Actually, I made some modifications to the reference code by using my own data, which is not random. X_train, y_train, X_test, y_test are all generated from another source. Despite using my own data, I am still getting different results every time. – Mitra Lanka Oct 04 '19 at 18:04
- Thanks for providing the links. I have used those commands, and now the results are stable. – Mitra Lanka Oct 06 '19 at 03:14
You have to set the seed for several different things: NumPy, PyTorch, CUDA, and Python's random module. Then your program gives the same result. Use the function below:
import random
import numpy as np
import torch

def set_random_seed(random_seed):
    torch.manual_seed(random_seed)
    torch.cuda.manual_seed(random_seed)
    torch.cuda.manual_seed_all(random_seed)  # if using multi-GPU
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    np.random.seed(random_seed)
    random.seed(random_seed)
Then call it with a seed: set_random_seed(1234)
This should give you the same result from run to run. You can change the seed if you want; different seeds produce different results.
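The seed-dependence point can be illustrated with Python's standard library alone (a minimal sketch, independent of PyTorch; draw is a hypothetical helper):

```python
import random

def draw(seed, n=5):
    """Seed the generator, then draw n pseudo-random numbers."""
    random.seed(seed)
    return [random.random() for _ in range(n)]

assert draw(1234) == draw(1234)  # same seed -> identical sequence
assert draw(1234) != draw(5678)  # different seed -> different sequence
```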

- Please don't add the [same answer to multiple questions](//meta.stackexchange.com/q/104227). Answer the best one and flag the rest as duplicates, once you earn enough reputation. If it is not a duplicate, [edit] the answer and tailor the post to the question. – double-beep Aug 12 '23 at 11:14