I'm running a series of neural networks (the Keras library with a TensorFlow backend), and these are the times it took to train each neural network in a Jupyter Notebook:
ELAPSED TIME: 2.7005105018615723
0
ELAPSED TIME: 2.4810903072357178
1
ELAPSED TIME: 2.801435708999634
2
ELAPSED TIME: 2.6753993034362793
3
ELAPSED TIME: 2.8625667095184326
4
ELAPSED TIME: 2.5828065872192383
5
while later on in the same session I get:
ELAPSED TIME: 5.062163829803467
0
ELAPSED TIME: 5.162402868270874
1
ELAPSED TIME: 5.301288366317749
2
ELAPSED TIME: 5.386904001235962
3
ELAPSED TIME: 6.126806020736694
4
The program consists of a function that trains a separate neural network model on each dataset and exports only its final training accuracy (saved to another file), roughly as in the sketch below.
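Here is a simplified sketch of that function; the architecture, hyperparameters, and variable names are placeholders rather than my exact setup:

```python
from keras.models import Sequential
from keras.layers import Dense

def train_one(x_train, y_train):
    """Build and train one small network; return the model and its
    final training accuracy. The architecture is a placeholder."""
    model = Sequential([
        Dense(64, activation='relu', input_shape=(x_train.shape[1],)),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    history = model.fit(x_train, y_train, epochs=10, verbose=0)
    # Older Keras versions record this under 'acc' instead of 'accuracy'.
    final_acc = history.history['accuracy'][-1]
    return model, final_acc
```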
I thought the later networks were taking longer to train because the program was consuming too much memory, so I deleted each model (with the del keyword) after obtaining its training accuracy, but that doesn't seem to be doing much. The calling loop with that cleanup attempt is sketched below.
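Again simplified, with `datasets` and the result-saving step as placeholders:

```python
import time

results = []
for i, (x_train, y_train) in enumerate(datasets):
    start = time.time()
    model, final_acc = train_one(x_train, y_train)
    print("ELAPSED TIME:", time.time() - start)
    print(i)

    results.append(final_acc)  # later written out to another file

    # Attempted fix: delete the model once its accuracy is recorded.
    # This doesn't seem to reduce the slowdown.
    del model
```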
If I restart the Jupyter Notebook kernel, the training time for each network drops back to about 2 seconds (the original duration), but the later models again take progressively longer to run.
What are the possible reasons for this, and what solutions could I implement?
NOTE: I did not include my full code because it would make this post denser (the snippets above are simplified stand-ins), but I can upload it if necessary.