I am using Keras/TensorFlow (GPU) to build time series forecasting models. I have hundreds of time series and want to train a separate network for each of them.
Running a few time series in a row is fine, but when I run hundreds or thousands of them, the training time of each model increases slowly (but surely). Is there a simple reason for this?
Below is code to reproduce the issue (note that it could take a while to run).
https://gist.github.com/mannsi/c5666c4b786c35c3443beea6d13a32fe
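The loop is essentially of this shape (a minimal sketch, not the exact gist code; the layer sizes, data shapes and number of epochs are just placeholders): a new model is built, compiled and fitted for every series, and each iteration is timed.

```python
import time
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

n_iterations = 500
samples, timesteps, features = 200, 50, 1

for i in range(n_iterations):
    # dummy data standing in for one time series
    X = np.random.rand(samples, timesteps, features)
    y = np.random.rand(samples, 1)

    start = time.time()
    model = Sequential()
    model.add(LSTM(32, input_shape=(timesteps, features)))
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam')
    model.fit(X, y, nb_epoch=5, batch_size=32, verbose=0)  # nb_epoch: Keras 1.x argument name
    print('iteration %d took %.1fs' % (i, time.time() - start))
```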
On my machine the first iteration takes 10s, iteration #250 takes 16s and iteration #500 takes 25s.
I am new to neural networks and Keras/TF, so maybe this is totally normal, but I did not factor this in when doing my back-of-the-envelope time calculations.
System info:
- python 3.5
- keras (1.2.2)
- tensorflow-gpu (1.0.0)
EDIT: I tested the same code with the TensorFlow CPU backend and see the exact same behavior there.