I've got a problem where I'd like to train on multiple time series. I have multi-dimensional input and a univariate output, and the feature I'm trying to predict is not included in the input, so I want `features[:now] -> LSTM -> target[now]`, for all values of `now` in the series. My series are of varying lengths.
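To make the setup concrete, here's what a single series looks like in my case (a sketch; the sizes and names are placeholders):

```python
import numpy as np

T, n_features = 100, 8                     # placeholder sizes for one series
features = np.random.rand(T, n_features)   # multi-dimensional input
target = np.random.rand(T)                 # univariate output, not among the inputs
# Goal: for every `now`, predict target[now] from features[:now].
```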
I gather from Jason Brownlee's posts that input is given as a tensor of shape `(n_samples, n_timesteps, n_features)`, and I've found that I can handle series of different lengths by passing `None` for `n_timesteps` in the model's `input_shape`. Great.
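For concreteness, here's a minimal sketch of that model definition (the layer sizes are placeholders, and I'm assuming the `tf.keras` API):

```python
from tensorflow import keras
from tensorflow.keras import layers

n_features = 8  # placeholder: dimensionality of the input features

# None for n_timesteps lets the model accept series of any length.
model = keras.Sequential([
    layers.Input(shape=(None, n_features)),
    layers.LSTM(32),    # returns only the final hidden state
    layers.Dense(1),    # univariate target
])
model.compile(optimizer="adam", loss="mse")
```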
But I don't want to have to grab `(features[:i], target[i])` for every `i` and call `fit` over and over and over again, when the network is perfectly primed to take `(features[i+1], target[i+1])` and find gradients right after step `i`.
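Concretely, this is the kind of loop I want to avoid (a sketch; `dataset` is a hypothetical iterable of `(features, target)` pairs per series, and `model` is defined as above):

```python
import numpy as np

# One fit() call per prefix: the whole prefix gets re-processed every time.
for series_features, series_target in dataset:
    T = len(series_target)
    for i in range(1, T):
        x = series_features[:i][np.newaxis, ...]   # shape (1, i, n_features)
        y = series_target[i:i + 1].reshape(1, 1)   # shape (1, 1)
        model.fit(x, y, epochs=1, verbose=0)
```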
How can I use all my targets efficiently, without having to reset state and refit on tons of different views of my data? And why is the Keras documentation so bad that it doesn't even specify the dimensionality that `x` and `y` are supposed to take?