In an LSTM network I'm passing a feature array of the form
X.shape
(350000, 240, 1)
With a binary categorical target of the form
y.shape
(350000, 2)
How can I estimate the optimal batch size to minimize training time without losing accuracy?
Here's the setup:
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
from keras.callbacks import EarlyStopping

model = Sequential()
model.add(LSTM(25, input_shape=(240, 1)))
model.add(Dropout(0.1))
model.add(Dense(2, activation='softmax'))
model.compile(loss="categorical_crossentropy", optimizer="rmsprop")  # matches the 2-unit softmax, one-hot targets
model.fit(X_s, y_s, epochs=1000, batch_size=512, validation_split=0.2, verbose=1,
          shuffle=False, callbacks=[EarlyStopping(monitor="val_loss", patience=10)])
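There is no closed-form answer; a common empirical approach is a short sweep: train a few epochs at each candidate batch size (usually powers of two), record seconds per epoch and validation loss, then pick the fastest batch size whose validation loss stays within a small tolerance of the best one observed. A minimal sketch of just the selection step, assuming the timing and loss numbers came from short Keras runs (all numbers and the name `sweep_results` below are made up for illustration):

```python
def pick_batch_size(sweep_results, tol=0.01):
    # sweep_results: {batch_size: (seconds_per_epoch, val_loss)}
    # Keep batch sizes whose val loss is within `tol` of the best,
    # then return the fastest of those.
    best_loss = min(loss for _, loss in sweep_results.values())
    acceptable = {bs: secs for bs, (secs, loss) in sweep_results.items()
                  if loss <= best_loss + tol}
    return min(acceptable, key=acceptable.get)

# Hypothetical numbers from short (e.g. 3-epoch) runs at each size:
sweep_results = {
    64:  (120.0, 0.405),
    128: (70.0,  0.406),
    256: (45.0,  0.410),
    512: (30.0,  0.431),  # faster, but noticeably worse val loss
}
print(pick_batch_size(sweep_results))  # -> 256 under these made-up numbers
```

Loosening `tol` trades accuracy for speed (with `tol=0.05` the sketch above would pick 512), which makes the speed/accuracy trade-off in the question explicit.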