I am building a convolutional network that takes a large 3D array as input. Because the array is so big (60000, 100, 100), my computer raises a MemoryError when I initialize the input. Can I train the model in chunks, e.g. feeding it a (1000, 100, 100) slice 60 times (see the sketch below the code), so that data already used for training does not have to stay in memory?
I am running into this because I am working with a huge dataset and vectorizing the words in it.
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Conv1D, Dense, Dropout, Flatten
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping

X_train = np.zeros((train.shape[0], length, vector_size), dtype=K.floatx())  # this line raises the MemoryError; the array has shape (60000, 100, 100)
# some other code computes the word embeddings and fills X_train and Y_train
convmodel = Sequential()
convmodel.add(Conv1D(32, kernel_size=3, activation='elu', padding='same', input_shape=(length, vector_size)))  # length = 100, vector_size = 100
convmodel.add(Conv1D(32, kernel_size=3, activation='elu', padding='same'))
convmodel.add(Dropout(0.25))
convmodel.add(Conv1D(32, kernel_size=2, activation='elu', padding='same'))
convmodel.add(Conv1D(32, kernel_size=2, activation='elu', padding='same'))
convmodel.add(Dropout(0.25))
convmodel.add(Flatten())
convmodel.add(Dense(256, activation='tanh'))
convmodel.add(Dropout(0.3))
convmodel.add(Dense(2, activation='softmax'))
convmodel.compile(loss='categorical_crossentropy',
                  optimizer=Adam(lr=0.0001, decay=1e-6),
                  metrics=['accuracy'])
convmodel.fit(X_train, Y_train,  # X_train has shape (60000, 100, 100)
              batch_size=128,
              shuffle=True,
              epochs=10,
              validation_data=(X_test, Y_test),
              callbacks=[EarlyStopping(min_delta=0.00025, patience=2)])
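To make the idea concrete, this is roughly what I have in mind. Here build_chunk is a hypothetical helper (not written yet) that would compute the embeddings and labels for just one slice of the raw data, so that only a (1000, 100, 100) array is in memory at a time. I don't know if repeatedly calling fit like this is a valid way to train:

chunk_size = 1000
n_chunks = train.shape[0] // chunk_size  # 60 chunks of 1000 samples each

for epoch in range(10):
    for i in range(n_chunks):
        # build_chunk is hypothetical: it would vectorize only rows
        # i*chunk_size .. (i+1)*chunk_size instead of the whole dataset
        X_chunk, Y_chunk = build_chunk(i * chunk_size, (i + 1) * chunk_size)
        convmodel.fit(X_chunk, Y_chunk, batch_size=128, epochs=1, shuffle=True)

Is this approach reasonable, or is there a cleaner built-in way to feed the data in pieces?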