This is the first time I'm training a TensorFlow model on my own data, and unlike in the projects where I used TensorFlow's ready-made datasets, it gives me the following warning:
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 2 batches). You may need to use the repeat() function when building your dataset.
2020-12-20 20:26:47.448822: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
[[IteratorGetNext/_2]]
The warning doesn't appear during training or when training finishes; it appears every time I call model.predict(). It doesn't seem to depend on the model's hyperparameters, as I've tested it with different numbers of layers and different layer types. Note that despite the warning, the model still seems to give correct results.
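From what I can tell, the repeat() the warning mentions applies when feeding a tf.data.Dataset instead of plain NumPy arrays. Here's a minimal sketch of what I understand it to mean (the sample count and batch size are made up, not my real data):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for my data: 10 samples of 56x56 binary images, 3 classes.
X = np.random.randint(0, 2, size=(10, 56, 56, 1)).astype("float32")
y = np.random.randint(0, 3, size=(10,))

# Without .repeat(), this dataset yields ceil(10 / 4) = 3 batches and then
# raises end-of-sequence; asking fit() for more batches than that triggers
# the "ran out of data" warning.
ds = tf.data.Dataset.from_tensor_slices((X, y)).batch(4)

# .repeat() makes the dataset loop indefinitely, so steps_per_epoch * epochs
# batches are always available.
ds_repeated = ds.repeat()
```

Since I pass NumPy arrays straight to fit(), I'm not sure whether this applies to my case.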
Information about my dataset:
Number of samples: 4200
Type of data: 56x56 matrices with 0's or 1's
Train/test splits: None, I'm doing the splitting and testing manually
Here's the code for training the model:
from tensorflow import keras

model = keras.Sequential()
model.add(keras.layers.Conv2D(64, (3,3), input_shape=(56,56,1), activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(keras.layers.Conv2D(64, (3,3), activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(64, activation='relu'))
model.add(keras.layers.Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10)
model.save('test_model')
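For context, the predict() call that triggers the warning looks roughly like this (a sketch: the model below is a tiny stand-in with the same input/output shapes as mine, since in my script I load the saved 'test_model' instead):

```python
import numpy as np
from tensorflow import keras

# Stand-in model so the snippet is self-contained: same 56x56x1 input and
# 3-class softmax output as my real network.
model = keras.Sequential([
    keras.Input(shape=(56, 56, 1)),
    keras.layers.Flatten(),
    keras.layers.Dense(3, activation="softmax"),
])

# A single 56x56 binary image, shaped (batch, height, width, channels).
sample = np.random.randint(0, 2, size=(1, 56, 56, 1)).astype("float32")
probs = model.predict(sample)  # the warning appears around this call for me
```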
EDIT: I forgot to add how I'm defining X_train and y_train, so here's that part of the code:
import os
import random

import cv2 as cv
import numpy as np
import sklearn.model_selection

Directory = "Resources"
Categories = ["circles", "squares", "triangles"]
training_data = []
for category in Categories:
    path = os.path.join(Directory, category)
    category_num = Categories.index(category)
    for img in os.listdir(path):
        img_array = cv.imread(os.path.join(path, img), 0)  # grayscale
        training_data.append([img_array, category_num])
random.shuffle(training_data)
X = []
y = []
for features, label in training_data:
    X.append(np.array(features))
    y.append(np.array(label))
X = np.array(X).reshape(-1, 56, 56, 1)
y = np.array(y)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.2)
(The images in each category are, as I said, 56x56 matrices of 1's and 0's.)