
I've been trying to train a pre-existing model on an architectural dataset, and no matter which model I try or what changes I make, validation accuracy never gets past 60-66% (maybe 70% at best, once), while training accuracy reaches up to 97%.

So far I have tried ResNet50, ResNet152, InceptionResNetV2, and VGG16, and I have also tried different optimizers and different image augmentations. The dataset I am using is https://www.kaggle.com/datasets/dumitrux/architectural-styles-dataset; the only difference is that I trimmed off a few classes, in particular:

  • Achaemenid architecture
  • American Foursquare architecture
  • Ancient Egyptian architecture

Without those classes the dataset contains 8935 images. For the validation split I am using 20%, resulting in 7163 images for training and 1790 images for validation. The code I am using:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_size = (224, 224)
batch_size = 12

aug = ImageDataGenerator(
        #rescale=1./255,
        rotation_range=10,
        #zoom_range=0.10,
        width_shift_range=0.1,
        height_shift_range=0.1,
        #shear_range=0.10,
        horizontal_flip=True,
        fill_mode="constant",
        preprocessing_function=tf.keras.applications.vgg16.preprocess_input)

raw = ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input)

train_generator = aug.flow_from_directory(
        '/content/drive/MyDrive/datasetV2',
        target_size=image_size,
        batch_size=batch_size,
        class_mode='sparse')
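# class_mode='sparse' yields integer class labels, which is what the
# SparseCategoricalCrossentropy loss in model.compile() below expects.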

validation_generator = raw.flow_from_directory(
        '/content/drive/MyDrive/dataset_validationV2',
        target_size=image_size,
        batch_size=batch_size,
        class_mode='sparse',
        seed=1337)

# model = tf.keras.applications.resnet.ResNet152(
#       include_top=True,
#       weights='imagenet',
#       #weights=None,
#       input_tensor=None,
#       input_shape=None,
#       pooling=None,
#       classes=1000,
#       #classes=16
#    )
# model = tf.keras.applications.inception_resnet_v2.InceptionResNetV2(
#     include_top=True,
#     weights="imagenet",
#     input_tensor=None,
#     input_shape=None,
#     pooling=None,
#     classes=1000,
#     classifier_activation="softmax"
# )
model = tf.keras.applications.VGG16(
    include_top=True,
    weights="imagenet",
    input_tensor=None,
    input_shape=None,
    pooling=None,
    classes=1000,
    classifier_activation="softmax",
)
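# Note: include_top=True with classes=1000 keeps the original 1000-way
# ImageNet classifier head on top of the network.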
# model = tf.keras.applications.resnet50.ResNet50(
#     include_top=True,
#     weights='imagenet',
#     input_tensor=None,
#     input_shape=None,
#     pooling=None,
#     classes=1000,
# )
epochs = 40

callbacks = [
    keras.callbacks.ModelCheckpoint("save_at_{epoch}.h5"),
]
model.compile(optimizer='adagrad', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'])
history = model.fit(train_generator, steps_per_epoch=192, epochs=epochs, validation_data=validation_generator, callbacks=callbacks)
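# Note: with batch_size=12 and steps_per_epoch=192, each epoch draws
# 12 * 192 = 2304 augmented samples from the 7163 training images.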

Note that everything commented out in the code is something I tried, with no real change to the resulting validation accuracy. Of course, for each model I used its own preprocessing and changed the input size to match the one each model is designed for. I have also tried different optimizers, batch sizes, and numbers of steps, with the same results.
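For clarity, here is a minimal sketch of how the preprocessing function and input size get swapped per model (the constructors and preprocess_input functions are the standard Keras ones; the mapping itself is just illustrative):

# Illustrative per-model configuration: each architecture paired with its
# own preprocess_input function and the input size it was designed for.
model_configs = {
    'vgg16': (tf.keras.applications.VGG16,
              tf.keras.applications.vgg16.preprocess_input, (224, 224)),
    'resnet152': (tf.keras.applications.ResNet152,
                  tf.keras.applications.resnet.preprocess_input, (224, 224)),
    'inception_resnet_v2': (tf.keras.applications.InceptionResNetV2,
                            tf.keras.applications.inception_resnet_v2.preprocess_input,
                            (299, 299)),
}

constructor, preprocess_fn, image_size = model_configs['vgg16']
model = constructor(include_top=True, weights='imagenet')
aug = ImageDataGenerator(horizontal_flip=True,
                         preprocessing_function=preprocess_fn)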

Here is a graph from training InceptionResNetV2 with a batch size of 32 and 256 steps, using the same image augmentations you see in the code (the optimizer used was Adagrad; Adamax only made the validation accuracy fluctuate up and down more).

And here is a graph from ResNet152 with the Adamax optimizer (same batch size and steps, of course with the image dimensions and preprocessing that ResNet152 requires).

Another example is this graph from training the VGG16 model with a batch size of 12, 192 steps, and no image augmentation, just to rule out the augmentations as what is ruining my validation accuracy. Once again it got stuck after 60%; if I let it run for another 40 epochs I might get to 65%, but the gains diminish, eventually hovering around the same percentage.

All the trainings go like this: after 20-30 epochs, val_accuracy stays between 60-68%.

I have looked at various Stack Overflow posts and GitHub issues about similar problems, and none of the "fixes" I found have helped. Does anyone know a solution?

  • Can you update your question with how you defined your model? – Djinn Jul 22 '22 at 14:25
  • Not sure I know what you mean; everything related to the model is in the code, taken directly from https://keras.io/api/applications/vgg/. For the other models commented out in the code it's the same, also from Keras. @Djinn – DaveK Jul 22 '22 at 14:45
  • You still have to load the model. Did you do any transfer learning/fine-tuning? Your issue is a fine-tuning problem. It helps to see exactly how you did that, or else it could be anything (see the sketch after these comments). – Djinn Jul 22 '22 at 15:19
  • The only thing not included in the code is the imports; otherwise this is all there is. If it's the imports you want, I can include those too. – DaveK Jul 22 '22 at 15:25
  • Is there a reason you're doing augmentation on your train data only, and not also on your validation data? And why are you using `class_mode="sparse"` instead of "binary" or "categorical"? "sparse" doesn't give labels `y`, just input `x`. – Djinn Jul 22 '22 at 19:00
  • Sometimes augmenting validation data can help the model, but I know some theories say to keep validation untouched. – Djinn Jul 22 '22 at 19:13
  • I did use augmentation on validation too before; it made no difference, or at least none that was noticeable. Plus, I didn't think augmenting validation was necessary or even a good idea. As for the class mode: if I understood it right, I cannot use binary because I have more than two classes, and I suppose I can try categorical, but from what I gathered class_mode should not have any impact on the success rate. @Djinn Also, "sparse" does return labels, just in the form of integers. – DaveK Jul 22 '22 at 19:15
  • Try with categorical. Usually that's used with softmax. [According to this Stack Overflow question, `sparse` in practice functions just like `binary`.](https://stackoverflow.com/questions/59439128/what-does-class-mode-parameter-in-keras-image-gen-flow-from-directory-signify) – Djinn Jul 22 '22 at 20:02
  • It seems like I need to change more than just the class mode to switch to categorical, so once I figure out everything that needs to change, I will post whether it helped the accuracy. – DaveK Jul 22 '22 at 20:26
  • To change to categorical I had to remove the starting weights (so `weights=None` instead of `weights="imagenet"`) to even get it running, and all that did was leave it stuck at 23-28% instead of 60-68% as before; [graph](https://imgur.com/a/eYAaNUN) with the results. – DaveK Jul 22 '22 at 21:48
  • You definitely need those starting weights. Maybe you need to add layers? – Djinn Jul 22 '22 at 21:52
  • Doubtful about adding layers: in this [GitHub archive](https://github.com/dumitrux/architectural-style-recognition/blob/master/src/architectural-style-recognition.ipynb) they trained on ResNet50 as I did, and their validation loss is orders of magnitude better than mine, and the only difference I see in their code is that they are using FastAI instead of Keras and TensorFlow. – DaveK Jul 22 '22 at 21:57
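For reference, a minimal sketch of the transfer-learning setup discussed in the comments above: freeze the pretrained convolutional base and train a new classification head sized to the dataset, instead of reusing the 1000-class ImageNet head (num_classes, the Dense width, and the Dropout rate below are illustrative, not values from the question):

num_classes = 16  # illustrative; set to the trimmed dataset's class count

base = tf.keras.applications.VGG16(include_top=False,
                                   weights='imagenet',
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional base

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(num_classes, activation='softmax'),
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])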

0 Answers