
I have implemented the standard code from the imageai custom prediction readthedocs tutorial, using my own images with two classes - roughly 700 images per class in train and 150 in test - and have ended up with a model file named "model_ex-077_acc-0.941176.h5".

Does this mean my model is 94.1% accurate on the test data?

I'm asking because when I classify my training (or test) images, again using the standard code, the model always predicts the same single class with 100% probability, and I can't figure out why that would happen if my model is 94.1% accurate.
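One thing I thought of doing to double-check that number is to evaluate the saved model on the test folder directly with tf.keras. This is only a sketch - I'm assuming the .h5 checkpoint is a complete Keras model and that a plain 1/255 rescale matches whatever preprocessing imageai applies internally:

import tensorflow as tf

# Load the checkpoint the trainer saved and re-compile it so evaluate() reports accuracy
model = tf.keras.models.load_model("model_ex-077_acc-0.941176.h5")
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Feed the same test folder the trainer used (assuming 1/255 rescaling)
test_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255).flow_from_directory(
    "idenprof/test",
    target_size=model.input_shape[1:3],
    batch_size=32,
    class_mode="categorical",
    shuffle=False,
)
print(model.evaluate(test_gen))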

The standard code I implemented for training (in a virtual environment with tensorflow==2.4.0, imageai==2.1.6, and all their dependencies) is:

from imageai.Classification.Custom import ClassificationModelTrainer
import os


dir_path = os.path.dirname(os.path.realpath(__file__))


model_trainer = ClassificationModelTrainer()
model_trainer.setModelTypeAsInceptionV3()
model_trainer.setDataDirectory(dir_path+"/idenprof")
model_trainer.trainModel(num_objects=2, num_experiments=100, enhance_data=True, batch_size=32, show_network_summary=True)
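
For context, my dataset folder follows the train/test layout from the docs (idenprof/train/brick, idenprof/train/circle, idenprof/test/brick, idenprof/test/circle). This is roughly how I confirmed the image counts per folder (the class folder names are mine):

import os

dir_path = os.path.dirname(os.path.realpath(__file__))

# Count the images in each class folder of the train/test split
for split in ("train", "test"):
    for cls in ("brick", "circle"):
        folder = os.path.join(dir_path, "idenprof", split, cls)
        print(split, cls, len(os.listdir(folder)))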

and the code for prediction is:

from imageai.Classification.Custom import CustomImageClassification
import os
dir_path = os.path.dirname(os.path.realpath(__file__))
imageList = os.listdir(dir_path+"/idenprof/test/circle/")

prediction = CustomImageClassification()
prediction.setModelTypeAsResNet50()
prediction.setModelPath("model_ex-077_acc-0.941176.h5")
prediction.setJsonPath("model_class.json")
prediction.loadModel(num_objects=2)

for i in imageList:
    predictions, probabilities = prediction.classifyImage("idenprof/test/circle/"+i)

    for eachPrediction, eachProbability in zip(predictions, probabilities):
        print(eachPrediction , " : " , eachProbability)

where my two classes are "brick" and "circle" and the output is always "brick" with 100% probability.
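
To try to see what is going on, I also loaded the checkpoint directly with tf.keras and classified one test image by hand. Again only a sketch - I'm assuming the .h5 is a full Keras model, that model_class.json (written by the trainer) maps stringified class indices to class names, that a 1/255 rescale matches imageai's preprocessing, and "some_image.jpg" is just a placeholder for one of my test files:

import json
import numpy as np
import tensorflow as tf

# Load the saved checkpoint and read its expected input size from the model itself
model = tf.keras.models.load_model("model_ex-077_acc-0.941176.h5")
_, h, w, _ = model.input_shape

# Map class indices back to class names using the JSON the trainer wrote
with open("model_class.json") as f:
    class_map = json.load(f)

# "some_image.jpg" is a placeholder for one of the test images
img = tf.keras.preprocessing.image.load_img("idenprof/test/circle/some_image.jpg", target_size=(h, w))
x = np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0) / 255.0  # assuming 1/255 rescaling

probs = model.predict(x)[0]
for i, p in enumerate(probs):
    print(class_map[str(i)], " : ", p)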

  • Please provide some code with your post. It is always beneficial to provide a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Without any more context (and me not being experienced with `imageai` in particular): I am guessing that this means that your model is `94.1%` accurate on the test set. I think you are confusing the performance on the train set with the performance on the test set. – Björn Jul 23 '21 at 11:23
  • No problem. You technically need the 2000ish images to replicate. – bengelha Jul 23 '21 at 12:00

0 Answers