I implemented the standard custom-prediction code from the ImageAI readthedocs documentation, using my own images split into two classes (roughly 700 images per class for training and 150 for testing), and training produced a model file named "model_ex-077_acc-0.941176.h5".
Does this mean my model is 94.1% accurate on the test data?
I'm asking because when I run prediction on my training (or test) data, again using the standard code, the model always predicts the same class with 100% probability. I can't figure out why that would happen if my model really is 94.1% accurate.
The standard code I implemented (in a virtual environment with tensorflow==2.4.0, imageai==2.1.6, and all their dependencies) to train is:
from imageai.Classification.Custom import ClassificationModelTrainer
import os
dir_path = os.path.dirname(os.path.realpath(__file__))
model_trainer = ClassificationModelTrainer()
model_trainer.setModelTypeAsInceptionV3()
model_trainer.setDataDirectory(dir_path+"/idenprof")
model_trainer.trainModel(num_objects=2, num_experiments=100, enhance_data=True, batch_size=32, show_network_summary=True)
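For context, my data directory follows the train/test layout with one sub-folder per class that setDataDirectory expects; with my class names it looks roughly like this (image counts are the approximate ones mentioned above):

```
idenprof/
├── train/
│   ├── brick/    (~700 images)
│   └── circle/   (~700 images)
└── test/
    ├── brick/    (~150 images)
    └── circle/   (~150 images)
```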
and the code to predict is:
from imageai.Classification.Custom import CustomImageClassification
import os
dir_path = os.path.dirname(os.path.realpath(__file__))
imageList = os.listdir(dir_path+"/idenprof/test/circle/")
prediction = CustomImageClassification()
prediction.setModelTypeAsResNet50()
prediction.setModelPath("model_ex-077_acc-0.941176.h5")
prediction.setJsonPath("model_class.json")
prediction.loadModel(num_objects=2)
for i in imageList:
    predictions, probabilities = prediction.classifyImage("idenprof/test/circle/" + i)
    for eachPrediction, eachProbability in zip(predictions, probabilities):
        print(eachPrediction, " : ", eachProbability)
Here my two classes are "brick" and "circle", and the output is always "brick" with 100% probability.