
I'm training YOLOv4 for human detection on a custom dataset. I used this command to train it:

!./darknet detector train data/obj.data cfg/custom-yolov4-detector.cfg yolov4.conv.137 -dont_show -map

At the end of the training, it gives this chart:

[Training chart: loss and validation mAP over iterations]

Validation gives 97% accuracy at most, but when I run the model on the test data (video recordings) it gives approximately 80% accuracy. Is it overfitting? How can I solve this problem? I thought the accuracy should keep increasing in the chart.

John Amell

1 Answer


This is not overfitting; it is not surprising that accuracy on the test set is lower than on the validation set. The model learned from the training set, which is usually closer in distribution to the validation set, and the model is often also tuned against the validation set, so it is expected to perform better there (this is true for every ML model).
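If you want a directly comparable number on your test data, you can run darknet's built-in mAP evaluator against a test list instead of the validation list. A minimal sketch, assuming you have a labeled test set listed in a hypothetical data/test.txt and a copy of obj.data (here called data/obj-test.data) whose valid= line points at it; the _best.weights file is written by darknet when training with -map:

!./darknet detector map data/obj-test.data cfg/custom-yolov4-detector.cfg backup/custom-yolov4-detector_best.weights

This computes test-set mAP the same way as the validation number in the chart, which is a fairer comparison than eyeballing detections in a video.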

Overfitting occurs during training when accuracy on the training set keeps increasing while accuracy on the validation set decreases between epochs (the overfitting is with respect to the training set).
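If you want to check for this numerically rather than by eye, here is a minimal Python sketch with made-up numbers. Darknet's chart does not export these values, so the assumption is that you collected (iteration, train loss, validation mAP) yourself, e.g. by running ./darknet detector map on the intermediate weights saved in backup/:

# Minimal sketch: flag a possible overfitting point from per-checkpoint
# metrics. The records below are hypothetical example values, not real
# training output.

records = [
    # (iteration, train_loss, val_map)
    (1000, 2.10, 0.62),
    (2000, 1.40, 0.78),
    (3000, 0.95, 0.84),
    (4000, 0.70, 0.83),
    (5000, 0.55, 0.79),  # val mAP falls again while loss still drops
]

PATIENCE = 2  # consecutive drops in val mAP before we suspect overfitting

drops = 0
best_map = 0.0
for iteration, loss, val_map in records:
    if val_map > best_map:
        best_map = val_map
        drops = 0
    else:
        drops += 1
    if drops >= PATIENCE:
        print(f"possible overfitting from iteration {iteration}: "
              f"val mAP fell {PATIENCE} checkpoints in a row "
              f"(best was {best_map:.2f}) while train loss = {loss:.2f}")
        break
else:
    print("no sustained validation drop -- curve looks normal")

With a rule like this, a brief dip in mAP that recovers (as in your chart) is not flagged; only a sustained decline while the training loss keeps falling would be.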

Dr.Haimovitz
  • In the chart, the red line is validation accuracy. At some points it decreases. Is that normal? – John Amell Apr 12 '21 at 09:10
  • Yes, the model is learning to generalize to unseen data (the test set), so it is normal that it performs worse on some batches of input in order to improve overall. – Dr.Haimovitz Apr 12 '21 at 09:13
  • BTW, this is not an accuracy graph but the model's loss function, which is different. – Dr.Haimovitz Apr 12 '21 at 09:15
  • This is a training chart. I know that it shows mAP and loss. The sources I read say that validation mAP should increase and the loss should decrease, but my training has ups and downs. – John Amell Apr 12 '21 at 09:19
  • It looks normal – Dr.Haimovitz Apr 12 '21 at 09:23