
I used NVIDIA DIGITS to train an image classifier. The training images are in JPG format. DIGITS uses Pillow to load images.

I noticed that accuracy differs noticeably at inference time if the JPG images are loaded with OpenCV instead of Pillow. I suspect this is due to differences in JPEG decoding. Since my classifier is going to be used in a live app where images are captured from a webcam with OpenCV, I would like to know whether this difference could be a problem and, if so, how I can overcome it.
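One way to quantify the decoder difference is to load the same file with both libraries, align the channel order, and diff the raw arrays; Pillow and OpenCV may link different JPEG decoder builds (e.g. libjpeg vs. libjpeg-turbo), which can yield slightly different pixel values from the same file. A minimal sketch, assuming a placeholder file name `sample.jpg` (substitute one of the training images):

```python
import cv2
import numpy as np
from PIL import Image

path = "sample.jpg"  # placeholder: use one of your training images

# OpenCV decodes directly to a BGR uint8 array.
img_cv = cv2.imread(path)

# Pillow decodes to RGB; reverse the channel axis so both arrays are BGR.
img_pil = np.array(Image.open(path).convert("RGB"))[:, :, ::-1]

# Any remaining difference comes from the JPEG decoding itself,
# not from the channel order.
diff = img_cv.astype(np.int16) - img_pil.astype(np.int16)
print("max abs diff:    ", np.abs(diff).max())
print("mean abs diff:   ", np.abs(diff).mean())
print("pixels differing:", (diff != 0).any(axis=-1).mean())
```

If the reported differences are only around one intensity level, the two decoders largely agree and the accuracy gap likely lies elsewhere in the pipeline.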

EDIT: I know that Pillow uses RGB channel order and OpenCV uses BGR, but the difference is not due to this, because I convert the images before making the comparison. The network is trained on BGR images.
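For reference, the channel conversion mentioned above might look like the following sketch (again with a placeholder file name):

```python
import cv2
import numpy as np
from PIL import Image

# Pillow yields an RGB array; reorder it to BGR before comparing it with
# an OpenCV-decoded image or feeding it to the BGR-trained network.
pil_rgb = np.array(Image.open("sample.jpg").convert("RGB"))
pil_bgr = cv2.cvtColor(pil_rgb, cv2.COLOR_RGB2BGR)

# Equivalent pure-NumPy form: reverse the channel axis.
assert np.array_equal(pil_bgr, pil_rgb[:, :, ::-1])
```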

firion
  • Pillow loads images in RGB, OpenCV in BGR. This can be easily converted; see https://stackoverflow.com/questions/14134892/convert-image-from-pil-to-opencv-format – Mick Oct 26 '18 at 13:27
  • I know; I implied that I made the conversion before comparing the images. I'll make that explicit in the question – firion Oct 26 '18 at 13:40

0 Answers