I used NVIDIA DIGITS to train an image classifier. The training images are in JPG format. DIGITS uses Pillow to load images.
I noticed that, at inference time, there are noticeable differences in accuracy depending on whether the JPG images are loaded with OpenCV or with Pillow. I think this is caused by the different JPEG decoders. Since my classifier is going to be used in a live app where images are captured from a webcam with OpenCV, I would like to know whether this difference could be a problem and, if so, how I can overcome it.
EDIT: I know that Pillow uses the RGB format and OpenCV uses the BGR format, but the difference is not due to this, because I convert the images to the same channel order before making the comparison. The network is trained on BGR images.
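For reference, this is roughly how I compare the two decodings (a minimal sketch; `image.jpg` is a placeholder path):

```python
import cv2
import numpy as np
from PIL import Image

# Decode the same JPG file with both libraries (placeholder path).
pil_img = np.array(Image.open("image.jpg"))  # RGB, shape HxWx3
cv_img = cv2.imread("image.jpg")             # BGR, shape HxWx3

# Convert the Pillow result to BGR so the channel order matches
# before comparing (the network is trained on BGR images).
pil_bgr = cv2.cvtColor(pil_img, cv2.COLOR_RGB2BGR)

# Any remaining per-pixel differences come from the JPEG decoders,
# not from the channel order.
diff = cv2.absdiff(pil_bgr, cv_img)
print("max abs difference:", diff.max())
print("mean abs difference:", diff.mean())
```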