I've set up a camera in a squash club and want a model that tells me whether the squash court is occupied or empty. I trained a classifier on a few hundred images of occupied and empty courts, and the results are good.
Now the catch: sometimes the club closes early and the lights get turned off, so I end up with almost completely black images. I tried adding a few of these dark images to my "empty" training set and re-trained, but the new model does not predict these dark images as empty; it classifies them as occupied.
Next I tried creating a new class called "court_closed". I put five of the dark images in it and re-trained. Now the model classifies the dark images as "empty", which is at least an improvement over "occupied", but why is it not predicting them as "court_closed"? Do I really need to add hundreds of nearly identical dark/black images to that class?
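As a possible workaround, I'm also considering short-circuiting to "court_closed" with a simple brightness check before the classifier even runs. A minimal sketch with NumPy (the threshold value is a guess I'd tune on real frames, and the synthetic frames just stand in for real camera captures):

```python
import numpy as np

DARK_THRESHOLD = 10  # mean 8-bit intensity; hypothetical value, tune on real frames


def is_court_closed(image: np.ndarray) -> bool:
    """Return True if the frame is almost black (lights off)."""
    return float(image.mean()) < DARK_THRESHOLD


# Synthetic frames standing in for real camera captures:
dark_frame = np.full((480, 640, 3), 3, dtype=np.uint8)   # lights off
lit_frame = np.full((480, 640, 3), 120, dtype=np.uint8)  # lights on

print(is_court_closed(dark_frame))  # True
print(is_court_closed(lit_frame))   # False
```

But I'd still like to understand why the classifier itself isn't learning the "court_closed" class.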
Here's an example image: