I am using Python 3.7 and running a simple face recognition program with OpenCV and Haar cascades.
I noticed that recognition is much better when I am farther from the camera (about one arm's length away) than when my face fills most of the frame (about 6 inches from the camera). Why is it better at recognizing smaller faces?
I read this article about non-face regions and the parameters you can pass to the faceCascade.detectMultiScale function:
In an image, most of the image is a non-face region. Giving equal importance to each region of the image makes no sense, since we should mainly focus on the regions that are most likely to contain a face. Viola and Jones achieved an increased detection rate while reducing computation time using Cascading Classifiers.
The key idea is to reject sub-windows that do not contain faces while identifying regions that do.
Parameters:
scaleFactor: Parameter specifying how much the image size is reduced at each image scale.
minNeighbors: Parameter specifying how many neighbors each candidate rectangle should have to retain it.
minSize: Minimum possible object size. Objects smaller than this are ignored.
maxSize: Maximum possible object size. Objects larger than this are ignored.
I referred to this SO post for the recommended parameter values, which I am already using. Is there anything else I can try to get around this? I want to be able to recognize a face even when I am close to the camera.
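For reference, this is roughly what my detection loop looks like. The cascade path and the exact parameter values below are illustrative placeholders; my real script uses the values recommended in that post:

```python
import cv2

# Load the frontal-face cascade that ships with OpenCV.
# This path works for the opencv-python wheel I installed;
# adjust it if your install keeps the XML files elsewhere.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Haar cascades operate on grayscale images.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,   # shrink the image by 10% at each scale
        minNeighbors=5,    # neighbors a candidate rectangle needs to be kept
        minSize=(30, 30),  # ignore detections smaller than 30x30 pixels
    )

    # Draw a box around each detected face.
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```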