
I need it to detect eyes (separately, whether open or closed), crop them, and save them as images. It works, but not in every photo.

I tried everything I could think of: different values for scaleFactor and minNeighbors, and also adding minimum and maximum sizes for the detected eyes (it did not make much difference).

I still get issues. It sometimes detects more than 2 eyes, sometimes only 1. Sometimes it even mistakes nostrils for eyes :D . The errors are especially frequent when the eyes are closed.

What can I do to improve accuracy? This is very important for the rest of my program.

  face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
  eyes_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')

  faces_detected = face_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)

  (x, y, w, h) = faces_detected[0]
  cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)

  eyes = eyes_cascade.detectMultiScale(img[y:y + h, x:x + w], scaleFactor=1.1, minNeighbors=5)
  count = 1
  for (ex, ey, ew, eh) in eyes:
      cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (255, 255, 255), 1)
      crop_img = img[y + ey:y + ey + eh, x + ex:x + ex + ew]
      s1 = 'Images/{}.jpg'.format(count)
      count = count + 1
      cv2.imwrite(s1, crop_img)
RandUs
  • Please provide your input and output results so that others can test your code with those images. – fmw42 Nov 09 '19 at 22:42
  • What do you mean? My input is a photo taken through the webcam - it is different every time. The output is this same photo but with a detected face and eyes (opencv draws a rectangle around them). Then I have code that crops the image and saves the eyes as separate photos. The issue is that opencv sometimes detects other facial features as eyes, or does not detect an eye at all. – RandUs Nov 10 '19 at 17:32
  • I am saying that it would help if you showed us an example input image so people can test your code and try to suggest improvements. – fmw42 Nov 10 '19 at 19:33
  • Ok, here is an article that shows what I'm doing: https://medium.com/yottabytes/a-quick-guide-on-preprocessing-facial-images-for-neural-networks-using-opencv-in-python-47ee3438abd4 Even in this article, they show some issues with OpenCV - scroll down to the picture of Gollum that has like 4 eyes detected instead of 2. I am trying to find a way to achieve better accuracy, that is all. – RandUs Nov 10 '19 at 20:13

1 Answer


For face detection, my go-to would be dlib (Python API). It is more involved and slower, but it produces much higher-quality results.

Step 1 is converting from OpenCV to dlib:

img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

Next, you can use the dlib face detector to detect the faces (the second argument tells dlib to upsample the image once before detecting, which helps find smaller faces):

detector = dlib.get_frontal_face_detector()
detections = detector(img, 1)

Then find facial landmarks using a pre-trained 68 point predictor:

sp = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
faces = dlib.full_object_detections()
for det in detections:
    faces.append(sp(img, det))

Note: from here you could also extract aligned face chips with dlib.get_face_chip(img, faces[0])

Now you can get bounding boxes and the locations of the eyes:

bb = faces[0].rect

right_eye = [faces[0].part(i) for i in range(36, 42)]
left_eye = [faces[0].part(i) for i in range(42, 48)]

Here are all the mappings (end-exclusive, as used with Python's range) according to pyimagesearch:

mouth: 48 - 68
right_eyebrow: 17 - 22
left_eyebrow: 22 - 27
right_eye: 36 - 42
left_eye: 42 - 48
nose: 27 - 35
jaw: 0 - 17
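Since each region is just a run of landmark indices, turning its points into a crop rectangle is plain geometry. Here is a small helper of my own (not part of dlib; the pad value is an arbitrary choice) that computes a padded bounding box from a list of (x, y) points:

```python
def points_to_bbox(points, pad=5):
    """Return (x1, y1, x2, y2) around (x, y) points, grown by pad pixels."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)

# e.g. the six landmark points of one eye
eye = [(10, 20), (14, 18), (18, 18), (22, 20), (18, 22), (14, 22)]
print(points_to_bbox(eye))  # (5, 13, 27, 27)
```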

Here are the results and the code I put together: [Example 1] [Example 2]

import dlib
import cv2

# Load image
img = cv2.imread("monalisa.jpg")

# Convert to dlib
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# dlib face detection
detector = dlib.get_frontal_face_detector()
detections = detector(img, 1)

# Find landmarks
sp = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
faces = dlib.full_object_detections()
for det in detections:
    faces.append(sp(img, det))

# Bounding box and eyes
bb = [i.rect for i in faces]
bb = [((i.left(), i.top()),
       (i.right(), i.bottom())) for i in bb]                            # Convert out of dlib format

right_eyes = [[face.part(i) for i in range(36, 42)] for face in faces]
right_eyes = [[(i.x, i.y) for i in eye] for eye in right_eyes]          # Convert out of dlib format

left_eyes = [[face.part(i) for i in range(42, 48)] for face in faces]
left_eyes = [[(i.x, i.y) for i in eye] for eye in left_eyes]            # Convert out of dlib format

# Display
imgd = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)             # Convert back to OpenCV
for i in bb:
    cv2.rectangle(imgd, i[0], i[1], (255, 0, 0), 5)     # Bounding box

for eye in right_eyes:
    cv2.rectangle(imgd, (max(eye, key=lambda x: x[0])[0], max(eye, key=lambda x: x[1])[1]),
                        (min(eye, key=lambda x: x[0])[0], min(eye, key=lambda x: x[1])[1]),
                        (0, 0, 255), 5)
    for point in eye:
        cv2.circle(imgd, (point[0], point[1]), 2, (0, 255, 0), -1)

for eye in left_eyes:
    cv2.rectangle(imgd, (max(eye, key=lambda x: x[0])[0], max(eye, key=lambda x: x[1])[1]),
                        (min(eye, key=lambda x: x[0])[0], min(eye, key=lambda x: x[1])[1]),
                        (0, 255, 0), 5)
    for point in eye:
        cv2.circle(imgd, (point[0], point[1]), 2, (0, 0, 255), -1)

cv2.imwrite("output.jpg", imgd)

cv2.imshow("output", imgd)
cv2.waitKey(0)
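The code above only draws the eyes; to save each eye as its own image (the original goal), you can slice the image array with the eye's padded bounding box. A minimal sketch (the helper, pad value, and filenames are my own additions, clamped to the image bounds so landmarks near an edge don't produce an invalid slice):

```python
import numpy as np

def crop_eye(img, eye_points, pad=5):
    """Crop a padded bounding box around eye landmark points from an image array."""
    xs = [p[0] for p in eye_points]
    ys = [p[1] for p in eye_points]
    y1, y2 = max(min(ys) - pad, 0), min(max(ys) + pad, img.shape[0])
    x1, x2 = max(min(xs) - pad, 0), min(max(xs) + pad, img.shape[1])
    return img[y1:y2, x1:x2]

# Usage with the variables above:
# for n, eye in enumerate(right_eyes + left_eyes):
#     cv2.imwrite("Images/{}.jpg".format(n + 1), crop_eye(imgd, eye))
```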
Alex