Python OpenCV face detection code sometimes raises `'tuple' object has no attribute 'shape'`

I am trying to use the Haar cascade in OpenCV 4.0 to detect faces for emotion, gender & age estimation. Sometimes the detectMultiScale() function returns an empty tuple, which raises an error in the later parts of the recognition.

I tried creating a while loop that runs until a face is detected, but it seems that once no face is detected in a captured frame, none is ever detected again in that same frame; I just keep getting empty tuples back. The weird thing is that sometimes the program works flawlessly. The detection model is being loaded correctly, since cv2.CascadeClassifier.empty(face_cascade) returns False.

There seems to be no problem with the captured frame, since I can display it properly.

After searching, I found that detectMultiScale() does, in fact, return an empty tuple when no faces are detected.
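
From what I can tell, that empty result is a plain Python tuple rather than a NumPy array, which is why anything downstream that touches .shape on it blows up. A minimal sketch of the kind of check I assume is needed (the cascade path and the parameter values here are just placeholders):

import cv2

# placeholder path - the same haarcascade_frontalface_alt.xml as loaded below
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')

def safe_detect(gray_image):
    # detectMultiScale() returns () when nothing is found,
    # otherwise an ndarray with one (x, y, w, h) row per face
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.3,
                                          minNeighbors=5, minSize=(64, 64))
    return None if len(faces) == 0 else faces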

face_cascade = cv2.CascadeClassifier(
    'C:\\Users\\kj\\Desktop\\jeffery 1\\trained_models\\detection_models\\haarcascade_frontalface_alt.xml')
retval = cv2.CascadeClassifier.empty(face_cascade)
print(retval)

which prints False.

def video_cap(out_queue):
    video_capture = cv2.VideoCapture(0, cv2.CAP_DSHOW)
    # video_capture.set(3, 768)
    # video_capture.set(4, 1024)
    while True:
        ret, bgr_image = video_capture.read()
        cv2.imshow('frame', bgr_image)
        cv2.waitKey(1000)
        cv2.destroyAllWindows()
        if video_capture.isOpened() == False:
            video_capture.open(0)

        if ret:
            gray_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
            faces = detect_faces(face_detection, gray_image)
            ret_list = [gray_image, rgb_image, faces]
            print("DEBUG: VIDEO_CAPTURE MODULE WORKING")
            out_queue.put(ret_list)
            return

The video_cap function is run in a separate thread.
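
A minimal sketch of how it is launched, assuming a plain threading.Thread with a standard queue.Queue (the exact names are only illustrative):

import threading
import queue

frame_queue = queue.Queue()   # illustrative name; passed to video_cap as out_queue
cap_thread = threading.Thread(target=video_cap, args=(frame_queue,))
cap_thread.start()

# blocks until video_cap puts [gray_image, rgb_image, faces] on the queue
gray_image, rgb_image, faces = frame_queue.get()
cap_thread.join()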

def detect_faces(detection_model, gray_image_array):
    faces1 = detection_model.detectMultiScale(gray_image_array, scaleFactor=2,
                                              minNeighbors=10, minSize=(64, 64))
    while len(faces1) == 0:
        faces1 = detection_model.detectMultiScale(gray_image_array, scaleFactor=2,
                                                  minNeighbors=10, minSize=(64, 64))
        print(faces1)
        if len(faces1) != 0:
            break
    return faces1

I get the output () () () () ... and it goes on until I terminate the program.

How do I fix this problem?

harikj5

2 Answers

This is a snippet of the code I used. I removed the arguments from the detectMultiScale() function and it ran fine.

Also, make sure you have the correct path to the XML files.

classifier = cv2.CascadeClassifier("../../../l-admin/anaconda3/lib/python3.6/site-packages/cv2/data/haarcascade_frontalface_default.xml")
img = cv2.imread('../Tolulope/Adetula Tolulope (2).jpg')
face = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = classifier.detectMultiScale(face)
print(type(faces), faces)
for (x, y, w, h) in faces:
  img = cv2.imwrite("facesa.png", cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3))

On a secondary note, the reason mine worked might be that my camera could locate my face thanks to the lighting. So I suggest you try it out with a picture first before using the video.

TOLULOPE ADETULA
  • Try running a simple face detection code and print the faces variable in real time; you will see that it returns an empty tuple sometimes. Anyway, I switched to the OpenCV DNN detector; even though it is slower, it is more accurate. – harikj5 Apr 03 '19 at 18:13
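
A minimal sketch of the OpenCV DNN route mentioned in the comment above, assuming the commonly used res10 SSD face detector from the OpenCV samples (the two model file paths below are placeholders for wherever those files live):

import cv2
import numpy as np

# placeholder paths - prototxt and weights of the res10 SSD face detector
net = cv2.dnn.readNetFromCaffe('deploy.prototxt',
                               'res10_300x300_ssd_iter_140000.caffemodel')

def detect_faces_dnn(bgr_image, conf_threshold=0.5):
    h, w = bgr_image.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(bgr_image, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()        # shape (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] *
                              np.array([w, h, w, h])).astype(int)
            boxes.append((x1, y1, x2 - x1, y2 - y1))   # (x, y, w, h) like the cascade
    return boxes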

I had a similar issue when I used the jpg format; in my case the main problem was the format of the image, because when I used png it gave back the tuple with the correct values.

classifier = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")


# reading the image
img = cv2.imread('i.png')

# showing the image
#cv2.imshow('shaswat face detection ',img)


# making image to gray scale as black and white
grayscaled_img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# cv2.imshow('shaswat face detection ',grayscaled_img)

# detecting the image
# return top left and bottom right points
faces = classifier.detectMultiScale(grayscaled_img)

print(faces)
#cv2.rectangle(img , face_coordinates[0] , face_coordinates[1] , (255,0,0) , 10)

The output shows [[ 87 114 361 361]].
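
Each row is (x, y, w, h), so drawing the detected box from that output would look roughly like this (the window title is just a placeholder):

# draw each detection; the faces rows are (x, y, w, h)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 10)

cv2.imshow('face detection', img)
cv2.waitKey(0)
cv2.destroyAllWindows()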