EDIT: I couldn't find the kind of solution I describe below, so I ended up doing a simple write and read through a .txt file, since both apps run on the same physical server. I'm not closing this because I believe it's still something people might need a real solution for. Thanks.
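In case it helps anyone, the .txt version is roughly this (just a sketch of what I ended up with; status.txt is only an example name, both apps just have to agree on the path). Writing to a temporary file first and swapping it in with os.replace means the reader never sees a half-written file, since the replace is atomic when both paths are on the same filesystem:

import os
import tempfile

STATUS_FILE = "status.txt"  # example path; both apps just need to agree on it

def write_status(code):
    # app1 side: write to a temp file, then atomically swap it into place
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(STATUS_FILE)))
    with os.fdopen(fd, "w") as f:
        f.write(str(code))
    os.replace(tmp_path, STATUS_FILE)

def read_status(default=0):
    # app2 side: read the latest value, fall back to 0 if the file isn't there yet
    try:
        with open(STATUS_FILE) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return default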
First of all, I'm sorry: I'm not sure what this is called, so it's been hard to search for. A synopsis of my problem:
I'm using ageitgey's face_recognition Python library to recognize faces in a video. Refer to this code. As you can see, it uses OpenCV to capture every frame inside a while True: loop, with ret, frame = video_capture.read() grabbing each frame.
On every iteration I fill a variable (let's name it RETURN_CODE) with 0 if there are no faces in the frame, 1 if a face is present but not recognized, and 2 if the face is recognized.
What I need is to return this code on every iteration without breaking the loop, so that another application can keep checking this status and do other things based on its value.
I'm still figuring out how to expose this through Flask, but that's not part of this question.
Currently I'm printing the output, and I've read that I might be able to grab it from another script via stdout, but it seems wrong to flood the console. Writing to a file might crash if app1 tries to write while app2 has it open.
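About the stdout idea: as far as I understand, the other script could launch this one as a subprocess and read the printed codes line by line, something like the sketch below (recognize.py is just what I'm calling this script here; the -u flag is so the child's prints aren't held back by buffering). I'm not sure this is the right way, which is partly why I'm asking:

import subprocess

# launch the recognition script and consume its printed RETURN_CODE values as they appear
proc = subprocess.Popen(
    ["python", "-u", "recognize.py"],  # assumed name for the script below
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:
    code = int(line.strip())
    # react to 0 / 1 / 2 here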
Here is my sample code, a modified version of the script from the link above. Note: for it not to crash, you need to put two images in the same directory as the script, "obama.jpg" and "biden.jpg", from this repo: https://github.com/ageitgey/face_recognition/tree/master/examples
import face_recognition
from imutils.video import VideoStream
import imutils
import cv2
import numpy as np
import time
# our variable
RETURN_CODE = 0
# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]
# Create arrays of known face encodings and their names
known_face_encodings = [
    obama_face_encoding,
    biden_face_encoding
]
known_face_names = [
    "Barack Obama",
    "Joe Biden"
]
# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True
# start capturing frame by frame
## changed to imutils as it's much better and opencv crashes a lot
video_capture = VideoStream(src=0).start()
TEST_START = time.time()
while True:
    # Grab a single frame of video
    frame = video_capture.read()
    # Resize frame of video to a smaller size for faster face recognition processing
    small_frame = imutils.resize(frame, width=450)
    # Ratio used to scale the detections back up to the full-size frame on screen
    r = frame.shape[1] / float(small_frame.shape[1])
    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]
    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
        face_names = []
        # Default back to 0 (no faces in the frame); the loop below sets 1 or 2 when a face is found
        RETURN_CODE = 0
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"
            # # If a match was found in known_face_encodings, just use the first one.
            # if True in matches:
            #     first_match_index = matches.index(True)
            #     name = known_face_names[first_match_index]
            # Or instead, use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]
            face_names.append(name)
            if name == "Unknown":
                RETURN_CODE = 1
            else:
                RETURN_CODE = 2
    process_this_frame = not process_this_frame
    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled down
        top = int(top * r)
        right = int(right * r)
        bottom = int(bottom * r)
        left = int(left * r)
        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
    # Display the resulting image
    cv2.imshow('Video', frame)
    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    # Currently it's just printing the code; later this will go into flask
    print(RETURN_CODE)
    # yield RETURN_CODE
    # Stop the test run after 10 seconds
    if time.time() - TEST_START >= 10.0:
        break
# Release handle to the webcam
video_capture.stream.release()
video_capture.stop()
cv2.destroyAllWindows()
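For completeness, this is roughly what I had in mind with the commented-out yield line: turning the loop into a generator so that another piece of code (in the same process) can consume RETURN_CODE on every iteration without the loop ever breaking. A stripped-down sketch, not the actual recognition code:

import time

def face_status():
    # simplified stand-in for the while True loop above: yields RETURN_CODE every iteration
    RETURN_CODE = 0
    start = time.time()
    while True:
        # ... grab a frame, run the face recognition, set RETURN_CODE to 0, 1 or 2 ...
        yield RETURN_CODE
        if time.time() - start >= 10.0:
            break

# consumer side, same process:
for code in face_status():
    if code == 2:
        pass  # recognized face, do something with it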