I have a small problem I haven't been able to solve for almost a week now. I have an Azure Kinect DK camera that gives me a live video feed of a space.
In that space there will be an object, and I need to monitor an area around the object to detect any people who enter it. I have it set up so that I subtract everything in the video feed except the area I want, and feed the result to the Azure Kinect Body Tracker.
The problem is that the tracker still processes the whole feed, so it detects everyone the Kinect sees, not just the people inside the area like I want.
Any help or tip is appreciated; thanks in advance.
Edit: adding a minimal reproducible example as requested:
area_pts_2 = np.array([[340, 200], [1180, 200], [1070, 920], [650, 920]])

# Second detection area: draw the polygon as a filled mask
imAux_2 = np.zeros(shape=frame.shape[:2], dtype=np.uint8)
imAux_2 = cv2.drawContours(imAux_2, [area_pts_2], -1, (255), -1)

# Detection area: keep only the pixels inside the polygon
image_area_2 = cv2.bitwise_and(gray, gray, mask=imAux_2)

cnts_2, _ = cv2.findContours(image_area_2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in cnts_2:
    if cv2.contourArea(cnt) > 1000:
        # Draw a rectangle around the detected contour
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

body_frame = bodyTracker.update()
numberOfBodies = body_frame.get_num_bodies()
numBodies = str(numberOfBodies)
combined_image = body_frame.draw_bodies(image_area_2, pykinect.K4A_CALIBRATION_TYPE_COLOR)
So this is the core of what I'm working on. I haven't included the parts of the code where I try to crop the image before feeding it to the bodyTracker, because they don't work: they crash the program.