
I am working on image processing with an SJ4000 camera connected via USB to a Raspberry Pi (running Raspbian Jessie), using OpenCV in Python. I have achieved quite a bit with my laptop's webcam, but now I need to port the code to the SJ4000/Raspberry Pi setup, and I am stuck at this hurdle.

The code I've used is identical to the answer to this question: rotated face detection.
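For context, the loop in question is roughly of the following shape. This is a generic reconstruction (it omits the rotation handling from that answer), and the cascade file path and camera index are assumptions:

```python
# Rough sketch of a capture-and-detect loop; cascade path and camera index are assumptions.
import cv2

cap = cv2.VideoCapture(0)  # assumed camera index for the SJ4000 over USB
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('feed', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```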

With my laptop's webcam I get a reasonably good framerate, and I also get a good framerate when the SJ4000 is connected to the laptop via USB. However, when I execute the same code on the Raspberry Pi, the image is simply frozen, and I have to force-quit the video viewer window that appears because it never updates.

EDIT 1: After closing and reopening the Spyder IDE a few times and executing the same code, I can see a feed, but the framerate is very low (2-3 seconds per frame) and it freezes after a while.

EDIT 2: Further testing shows that when I include the face detection code, the feed takes a long time to be displayed: there is a ten-second delay. When I forward the feed live without any processing, it's very responsive.

How should I get around this? Is getting a more powerful processor the only way?

Thanks for any help!

  • First try the video feed without processing... btw, what resolution, color depth/encoding and bitrate are you forcing it to use? I do not use the Raspberry Pi, but my bet is that the USB bandwidth is limited by the CPU (no DMA). If the feed works well, try to estimate how much computation power is left... you cannot exceed it, otherwise it creates a bottleneck causing major slowdowns and freezes. So if you get some fps, then `T=1/fps` is your time slot. Measure how much CPU time the grab actually takes... so if it's 10%, then you have less than 90% of `T` left and must fit inside it... – Spektre Dec 04 '16 at 09:32
  • If you chose a bad resolution/encoding, it can take even 95% of `T` just to grab the image from the camera... you have to fit inside the bandwidth too. – Spektre Dec 04 '16 at 09:33
  • @Spektre - I have tried it without processing, and the feed is very smooth. Is there a way to limit the number of frames I process per second? – Jack Paul Dec 04 '16 at 13:37
  • Optimize what you can... crop out unnecessary parts of the feed before processing to ease the detection, lower the resolution if you can, etc. First you need to find out what exactly the bottleneck is: try commenting out parts of the detection code step by step and measure the times they actually take to compute (fps is not enough, as it is tied to many other things besides raw CPU power). Also, Python does not sound like a performance boost. – Spektre Dec 04 '16 at 13:41
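A minimal sketch of the kind of measurement and frame-skipping the comments suggest, i.e. timing the grab and detection stages separately and only running the detector on every Nth frame. The skip interval, resize factor, and cascade path are assumptions to be tuned:

```python
# Time each stage and run the detector only every Nth frame (values are assumptions).
import time
import cv2

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

DETECT_EVERY = 5   # run the detector on every 5th frame only
frame_count = 0
faces = []

while True:
    t0 = time.time()
    ret, frame = cap.read()
    if not ret:
        break
    t_grab = time.time() - t0

    t1 = time.time()
    if frame_count % DETECT_EVERY == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (0, 0), fx=0.5, fy=0.5)  # lower resolution to speed up detection
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    t_detect = time.time() - t1

    print('grab: %.3fs  detect: %.3fs' % (t_grab, t_detect))

    for (x, y, w, h) in faces:
        # scale coordinates back up to the full-resolution frame
        cv2.rectangle(frame, (2 * x, 2 * y), (2 * (x + w), 2 * (y + h)), (0, 255, 0), 2)

    cv2.imshow('feed', frame)
    frame_count += 1
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

The printed grab/detect times show which stage dominates the `T=1/fps` budget Spektre describes, and the skip interval bounds how often the expensive stage runs.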

1 Answer


As others have said, face detection with HOG/Haar descriptors is very computationally expensive. You won't be able to do real-time face detection on the Raspberry Pi. On my Raspberry Pi 3, I can do human body detection on a 300x300 image at around 5 fps.

What I recommend is: Do motion detection. When motion is detected, start face detection.
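A minimal sketch of that idea, gating the Haar detector behind cheap frame differencing. The motion threshold and cascade path are assumptions, so tune them for your scene:

```python
# Only pay for the expensive face detector when a cheap motion check fires.
import cv2

cap = cv2.VideoCapture(0)  # assumed camera index
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')  # assumed path

prev_blur = None
MOTION_THRESHOLD = 5000  # changed-pixel count that counts as "motion"; tune for your scene

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (21, 21), 0)  # smoothed copy used only for the motion check

    motion = False
    if prev_blur is not None:
        diff = cv2.absdiff(prev_blur, blur)
        changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
        motion = changed > MOTION_THRESHOLD
    prev_blur = blur

    if motion:
        # run the detector on the unblurred frame only when the scene actually changed
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('feed', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

The differencing and thresholding are cheap compared to `detectMultiScale`, so frames with no motion cost almost nothing.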

Further optimization can be done by running face detection in its own thread, and having motion detection feed a FIFO of frames to be analyzed by the face detector whenever motion is detected in a frame. That way, your face detector can operate asynchronously and not hold up the main thread, which captures the video frames and does the motion detection.
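One way this layout could look, assuming Python 3 (a bounded `queue.Queue` as the FIFO and a daemon thread for the detector; on Python 2 the module is `Queue`). The thresholds and cascade path are assumptions:

```python
# Main loop: grab frames + cheap motion check. Worker thread: expensive face detection.
import threading
import queue
import cv2

frame_queue = queue.Queue(maxsize=2)   # small FIFO so the detector always sees a recent frame
latest_faces = []
lock = threading.Lock()

def detector_worker(cascade_path='haarcascade_frontalface_default.xml'):
    global latest_faces
    cascade = cv2.CascadeClassifier(cascade_path)
    while True:
        gray = frame_queue.get()          # blocks until a frame is available
        if gray is None:                  # sentinel to stop the worker
            break
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        with lock:
            latest_faces = faces

threading.Thread(target=detector_worker, daemon=True).start()

cap = cv2.VideoCapture(0)
prev_gray = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # cheap motion check; only feed the detector when the scene changes
    if prev_gray is not None:
        diff = cv2.absdiff(prev_gray, gray)
        if cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]) > 5000:
            if not frame_queue.full():
                frame_queue.put(gray)
    prev_gray = gray

    # draw whatever the detector last found, without waiting for it
    with lock:
        faces = list(latest_faces)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('feed', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

frame_queue.put(None)  # tell the worker to stop
cap.release()
cv2.destroyAllWindows()
```

Keeping the queue small and dropping frames when it is full means the detector lags at most a couple of frames behind instead of falling further and further behind the live feed.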