
Good day everyone,

I am having trouble picking a strategy for my problem using Python. I have two cameras (there might be more in the future) connected to my local network. I want to get both streams, do some image processing on them, and in the future stream them to a local HTTP server.

My first question is: reading frames from the camera, is that mostly I/O intensive or CPU intensive?

Secondly, I am wondering, if/when I pick the multiprocessing route, how I should implement it. At first I had this "three layer structure" idea where getting the frames is done in the "DAL" layer by one process, the processing by another, and maybe in the future a third process handles all the HTTP stuff (the HTTP server is for another time).

But after doing some research I'm not really sure this is the right way to go. Maybe one process should handle all of it: gather the I/O from the camera, do the processing, and stream it to the HTTP server.

Is there somebody with more experience than me who can give me some insights?

I've had some experience with Python and OpenCV, but without any of the multithreading/multiprocessing libraries, because it was more of a proof of concept for a thesis.

Thanks for reading this brainstorm

Specs of the camera: 1080x720 resolution, 160 fps frame rate, using the GigE Vision protocol.

Bubbel

1 Answer


My first question is: reading frames from the camera, is that mostly I/O intensive or CPU intensive?

While network I/O is somewhat CPU-intensive, that processing happens in your operating system's kernel. To your application it looks like I/O.

Secondly, I am wondering, if/when I pick the multiprocessing route, how I should implement it.

There is a lot of fine-tuning that you can do, but I would argue that it is important to stick to the KISS principle and only tune your application as required.

From what you describe, I expect your application to spend most of its time in one of three stages:

  1. Waiting for the next frame or copying it into a numpy array
  2. Calling multiple C functions that implement your OpenCV processing
  3. Waiting for the network output to send the frame

If you don't want to lose frames, you should (almost) always have a dedicated thread doing step 1. Otherwise GigE Vision will happily drop frames that it cannot buffer.
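A minimal sketch of such a capture thread, assuming for illustration that the camera is reachable through cv2.VideoCapture (with GigE Vision you would more likely go through a vendor SDK or a GenICam library, so treat the source argument as a placeholder):

    import queue
    import threading

    import cv2  # assumption: the camera is reachable through an OpenCV backend


    def capture_loop(source, frames: queue.Queue, stop: threading.Event):
        """Stage 1: do nothing but grab frames and hand them off."""
        cap = cv2.VideoCapture(source)
        try:
            while not stop.is_set():
                ok, frame = cap.read()
                if not ok:
                    break
                try:
                    frames.put_nowait(frame)
                except queue.Full:
                    # If frames must be dropped, drop them here under your own
                    # control rather than letting the driver's buffer overflow.
                    pass
        finally:
            cap.release()

A bounded queue plus an explicit drop policy keeps the memory footprint predictable; the wiring of the queues and threads is shown further down.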

For step 2 you should first check whether your OpenCV processing is parallelized internally or may even use GPU processing. In this case, there is little or no gain in adding multiprocessing or multithreading on top of it.
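You can check this directly from Python; a small probe (the cv2.cuda module may be missing entirely on CPU-only builds, hence the guard):

    import cv2

    # How many worker threads OpenCV itself uses for its internally parallelized functions.
    print("OpenCV worker threads:", cv2.getNumThreads())

    # 0 means this build has no CUDA support.
    try:
        print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
    except AttributeError:
        print("No cv2.cuda module in this build")

    # If you do put your own thread/process pool on top, you may want to stop
    # OpenCV from oversubscribing the CPU:
    # cv2.setNumThreads(1)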

Step 3 is similar to step 1: just have a thread always ready to send images in order to keep your network layer busy. Since this step involves TCP, using multiple HTTP connections in parallel might be beneficial. Of course, this depends on your receiver.
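A minimal sketch of such a sender thread, assuming JPEG-encoded frames POSTed with the requests library; the endpoint and encoding are made up and depend entirely on your receiver:

    import queue
    import threading

    import cv2
    import requests  # assumption: a plain HTTP client is enough for your receiver


    def sender_loop(results: queue.Queue, url: str, stop: threading.Event):
        """Stage 3: keep the network busy by sending frames as soon as they are ready."""
        session = requests.Session()  # reuses one TCP connection
        while not stop.is_set():
            try:
                frame = results.get(timeout=0.5)
            except queue.Empty:
                continue
            ok, jpeg = cv2.imencode(".jpg", frame)
            if ok:
                session.post(url, data=jpeg.tobytes(),
                             headers={"Content-Type": "image/jpeg"})

Running several of these threads, each with its own Session, is one way to get the multiple parallel HTTP connections mentioned above.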

Your biggest concern as far as parallelization goes is the global interpreter lock (GIL): https://wiki.python.org/moin/GlobalInterpreterLock. However, note this passage from that page:

In hindsight, the GIL is not ideal, since it prevents multithreaded CPython programs from taking full advantage of multiprocessor systems in certain situations. Luckily, many potentially blocking or long-running operations, such as I/O, image processing, and NumPy number crunching, happen outside the GIL. Therefore it is only in multithreaded programs that spend a lot of time inside the GIL, interpreting CPython bytecode, that the GIL becomes a bottleneck.

In other words, if all you do is call a few long-running C functions, be it your network I/O or your OpenCV processing, you can use multithreading. Otherwise you may need multiprocessing.

Note that multiprocessing adds cost to the processing as a whole because you need to copy your frames from one process to another.

In conclusion, here is how I would set it up:

  1. One thread per camera that does nothing but acquiring images
  2. One thread or a thread pool doing the image processing (again, if OpenCV parallelizes internally, don't put a thread pool on top of it)
  3. One thread sending frames to your HTTP server

All stages should be connected with queues (queue.Queue, or queue.SimpleQueue where you don't need a size limit). The queues should be given a maxsize so that memory is not exhausted if processing stalls.
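Put together, a minimal sketch of that setup for one camera could look like this, reusing the capture_loop and sender_loop sketched above; the GaussianBlur, the camera source and the URL are placeholders for your real processing and endpoints:

    import queue
    import threading

    import cv2

    frames = queue.Queue(maxsize=8)    # stage 1 -> stage 2
    results = queue.Queue(maxsize=8)   # stage 2 -> stage 3
    stop = threading.Event()


    def process_loop(stop: threading.Event):
        """Stage 2: pull raw frames, run the OpenCV pipeline, push results."""
        while not stop.is_set():
            try:
                frame = frames.get(timeout=0.5)
            except queue.Empty:
                continue
            processed = cv2.GaussianBlur(frame, (5, 5), 0)  # stand-in for the real processing
            try:
                results.put(processed, timeout=0.5)
            except queue.Full:
                pass  # pick whatever drop/backpressure policy suits you


    threads = [
        threading.Thread(target=capture_loop, args=("camera-1", frames, stop), daemon=True),
        threading.Thread(target=process_loop, args=(stop,), daemon=True),
        threading.Thread(target=sender_loop,
                         args=(results, "http://localhost:8080/frames", stop), daemon=True),
    ]
    for t in threads:
        t.start()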

If you find that you actually need multiprocessing, you can replace the queues with multiprocessing.Queue and the threads with dedicated multiprocessing.Process instances. Alternatively, the thread in stage 1 can simply call multiprocessing.Pool.apply_async for each incoming frame and pass the AsyncResult object via a Queue to stage 3.
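A sketch of that last variant, assuming the processing lives in a module-level function (it has to be picklable for the pool); grab_frame and send are hypothetical stand-ins for the acquisition and HTTP code:

    import multiprocessing
    import queue
    import threading

    import cv2


    def process_frame(frame):
        """Runs in a worker process; must live at module level so it can be pickled."""
        return cv2.GaussianBlur(frame, (5, 5), 0)  # stand-in for the real processing


    def run_pipeline(grab_frame, send):
        """grab_frame() and send(frame) are placeholders for stages 1 and 3."""
        pool = multiprocessing.Pool(processes=4)
        pending = queue.Queue(maxsize=8)  # AsyncResult handles, kept in frame order
        stop = threading.Event()

        def submit_loop():
            # Stage 1: acquire a frame and hand it to the pool without waiting for the result.
            while not stop.is_set():
                pending.put(pool.apply_async(process_frame, (grab_frame(),)))

        def drain_loop():
            # Stage 3: wait for each result in order and pass it on to the sender.
            while not stop.is_set():
                send(pending.get().get())  # the inner get() blocks until the worker is done

        threading.Thread(target=submit_loop, daemon=True).start()
        threading.Thread(target=drain_loop, daemon=True).start()
        return stop  # set this event to wind the pipeline down

    # Call run_pipeline() from under an `if __name__ == "__main__":` guard so the
    # pool's worker processes can start cleanly on all platforms.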

Homer512
  • Thanks for the quick and amazing response. I will try to implement your strategy and see where it takes me. Very happy with this response on my first post on Stack Overflow. Do you have any tips on where to find a lot of information about the GigE Vision protocol? – Bubbel Nov 28 '21 at 16:44