
To begin with, I have an Allied Vision camera and I am streaming from it with the Vimba Python SDK. The streaming FPS is around 12-14, whereas the maximum FPS offered by the Manta G-201C is 30. How do I reach the maximum FPS?

First of all, with the help of the Vimba Viewer app, I set the necessary parameters like exposure time, white balance, gain etc. and save them to an XML file. Then, using that XML file, I feed the necessary values to Vimba Python as base settings. The relevant snippet of the code is shown below:

import cv2
import time
from vimba import *
import os
from datetime import datetime

is_running = True
i = 1
count = 0
path = 'C:\\Vimba\\dataset\\'

def do_something(img):
    lst = []
    global count
    count += 1
    filename = 'IMG_' + str(count) + '.jpg'
    cv2.putText(img, str(datetime.now()), (20, 40), cv2.FONT_HERSHEY_PLAIN, 2, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.imwrite(filename, img)
    lst.append(img)
    return len(lst)


with Vimba.get_instance() as vimba:
    with vimba.get_all_cameras()[0] as cam:
        os.chdir(path)
        cam.set_pixel_format(PixelFormat.BayerRG8)
        cam.ExposureTimeAbs.set(30000.000000)
        cam.BalanceWhiteAuto.set('Off')
        cam.Gain.set(0)
        cam.AcquisitionMode.set('Continuous')
        cam.GainAuto.set('Off')
        cam.Height.set(720)
        cam.Width.set(1280)
        while is_running:
            start = time.time()
            frame = cam.get_frame()
            frame = frame.as_numpy_ndarray()
            frame = cv2.cvtColor(frame, cv2.COLOR_BAYER_RG2RGB)
            cv2.imshow('Live feed', frame)
            result = do_something(frame)
            end = time.time()
            seconds = end - start
            fps = int(result/seconds)
            print('FPS:', fps)
            # print(cam.AcquisitionFrameRateAbs.get())
            key = cv2.waitKey(1)
            if key == ord('q'):
                break
        cv2.destroyAllWindows()

As seen in the code above, I read all the camera settings (from the XML file) and then created a function 'do_something(img)' that saves the images to the desired path and returns the number of images stored in a list. That value is used to compute the FPS. This is the FPS output I get after running the whole code:

FPS: 8
FPS: 13
FPS: 13
FPS: 14
FPS: 13
FPS: 13
FPS: 13
FPS: 12
FPS: 13
FPS: 13
FPS: 12
FPS: 14
FPS: 14
FPS: 13
FPS: 14
FPS: 14
FPS: 14
FPS: 14
FPS: 14

Whereas cam.AcquisitionFrameRateAbs.get() reports an FPS of 30.00030000300003. I have tried a couple of things from the internet, but nothing worked for me. I want to get from 14 FPS to 30 and don't know how to do it. Any help is appreciated!

By the way, I am using an ASUS F570Z laptop with an NVIDIA GeForce GTX 1050 (4 GB) graphics card, an AMD Ryzen 5 2500U quad-core processor, 8 GB DDR4 RAM and Windows 10 64-bit.

  • You'll need to profile your code, to see how much time each statement takes. In reality the FPS is even lower, since you're not measuring the whole loop body (and `cv2.waitKey` will definitely waste some time as well, since it **waits**). | I'd definitely run the image acquisition loop in a separate thread. – Dan Mašek Jan 04 '23 at 15:10
  • In VimbaViewer have a look at the "device link throughput limit" parameter, which is by default often only 50% of the device link speed. – Micka Jan 04 '23 at 16:38
  • Also, please tell us about the resolution, the Ethernet controller bandwidth, and the color and pixel format that you use. Just because the sensor can reach 30 fps does not mean that it can transport that data at the highest resolution and highest bytes per pixel. – Micka Jan 04 '23 at 16:43
  • Try to exclude imshow and cvtColor from the loop and measure frameCounter/overallTime which should give some more insights. – Micka Jan 04 '23 at 16:50
  • And first should be to measure the fps in VimbaViewer. At the bottom bar it will show the rendering fps and the capturing fps separately. If 30 fps capturing is achieved, the settings and hardware setup are fine and you can focus on optimizing your own pipeline. – Micka Jan 04 '23 at 18:58
  • I have already checked the FPS in Vimba viewer. The rendering fps was 30 and capturing fps was also 30. @Micka – Hardik_Zalavadiya Jan 04 '23 at 19:16
  • So, should I create another function that display the images and feed it as a thread inside the loop or outside the while loop? @DanMašek – Hardik_Zalavadiya Jan 04 '23 at 19:17
  • If you use the same settings (e.g. save the xml in VimbaViewer and load it in your app), then try capturing in Python without rendering and measure fps over a longer time (counter + overall time). I don't know how well the Python Vimba works at all. There could be another bottleneck. – Micka Jan 04 '23 at 19:20
  • Could you, please, [edit] your question, ditch the XML and just hardcode those 6 parameters (just like you did with height and width)? | I guess you have the 30fps variant of G-201C? – Dan Mašek Jan 04 '23 at 19:23
  • Same settings in the sense that I used the BayerRG8 pixel format, the exposure time was 30000 microseconds, auto gain and auto white balance are off, and the saturation value was 1.90. The resolution of the image was 1280 * 720. @Micka – Hardik_Zalavadiya Jan 04 '23 at 19:26
  • If your processing takes more than the frame time, then you cannot keep up. You will necessarily be dropping frames somewhere. – Tim Roberts Jan 04 '23 at 19:37
  • I have updated the question. Can you please explain me more about how should I go with threading? Should I use it within the while loop or outside the loop? @DanMašek – Hardik_Zalavadiya Jan 04 '23 at 19:38
  • Then how to overcome this situation? @TimRoberts – Hardik_Zalavadiya Jan 04 '23 at 19:42
  • A big part of the loop (up to the `cvtColor`, you can probably afford to do that, but again, measure) would go into a thread. Feed the captured (and optionally converted) frames into a `Queue`. Some other thread (possibly the main one) would then read from that Queue, and do further processing asynchronously to the act of reading from the camera. – Dan Mašek Jan 04 '23 at 19:54
  • First test could be to remove the `do_something(frame)` line. You should see an impact on the fps. – Micka Jan 04 '23 at 19:57
  • If your processing takes 65ms, then you can only handle 15 frames per second. That's a simple and irrefutable fact, regardless of what the camera can serve. You would have to find a different processing algorithm. – Tim Roberts Jan 04 '23 at 21:02
  • OK, I finally found my old Manta G-033C and ran it under line profiler, and the biggest bottleneck is the `cam.get_frame()` (even with lower resolution than yours). It seemed to take about 50ms per call -- this synchronous (in terms of Vimba) approach just doesn't cut it for high framerates. You need to switch to the asynchronous/streaming mode, where you define a callback to handle captured frames. See the `asynchronous_grab_opencv.py` and `multithreading_opencv.py` examples that ship with Vimba. | Now my main thread goes at 60 fps, capturing at full 88 fps, so the queue fills up fast. – Dan Mašek Jan 05 '23 at 00:45
  • The problem with asynchronous_grab_opencv.py and multithreading_opencv.py is that they use the image via as_opencv_image(). With this I have a pixel format problem, because my camera (Manta G-201C) supports four pixel formats, i.e. PixelFormat.Mono8, PixelFormat.BayerRG8, PixelFormat.BayerRG12 and PixelFormat.BayerRG12Packed. I want to use the BayerRG8 pixel format, but when I run the asynchronous_grab_opencv.py file, it shows me the error 'Current Format \'{}\' is not in OPENCV_PIXEL_FORMATS'. @DanMašek – Hardik_Zalavadiya Jan 05 '23 at 08:10
  • I created a thread to save the images which I am getting from get_frame(). Even using a thread, I am getting the same FPS (no change). Yes, get_frame() is the bottleneck here. But now the question is how I should move forward, as I am not very familiar with multi-threading. @DanMašek – Hardik_Zalavadiya Jan 05 '23 at 09:56
  • Have a look at this: https://pastebin.com/juZq6UvS | This already takes advantage of multithreading, but that's internal to the Vimba library -- it invokes the callback from a different thread. In the callback I create a numpy array and then do the color conversion (this creates a new object, so no further copies are needed). It then feeds it to a queue. The main thread then pulls frames out from the queue, displays them and does your `do_something` (adds text and saves to JPEG). | Now I have to go do some real work; I'll see if I can write up an answer when I have time. – Dan Mašek Jan 05 '23 at 21:21
  • The code in the link is working perfectly, I am getting an FPS of 33, but it is not saving the image. @DanMašek – Hardik_Zalavadiya Jan 06 '23 at 10:27
  • Is it possible to control/fix FPS at a constant rate? @DanMašek – SmitShah_19 Jan 06 '23 at 12:16
  • @Hardik_Zalavadiya Regarding not saving the image: look at (and adjust as necessary) the `filename` in `do_something`. I didn't want it creating a mess in the current working directory (or the C drive), so I made it write into a subdirectory named `data` (but forgot to add code to automatically create it). When that directory doesn't exist, `imwrite` will fail (but not throw an exception). – Dan Mašek Jan 06 '23 at 14:34
  • @SmitShah_19 Yes, but post a proper new question about it (and ping me in a comment there), this is getting too chatty. Meanwhile read AVT's [GigE Features Reference](https://cdn.alliedvision.com/fileadmin/content/documents/products/cameras/various/features/GigE_Features_Reference.pdf), pages 16-22, keywords `TriggerSource`,`FixedRate`,`AcquisitionFrameRateAbs`. – Dan Mašek Jan 06 '23 at 14:51
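The callback-plus-queue pipeline described in the comments can be sketched without camera hardware. In the sketch below the Vimba streaming callback is simulated by an ordinary function and the frames are dummy objects (real code would receive numpy arrays from `frame.as_numpy_ndarray()`); only the producer/consumer structure is the point here:

```python
import queue
import threading

# The acquisition thread (in real code: Vimba's internal capture thread)
# pushes frames into a bounded queue; the main thread drains it and does
# the slow work (display, annotate, imwrite) without stalling capture.
frame_queue = queue.Queue(maxsize=10)
SENTINEL = None  # marks end of stream

def frame_handler(frame):
    """Stand-in for the Vimba frame callback. Drop the frame if the
    queue is full so acquisition never blocks."""
    try:
        frame_queue.put_nowait(frame)
    except queue.Full:
        pass

def simulated_acquisition(n_frames):
    # Stand-in for cam.start_streaming(handler=frame_handler):
    # produce a few fake frames, then signal completion.
    for i in range(n_frames):
        frame_handler({'id': i})
    frame_queue.put(SENTINEL)

def consume():
    processed = 0
    while True:
        frame = frame_queue.get()
        if frame is SENTINEL:
            break
        # here: do_something(frame) -- annotate and save in the real code
        processed += 1
    return processed

producer = threading.Thread(target=simulated_acquisition, args=(5,))
producer.start()
result = consume()
producer.join()
print(result)  # 5 frames processed
```

The bounded queue is the key design choice: if saving JPEGs falls behind, frames are dropped at the handler instead of back-pressuring the camera.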

1 Answer


As stated in the comments, you need asynchronous acquisition. If you want to use multithreading and keep your own frame handling (converting the numpy array to OpenCV format), you can modify line 62 in multithreading_opencv.py, cv_frame = frame.as_opencv_image(), and replace it with your frame handling. You can of course do the same with the other Python scripts as well and disable the check for the pixel format: in asynchronous_grab_opencv.py, for example, by replacing the whole code block in lines 117-132 with just setting the pixel format.
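That replacement could look like the following sketch. It cannot run without the camera attached, so treat it as an illustration; `setup_camera` is the helper name used in the shipped example (an assumption on my part), and BayerRG8 is one of the formats the G-201C supports:

```python
def setup_camera(cam):
    # Instead of negotiating a format from OPENCV_PIXEL_FORMATS,
    # force a format the camera is known to support:
    cam.set_pixel_format(PixelFormat.BayerRG8)
```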

You can also first get the frame, use the Vimba image transform to change it to the BGR8 pixel format, and then use it as an OpenCV frame:

frame.convert_pixel_format(PixelFormat.Bgr8)

opencv_frame = frame.as_opencv_image()

Both pixel format solutions do essentially the same thing. I'm not sure which would be faster, but just using asynchronous acquisition will already lead to a higher FPS.
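For completeness, here is a hedged sketch of how that conversion could sit inside a streaming callback. The names follow the Vimba Python examples (FrameStatus, queue_frame, start_streaming); it needs a camera attached, so it is an illustration rather than a tested implementation:

```python
import queue

frame_queue = queue.Queue(maxsize=10)

def frame_handler(cam, frame):
    # Vimba invokes this from its own capture thread for every frame.
    if frame.get_status() == FrameStatus.Complete:
        frame.convert_pixel_format(PixelFormat.Bgr8)
        img = frame.as_opencv_image()      # now a BGR8 array OpenCV can use
        try:
            frame_queue.put_nowait(img)    # hand off; keep the callback short
        except queue.Full:
            pass                           # drop rather than stall acquisition
    cam.queue_frame(frame)                 # return the buffer to the driver

# used as: cam.start_streaming(handler=frame_handler, buffer_count=10)
```

The main thread would then pull from `frame_queue` for display and saving, as in the multithreading example.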

T F
  • I tried your method of removing lines 117-132 in asynchronous_grab_opencv.py and replacing them with cam.set_pixel_format(PixelFormat.BayerRG8). Now, in the Handler class, when I show the image, I set the window name to frame.get_pixel_format() and it shows BayerRG8, but the image is displayed as PixelFormat.Mono8. – Hardik_Zalavadiya Jan 05 '23 at 14:13
  • Did you convert the pixel format into BGR before displaying it? I'm assuming you're using as_opencv_image(), but Vimba python doesn't have an opencv compatible bayer format programmed. So (without checking the source code) I think it is automatically converting the image into a compatible mono8. – T F Jan 06 '23 at 16:14