
I am using Qt 4.8.6 to display multiple radar videos. At the moment I get about 4096 azimuths (a full 360°) every 2.5 seconds per video. I display my image using a class inherited from QGraphicsObject (see here), using one of the RGB channels for each video.

Per azimuth I get the angle and an array of 8192 rangebins, and my image has a size of 1024x1024 pixels. I currently check for every pixel (going through every x-coordinate and checking the minimum and maximum y-coordinate for every azimuth and pixel coordinate) which rangebins fall on that pixel, and write the largest value into my image array.
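For reference, here is a simplified sketch of the kind of polar-to-Cartesian fill I mean. It is not my actual per-pixel scan (it does a plain forward mapping of one azimuth's rangebins into the image), and the names, the scaling, and the use of one RGB channel per video are only illustrative:

```cpp
// Simplified illustration only (not my real per-pixel scan): forward-map one
// azimuth's rangebins into a 1024x1024 image, keeping the maximum per pixel.
#include <QImage>
#include <cmath>

void fillAzimuth(QImage &image,            // 1024x1024, Format_RGB32
                 double angleRad,          // azimuth angle in radians
                 const quint8 *rangebins,  // 8192 samples for this azimuth
                 int numBins)              // 8192
{
    const int cx = image.width() / 2;
    const int cy = image.height() / 2;
    const double scale = (image.width() / 2.0) / numBins; // bins -> pixels

    for (int r = 0; r < numBins; ++r) {
        const int x = cx + int(r * scale * std::cos(angleRad));
        const int y = cy + int(r * scale * std::sin(angleRad));
        if (x < 0 || x >= image.width() || y < 0 || y >= image.height())
            continue;

        QRgb old = image.pixel(x, y);
        const quint8 v = rangebins[r];
        if (v > qRed(old))  // keep the strongest return in the red channel
            image.setPixel(x, y, qRgb(v, qGreen(old), qBlue(old)));
    }
}
```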

My problems

  • Calculating each azimuth takes about 1 ms, which is far too slow. (I receive two azimuths roughly every 600 microseconds, and later there may be even more video channels.)
  • I want to zoom and move my image, and so far I have thought about two methods to do that:
    • Using an image array at full size and zooming/moving the QGraphicsScene directly ("virtually"; see the sketch after this list). That would cause the array to have a size of 16384x16384x4 bytes, which is far too big (I cannot allocate that much memory).
    • Saving multiple images for different scale factors and offsets. But for that my transformation algorithm would have to run multiple times (and it is already slow), and a new zoom or offset would only show up after the full 2.5 seconds.
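To make the first option concrete, by "zooming and moving the scene directly" I mean changing only the view transform while the backing image keeps its native resolution. This is just a sketch using standard QGraphicsView calls; `view`, `zoomFactor`, and `sceneCenter` are placeholder names:

```cpp
// "Virtual" zoom/pan: only the view transform changes, not the image data.
#include <QGraphicsView>
#include <QPointF>

void zoomAndPan(QGraphicsView *view, double zoomFactor, const QPointF &sceneCenter)
{
    view->scale(zoomFactor, zoomFactor);   // multiply the current zoom level
    view->centerOn(sceneCenter);           // pan so this scene point is centered
}
```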

Can you think of any better methods for doing this? Are there any standard rules for checking my algorithm for better performance?

I know this is a very specific question, but since my mentor is away for the next few days, I will give it a try here.

Thank you!


1 Answer


I'm not sure why you are using a QGraphicsScene for this scenario. Have you considered turning your data into a raster image and presenting it as a bitmap?
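For example, something along these lines (just a sketch; it assumes your channels are already merged into a 1024x1024 32-bit buffer that stays valid while the image is shown):

```cpp
// Sketch: wrap an existing 1024x1024 32-bit buffer in a QImage and show it
// as a pixmap item in a scene (buffer layout and ownership assumed here).
#include <QGraphicsScene>
#include <QGraphicsPixmapItem>
#include <QImage>
#include <QPixmap>

QGraphicsPixmapItem *showFrame(QGraphicsScene *scene, const uchar *buffer)
{
    QImage frame(buffer, 1024, 1024, QImage::Format_RGB32);
    return scene->addPixmap(QPixmap::fromImage(frame));
}
```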
