I have been tasked with simulating high-speed thermal imaging. The background is that our FLIR thermal camera operates at 9 fps. The idea is that the color camera, which already runs at high speed (30 or 60 fps), could be overlaid with the thermal image, with a weight given to each image (for example, x for the thermal pixel and (1 - x) for the color pixel, then summing the two). In other words, the color images would use the thermal image as an intensity map, if that makes sense.
For the sake of this discussion, assume that
- There is no perspective correction needed between the two cameras
- The thermal image and the color camera image are of the same size
- We are only dealing with grayscale images from the two cameras, so the calculations are all in grayscale. The resulting grayscale image would be colormapped for final display, just like any typical thermal image.
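Under those assumptions, the per-pixel fusion I have in mind is just a weighted sum. A minimal sketch (the function name `blend` and the toy 2x2 arrays are my own, and I assume float grayscale values in [0, 1]):

```python
import numpy as np

def blend(thermal, color, x=0.5):
    """Weighted overlay: x * thermal + (1 - x) * color, per pixel.

    Both inputs are assumed to be same-size grayscale float arrays in [0, 1].
    The result would then be colormapped for display.
    """
    return x * thermal + (1.0 - x) * color

# Tiny example: a 2x2 thermal frame used as an intensity map over a color frame.
thermal = np.array([[1.0, 0.0], [0.5, 0.5]])
color   = np.array([[0.0, 1.0], [0.5, 1.0]])
fused = blend(thermal, color, x=0.7)
```

The weight x would let me tune how strongly the thermal signal dominates the fused frame.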
However, I am unclear about the whole thing, and several questions pop up as I think about it. AFAIK, what I am really looking for is a frame-rate booster, but for thermal imaging, i.e. a predictor/interpolator. Agree? Overlaying or intensity mapping alone does not work when the two streams have different frame rates: between thermal updates, the fused frames would just reuse stale thermal data.
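To make the frame-rate mismatch concrete: the simplest interpolator I can think of is a linear blend between the two thermal frames that bracket each color-frame timestamp. This is only a sketch of that idea (the function name and the 9 fps / 30 fps timing numbers below are illustrative; a real solution would presumably need motion compensation rather than plain linear interpolation):

```python
import numpy as np

def interpolate_thermal(frame_a, frame_b, t):
    """Linearly interpolate between two consecutive thermal frames.

    t in [0, 1]: 0 returns frame_a, 1 returns frame_b. A stand-in for a
    real motion-compensated frame interpolator.
    """
    return (1.0 - t) * frame_a + t * frame_b

# Thermal at 9 fps, color at 30 fps: for a color frame captured at time tc
# between thermal frames at ta and tb, the interpolation weight is
# (tc - ta) / (tb - ta).
ta, tb = 0.0, 1.0 / 9.0      # timestamps of two consecutive thermal frames
tc = 2.0 / 30.0              # timestamp of a color frame in between
t = (tc - ta) / (tb - ta)    # fractional position of tc between ta and tb
frame_a = np.full((2, 2), 10.0)
frame_b = np.full((2, 2), 20.0)
synthetic = interpolate_thermal(frame_a, frame_b, t)
```

Each synthetic thermal frame could then be fused with the matching color frame as described above.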
Does this idea make sense to anyone? Are there any known solutions/papers/implementations?
Thanks a lot!