So I am working on a project where a person has to hide a target from a camera when a light comes on; the program must recognize that the target has been hidden, and I will also be collecting data from some other sensors. My problem is that while OpenCV is really useful where I require it, when it comes to showing the image on the monitor via imshow(), there is significant latency (60~110 ms) that can interfere with the data collected. For example, I am using an IMU sampling at 200 Hz (5 ms between samples). People on the OpenCV forum say that this command is mostly for debugging and that another GUI should be used for the real thing (http://answers.opencv.org/question/91867/displaying-image-results-in-real-time/).
Even when I am only capturing the image from the camera, with no processing at all, the latency is perceptible. I've seen topics on StackOverflow saying that threading should help (my understanding of that pattern is sketched below), but I am completely lost and nothing has worked so far; maybe I am looking at the wrong things. I know that the monitor itself already adds ~16 ms to the image and that other factors also have to be considered, but my target is 40 ms or less.
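Here is a minimal sketch of the threaded capture pattern I pieced together from those StackOverflow topics, assuming OpenCV 3.3 and C++11 (the names are mine, and the IMU part is just a placeholder comment). The idea, as I understand it, is that the grabber thread keeps overwriting a single "latest frame" slot, so imshow() always gets the freshest frame and nothing ages in a queue:

```cpp
#include <atomic>
#include <mutex>
#include <thread>
#include <opencv2/opencv.hpp>

std::mutex frameMutex;
cv::Mat latestFrame;              // most recent frame, shared between threads
std::atomic<bool> running{true};

void captureLoop(cv::VideoCapture& cap)
{
    cv::Mat frame;
    while (running) {
        cap >> frame;                       // blocking grab
        if (frame.empty()) continue;
        std::lock_guard<std::mutex> lock(frameMutex);
        frame.copyTo(latestFrame);          // overwrite: keep only the newest frame
    }
}

int main()
{
    cv::VideoCapture cap(0);
    std::thread grabber(captureLoop, std::ref(cap));

    while (running) {
        // ... IMU/sensor sampling would live on its own thread the same way,
        // so it is never blocked by the display calls below ...

        cv::Mat display;
        {
            std::lock_guard<std::mutex> lock(frameMutex);
            latestFrame.copyTo(display);
        }
        if (!display.empty()) cv::imshow("preview", display);
        if (cv::waitKey(1) == 27) running = false;   // Esc quits
    }
    grabber.join();
    return 0;
}
```

Even with this, the display loop itself still pays the imshow()/waitKey() cost, which is why I am asking about alternatives.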
TL;DR: I am looking for some way to reduce the latency of imshow(), or to replace it with something else that I do not know yet (I've been looking at other GUIs, like DirectShow), and I would really appreciate at least a pointer to where and what I should look for. Any kind of suggestion or tip would be great.
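One thing I ran across while searching, but have not been able to verify on my setup: if OpenCV is built with WITH_OPENGL=ON, HighGUI windows can be created with the cv::WINDOW_OPENGL flag, which supposedly goes through a different drawing path than the default one. A sketch of what I mean (namedWindow() throws if the build lacks OpenGL support):

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    // Requires an OpenCV build with WITH_OPENGL=ON
    cv::namedWindow("preview", cv::WINDOW_OPENGL | cv::WINDOW_AUTOSIZE);

    cv::Mat frame;
    while (true) {
        cap >> frame;
        if (frame.empty()) break;
        cv::imshow("preview", frame);   // upload goes through the OpenGL path
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}
```

I have no numbers on whether this actually beats the default window, so I would welcome experience reports.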
I am currently using OpenCV 3.3.0, with the latest Qt and the MSVC2015 compiler (due to some circumstances with a previous camera SDK, but looking to change ASAP). The project restricts my code to mainly Windows and C++. This is the camera I am currently working with: http://www.elpcctv.com/1080p-mini-usb-camera-full-hd-usb20-ov2710-color-sensor-mjpeg-format-and-36mm-lens-p-207.html.
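For completeness, this is the capture-side configuration I plan to try first, since the camera streams MJPEG natively. I am not sure every property is honored by the DirectShow backend (CAP_PROP_BUFFERSIZE in particular seems to be a request rather than a guarantee on Windows), so I check the return values:

```cpp
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // Open device 0 explicitly via DirectShow; cv::CAP_MSMF (Media Foundation)
    // is the other Windows option.
    cv::VideoCapture cap(cv::CAP_DSHOW + 0);

    // Ask for the camera's native MJPEG stream instead of a converted format.
    bool fourccOk = cap.set(cv::CAP_PROP_FOURCC,
                            cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));

    // Try to keep the driver queue short so frames don't age in a buffer.
    bool bufOk = cap.set(cv::CAP_PROP_BUFFERSIZE, 1);

    std::cout << "fourcc set: " << fourccOk
              << ", buffersize set: " << bufOk << '\n';
    return 0;
}
```

If either of these settings is known to be ignored with this kind of UVC camera on Windows, that would also be useful to know.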