I'm developing a hand-tracking application in C++ and OpenGL (plus Qt, Eigen, and OpenCV).
OpenGL is used to render a 3D model on every iteration of the tracking loop.
The application runs in just 1 thread.
I'm interested in running some very time-consuming experiments, so I was wondering whether I can parallelize things by starting many instances of the same executable, each with different parameters.
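To illustrate what I mean, something like the following (a rough sketch using QProcess, since the project already uses Qt; the executable name and the --param flag are made up):

```cpp
#include <QCoreApplication>
#include <QProcess>
#include <QStringList>

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);

    // Hypothetical parameter values; each instance gets its own value.
    const QStringList paramValues = { "0.1", "0.5", "1.0" };

    for (const QString& value : paramValues) {
        // Each instance is a completely separate process with its own OpenGL context.
        QProcess::startDetached("./hand_tracker",
                                QStringList() << "--param" << value);
    }
    return 0;
}
```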
Just trying it out, it seems to work, but I'm not sure whether the different instances interfere with each other on the GPU. To be more specific: if I run some experiments with only one instance at a time, and then repeat the same experiments with many instances running concurrently, will the results be numerically identical?
Of course I'll try to verify this through experiments, but I was wondering whether anybody can point me to a suitable reference (I didn't find anything truly relevant).
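By "numerically identical" I mean comparing the rendered results bit for bit across runs, along these lines (just a sketch; it assumes the result can be read back from the currently bound framebuffer as floats, and checksumFrame is a name I made up):

```cpp
#include <GL/gl.h>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hash the currently bound framebuffer so that runs can be compared bit for bit.
uint64_t checksumFrame(int width, int height)
{
    std::vector<float> pixels(static_cast<std::size_t>(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels.data());

    // FNV-1a over the raw bytes: any single-bit difference changes the hash.
    uint64_t h = 1469598103934665603ull;
    const auto* bytes = reinterpret_cast<const unsigned char*>(pixels.data());
    for (std::size_t i = 0; i < pixels.size() * sizeof(float); ++i) {
        h ^= bytes[i];
        h *= 1099511628211ull;
    }
    return h;
}
```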
Any ideas on this matter?
Answering the first comment (@KillianDS)
The details of the experiment are quite mathematical and would only add noise to the question.
The idea is that a tracking algorithm tries to find correspondences between the previous and the current frame. Using these correspondences, it takes the 3D model in the pose of the previous frame (already known) and transforms it so that it fits the current frame. A few (mathematical) parameters affect this step, and the experiments consist of taking many test frames and running the algorithm on them with many different parameter values, in order to find the optimal value (or range of values).
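Schematically the experiment looks something like this (TrackerParams and trackSequence are placeholder names, not my actual code):

```cpp
#include <iostream>
#include <limits>
#include <vector>

// Placeholder for the (mathematical) parameters that affect the fitting step.
struct TrackerParams { double lambda; int iterations; };

// Placeholder: in the real code this would run the tracking loop over all
// test frames and return an aggregate fitting error for this parameter set.
double trackSequence(const TrackerParams& params)
{
    (void)params;
    return 0.0;  // dummy value, just so the sketch compiles
}

int main()
{
    const std::vector<TrackerParams> grid = {
        {0.1, 10}, {0.1, 20}, {0.5, 10}, {0.5, 20}  // combinations to test
    };

    double bestError = std::numeric_limits<double>::max();
    TrackerParams best{};

    for (const TrackerParams& p : grid) {
        const double error = trackSequence(p);
        if (error < bestError) { bestError = error; best = p; }
    }

    std::cout << "best: lambda=" << best.lambda
              << " iterations=" << best.iterations
              << " error=" << bestError << "\n";
    return 0;
}
```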
During the experiments, OpenGL is used to project the 3D model so that it fits the current frame image. What you see on screen is rendered as usual, but the actual work is done in an offscreen buffer on the GPU.
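The offscreen part is the usual framebuffer-object setup, roughly like this (a generic sketch that assumes a valid OpenGL context already exists, e.g. created through Qt; not my exact code):

```cpp
#include <GL/glew.h>

// Create an offscreen render target: a float colour texture plus a depth buffer.
GLuint createOffscreenTarget(int width, int height, GLuint* colorTex)
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    // Colour attachment that the 3D model is projected into.
    glGenTextures(1, colorTex);
    glBindTexture(GL_TEXTURE_2D, *colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                 GL_RGBA, GL_FLOAT, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *colorTex, 0);

    // Depth attachment so occlusions of the model are resolved correctly.
    GLuint depthRb = 0;
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo;
}
```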
Up to now I have been running the experiments with multiprocessing (many instances at the same time), but when I ran just one instance at a time I couldn't reproduce exactly the same numbers, because of a bug I just found. (The same test is now running again, but it is very time-consuming.)
However, I was wondering whether you can really trust the GPU when you run many instances at the same time, or whether things in GPU memory can get messed up.
Answering the second comment (@Lajos Arpad)
To restate the problem briefly: I don't want to share anything between instances; I want to be sure that the different instances (in the multiprocessing case you mention) don't affect each other at all (i.e. no sharing whatsoever).