So, using plain Docker, I came to the conclusion that two different CUDA versions can't work together in the following setup: using the local GPU (with CUDA 11, for example) from a Docker environment with an older OS and an older CUDA version. The container has to reach the local GPU through its own CUDA, and since the two versions aren't compatible, the whole thing seems impossible.
Is this exactly the issue nvidia-docker2 addresses?
Suppose my host OS is Ubuntu 20+ with CUDA 11+, and I need to run one codebase that requires CUDA 8 (which is only compatible with Ubuntu 16), and another codebase that requires CUDA 10 on Ubuntu 18.
From what I've seen (correct me if I'm wrong), nvidia-docker2 would let me run the nvidia-smi command inside the container itself, so the container behaves as if ("thinks") the GPU were local to it. That means I could create one container with Ubuntu 16 and another with Ubuntu 18, and my GPU would happily work with whatever CUDA, cudatoolkit, and cuDNN versions I install in each container. I also read that these components can live only inside the containers, so the CUDA version installed on my host machine wouldn't matter. Am I wrong?
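To make this concrete, here is roughly what I have in mind, a sketch only: the image tags below are assumptions based on the official nvidia/cuda images and may not match what is actually available, and depending on the Docker version the older --runtime=nvidia flag or the nvidia-docker wrapper might be needed instead of --gpus all:

# Hypothetical sketch: two containers sharing the same host GPU, each with its own CUDA.
# Container for the old codebase (Ubuntu 16.04 + CUDA 8):
docker run --rm --gpus all nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04 nvidia-smi
# Container for the other codebase (Ubuntu 18.04 + CUDA 10):
docker run --rm --gpus all nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04 nvidia-smi

If nvidia-smi works inside both containers, that would confirm (as I understand it) that each one sees the GPU regardless of the host's CUDA version.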
And if that is the case, a further question: with Docker and the NVIDIA Container Toolkit, would I still be able to run the interpreter from inside the container, as I currently do with Docker and PyCharm? In other words, does it keep that functionality in addition to letting me run different CUDA versions in different containers?
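For example, this is the kind of check I'd hope to keep doing; the image name my-cuda10-image is hypothetical, and it assumes a PyTorch build matching the container's CUDA is installed in the image:

# Hypothetical check that the container's own Python interpreter sees the GPU:
docker run --rm --gpus all my-cuda10-image python -c "import torch; print(torch.cuda.is_available())"

If that prints True, I'd expect a PyCharm remote interpreter pointed at the same image to work the same way.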
Or am I wrong, and was I hoping too optimistically that it's possible to debug different Docker environments with incompatible CUDA versions against the same local GPU, without installing different Ubuntu versions on the hard drive?
Or is that last option (several Ubuntu installations on the same computer) the only possible one? It sounds like the safest and easiest solution anyway, but correct me where I'm wrong.