My problem is the same as 1: How to set specific gpu in tensorflow?, but it didn't solve my problem.
I have 4 GPUs in my PC and I want to run my code on GPU 0, but whenever I run my tensorflow code, it always runs on GPU 2. After reading these (2, 3, 4) solutions and other information, I tried to solve my problem by adding:
os.environ['CUDA_VISIBLE_DEVICES']= '0'
in the python code, or setting
CUDA_VISIBLE_DEVICES
as an environment variable in the PyCharm project configuration settings. Furthermore, I also add
CUDA_LAUNCH_BLOCKING=2
in the code or as an environment variable, to block GPU 2. Is this the right way to block a GPU?
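For clarity, here is a minimal sketch of what I run (the graph here is only a placeholder, not my real model):

```python
import os

# set before importing tensorflow, so only GPU 0 should be visible
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import tensorflow as tf

# placeholder graph just to put some load on a GPU
a = tf.random_normal([1000, 1000])
b = tf.matmul(a, a)

with tf.Session() as sess:
    sess.run(b)
```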
The above solutions are not working for me; the code still always runs on GPU 2. I checked it with watch nvidia-smi.
My system environment is:
- Ubuntu 16.04
- RTX2080Ti (all 4 GPUs)
- Driver version 418.74
- CUDA 9.0 and cuDNN 7.5
- tensorflow-gpu 1.9.0
Any suggestions for this problem? It's weird that even after adding the environment variable in the PyCharm project settings or in the python code, still only GPU 2 is visible. When I remove CUDA_VISIBLE_DEVICES, tensorflow detects all 4 GPUs, but the code still runs only on GPU 2.
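For reference, this is roughly how I check which devices tensorflow detects (a small sketch, not my full code):

```python
from tensorflow.python.client import device_lib

# without CUDA_VISIBLE_DEVICES set, this lists all 4 GPUs
print(device_lib.list_local_devices())
```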