I recently moved from tensorflow-gpu 1.x to TensorFlow 2, but TF no longer recognizes my GPU. I use Spyder with a non-Anaconda interpreter and manage my packages with pip. The OS is Windows 10 and the GPU is a GTX 1660 Ti, so it does support CUDA.
- Python version -- 3.7
- TensorFlow version -- 2.4.0
- CUDA version -- 11.1
- cuDNN version -- 8.0.4.30
- I have copied all the cuDNN DLLs after extraction, as specified here -- https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
- All the necessary PATH variables have been added as well. I have Visual Studio 2019 as well as VS Code installed from before (from when I used to run TensorFlow on my GPU).
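To rule out a PATH/DLL resolution problem, here is a minimal sketch that tries to load the CUDA/cuDNN libraries directly with ctypes. The DLL names are assumptions based on the CUDA 11.x and cuDNN 8 Windows packages (cudart64_110.dll is the one TF reports opening in the log further down); adjust them if your install uses different file names.

import ctypes

# Try to resolve each DLL via the Windows PATH, the same way TF would
for dll in ("cudart64_110.dll", "cublas64_11.dll", "cudnn64_8.dll"):
    try:
        ctypes.WinDLL(dll)
        print(dll, "loaded OK")
    except OSError as err:
        print(dll, "NOT found:", err)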
NVIDIA SMI output :-
C:\WINDOWS\system32>nvidia-smi
Tue Feb 9 17:08:11 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 461.40 Driver Version: 461.40 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 166... WDDM | 00000000:01:00.0 Off | N/A |
| N/A 76C P0 26W / N/A | 273MiB / 6144MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 15272 C+G ...n64\EpicGamesLauncher.exe N/A |
| 0 N/A N/A 16500 C ...iles\Python37\pythonw.exe N/A |
+-----------------------------------------------------------------------------+
nvcc --version :-
C:\WINDOWS\system32>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:54:10_Pacific_Daylight_Time_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.relgpu_drvr455TC455_06.29190527_0
tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)
Out[9]: False
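(tf.test.is_gpu_available is deprecated in TF 2.x, so for reference the recommended device query would look like the sketch below; this is just the call, not captured output.)

import tensorflow as tf

# Preferred TF 2.x check; returns an empty list when no GPU is registered
print(tf.config.list_physical_devices('GPU'))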
As per https://www.tensorflow.org/install/source, I have the correct combination of cuDNN and CUDA installed. Yet TF doesn't recognize my GPU. Thanks in advance for your time :)
Edit 1 -- Added the log received after importing TensorFlow in cmd.
Edit 2 -- Set the log level as specified in "Tensorflow logging messages do not appear".
>>> os.environ['TF_CPP_MIN_VLOG_LEVEL']='3'
>>> os.environ['TF_CPP_MIN_LOG_LEVEL']='0'
>>> import tensorflow as tf
2021-02-09 18:10:16.489113: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-02-09 18:10:16.558283: I tensorflow/core/platform/cloud/gcs_file_system.cc:804] GCS cache max size = 0 ; block size = 67108864 ; max staleness = 0
2021-02-09 18:10:16.562188: I .\tensorflow/core/platform/cloud/ram_file_block_cache.h:64] GCS file block cache is disabled
2021-02-09 18:10:16.565031: I tensorflow/core/platform/cloud/gcs_file_system.cc:844] GCS DNS cache is disabled, because GCS_RESOLVE_REFRESH_SECS = 0 (or is not set)
2021-02-09 18:10:16.568337: I tensorflow/core/platform/cloud/gcs_file_system.cc:874] GCS additional header DISABLED. No environment variable set.
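A further check that might be worth adding here: the installed wheel can report which CUDA/cuDNN versions it was built against, to compare with the 11.1 / 8.0.4.30 I have installed. This is only a sketch, assuming tf.sysconfig.get_build_info is available (it should be from TF 2.3 onward):

import tensorflow as tf

# Versions the pip wheel was compiled against, not the versions installed on the system
info = tf.sysconfig.get_build_info()
print(info.get("cuda_version"), info.get("cudnn_version"), info.get("is_cuda_build"))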
Edit 3 -- Added evidence that PyTorch registers the GPU, as mentioned in "How to check if pytorch is using the GPU?"
import torch
torch.cuda.current_device()
Out[2]: 0
torch.cuda.device(0)
Out[3]: <torch.cuda.device at 0x222be69a8c8>
torch.cuda.device_count()
Out[4]: 1
torch.cuda.get_device_name(0)
Out[5]: 'GeForce GTX 1660 Ti'
torch.cuda.is_available()
Out[6]: True
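As an extra sanity check (a sketch, not part of the session above), one can run a small op on the GPU to confirm PyTorch really uses it:

import torch

# Allocate a tensor directly on the GPU and run a matrix multiply on it
x = torch.rand(1000, 1000, device="cuda")
print((x @ x).sum().item())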