I've worked with TensorFlow for a while and everything worked properly until I tried to switch to the GPU version.
What I did:

- uninstalled the previous tensorflow package and pip installed tensorflow-gpu (v2.0)
- downloaded and installed Visual Studio Community 2019
- downloaded and installed CUDA 10.1
- downloaded and installed cuDNN
I tested with the CUDA sample "deviceQuery_vs2019" on my Nvidia GeForce RTX 2070 and got a positive result (test passed).
When I run a previously working file, I get this error:

tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
After some research I found that the supported CUDA version for TensorFlow 2.0 is 10.0, so I downgraded and updated the CUDA path, but nothing changed.
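One thing I thought of checking is whether the OS loader can actually resolve the CUDA 10.0 runtime library after the downgrade. This is a rough sketch, assuming the usual Windows runtime library names (cudart64_100 for CUDA 10.0, cudart64_101 for CUDA 10.1); the helper name `find_cuda_runtimes` is just mine:

```python
import ctypes.util

# Check which CUDA runtime libraries the OS loader can resolve.
# TF 2.0 GPU wheels are built against CUDA 10.0, so on Windows the loader
# must be able to find cudart64_100.dll; cudart64_101.dll (CUDA 10.1)
# does not satisfy a TF build that expects 10.0.
def find_cuda_runtimes(names=("cudart64_100", "cudart64_101", "cudart")):
    """Return a mapping of runtime library name -> resolved path (or None)."""
    return {name: ctypes.util.find_library(name) for name in names}

if __name__ == "__main__":
    for name, path in find_cuda_runtimes().items():
        print(name, "->", path or "not found")
```

If cudart64_100 comes back as "not found" while cudart64_101 resolves, the PATH change probably didn't take effect.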
Using this code:
import tensorflow as tf

print("Num GPUs Available: ",
      len(tf.config.experimental.list_physical_devices('GPU')))
I get:
2019-10-01 16:55:03.317232: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-10-01 16:55:03.420537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
Num GPUs Available: 1
name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
2019-10-01 16:55:03.421029: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-10-01 16:55:03.421849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
[Finished in 2.01s]
CUDA seems to recognize the card, and so does TensorFlow, but I cannot get rid of the error: tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
What am I doing wrong? Should I stick with CUDA 10.0? Am I missing a piece of the installation?
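For reference, here is a small sketch (not TensorFlow-specific, helper name `cuda_path_entries` is mine) that I could use to confirm which CUDA directories are actually on PATH, in search order; if a v10.1 bin directory still precedes the v10.0 one, TensorFlow may be loading DLLs from the wrong toolkit:

```python
import os

def cuda_path_entries(path=None):
    """List PATH entries that mention CUDA, preserving search order."""
    if path is None:
        path = os.environ.get("PATH", "")
    return [p for p in path.split(os.pathsep) if "cuda" in p.lower()]

if __name__ == "__main__":
    for i, entry in enumerate(cuda_path_entries()):
        print(i, entry)
```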