On my computer I have only the NVIDIA graphics card driver installed (just for the display, not CUDA or cuDNN). I followed your instructions, i.e.:
echo 'Name of the TENSORFLOW ENVIRONMENT:'
read ENVNAME
# CREATING THE ENV
conda create --name $ENVNAME -y
# ACTIVATE THE ENV
conda activate $ENVNAME
# INSTALLING THE CUDA TOOLKIT AND cuDNN
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0 -y
# INSTALLING TENSORFLOW
conda install tensorflow-gpu -y
conda install -c anaconda ipykernel -y
# ADDING ENV TO JUPYTER LIST
python3 -m ipykernel install --user --name=$ENVNAME
# VERIFY GPU SUPPORT
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
But I am getting back the following output:
>python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
Output> []
Does that mean that Python can't see my graphics card?
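To narrow this down, here is a slightly more detailed check I could run in the same environment (just my own diagnostic sketch; is_built_with_cuda and sysconfig.get_build_info are standard TensorFlow calls, and the build-info keys may simply be missing if conda pulled in a CPU-only build):

import tensorflow as tf

# Version of the TensorFlow package conda actually installed
print(tf.version.VERSION)
# True only if this TensorFlow build was compiled with CUDA support
print(tf.test.is_built_with_cuda())
# CUDA / cuDNN versions the build was linked against (keys absent on CPU-only builds)
info = tf.sysconfig.get_build_info()
print(info.get("cuda_version"), info.get("cudnn_version"))
# GPUs TensorFlow can see at runtime
print(tf.config.list_physical_devices('GPU'))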
At this point it's worth mentioning that my graphics card is an NVIDIA GeForce GTX 560, and the NVIDIA site lists only the "GeForce GTX 560 Ti" and "GeForce GTX 560M" as compatible cards. Does this mean my graphics card is not CUDA compatible? And if so, why does the following code appear to work when I install Numba and run it:
from numba import jit, cuda
import numpy as np
# to measure exec time
from timeit import default_timer as timer

# normal function to run on cpu
def func(a):
    for i in range(10000000):
        a[i] += 1

# function optimized to run on gpu
@jit(target_backend='cuda')
def func2(a):
    for i in range(10000000):
        a[i] += 1

if __name__ == "__main__":
    n = 10000000
    a = np.ones(n, dtype=np.float64)

    start = timer()
    func(a)
    print("without GPU:", timer() - start)

    start = timer()
    func2(a)
    print("with GPU:", timer() - start)