I recently bought a computer with an NVIDIA GeForce GTX 1050. I have been trying to use it with TensorFlow and Keras through a local Jupyter notebook. I have tensorflow-gpu and keras-gpu in my environment, and I have the correct versions of CUDA and cuDNN installed, with all the necessary paths added to my environment variables, as per https://www.tensorflow.org/install/gpu. When I run the nvidia-smi command I get this:
[screenshot: output of nvidia-smi]
When I look at the output of nvidia-smi -q, it says 'Not available in WDDM driver model'.
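For reference, this is roughly how I am checking GPU visibility from inside the notebook (a minimal sketch, assuming TensorFlow 2.x; the exact calls may differ on older tensorflow-gpu releases):

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means CUDA/cuDNN
# were not picked up, even if nvidia-smi shows the card.
print(tf.config.list_physical_devices('GPU'))

# Log device placement so each op reports whether it ran on CPU or GPU.
tf.debugging.set_log_device_placement(True)

# Small matmul as a smoke test; the log should show it on /GPU:0
# if the GPU is actually being used.
a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
c = tf.matmul(a, b)
print(c.device)
```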
After doing some research, it seems all GeForce products support only WDDM, so how do people use their GeForce products for deep learning? I have seen countless YouTube videos and forum posts with people claiming to use their GeForce GPU with TensorFlow and Keras. What am I missing?