
I recently bought a computer with an NVIDIA GeForce GTX 1050 and have been trying to use it with TensorFlow and Keras through a local Jupyter notebook. I have tensorflow-gpu and keras-gpu in my environment, and the correct versions of CUDA and cuDNN installed with all the necessary paths set in my environment variables, as per https://www.tensorflow.org/install/gpu. When I run the nvidia-smi command I get this: (screenshot of nvidia-smi output). When I look at the output of nvidia-smi -q, it says 'Not available in WDDM driver model'.

After doing some research, it seems all GeForce products support only WDDM, so how do people use their GeForce products for deep learning? I have seen countless YouTube videos and forum posts with people claiming to use their GeForce GPUs with TensorFlow and Keras. What am I missing?

  • A native environment often causes issues when using the GPU. Try installing Anaconda, install all the dependencies in a conda environment, and then run your code there. Since conda automatically installs the additional dependencies of any package alongside it, I hope this fixes your problem. – Rishab P Jul 13 '20 at 17:41
  • @RishabP. this is what I have done from the start, no joy. – Charlie Baskerville Jul 13 '20 at 21:06

1 Answer


Are you concerned that the models are not running on your GPU? This SO answer seems similar to your situation: https://stackoverflow.com/a/44228331/8416255
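As a quick sanity check, you can ask TensorFlow directly which devices it can see; WDDM mode does not prevent CUDA from working, it only limits what nvidia-smi can report. This sketch assumes TensorFlow 2.x (on the older 1.x `tensorflow-gpu` packages you would use `tf.test.is_gpu_available()` instead):

```python
import tensorflow as tf

# List the GPUs TensorFlow can use. An empty list means TensorFlow will
# silently fall back to the CPU, even if nvidia-smi shows the card.
gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible to TensorFlow:", gpus)

# Optionally, log where each op is placed so you can confirm that your
# model's operations actually land on the GPU (e.g. '/device:GPU:0').
tf.debugging.set_log_device_placement(True)
```

If the list is empty, the problem is usually a CUDA/cuDNN version mismatch with your TensorFlow build rather than the WDDM driver model.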

– rsn