
I've successfully installed tensorflow (GPU) on Linux Ubuntu 16.04 and made some small changes in order to make it work with the new Ubuntu LTS release.

However, I thought (who knows why) that my GPU met the minimum compute capability requirement of 3.5. That was not the case, since my GeForce 820M has only 2.1. Is there a way to make the GPU version of TensorFlow work with my GPU?

I am asking because apparently there was no way to get the GPU version of TensorFlow working on Ubuntu 16.04, but by searching the internet I found out that was not the case, and indeed I almost got it working were it not for this unsatisfied requirement. Now I am wondering whether this issue with GPU compute capability can be worked around as well.

mickkk
  • I looked up that GPU and it seems very weak. If I were you I would just use CPU tensorflow since I don't think there will be much of a performance difference. Might even be faster. – chasep255 Jul 23 '16 at 15:10
  • @chasep255 I was able to use mxnet on GPU (Python). It ran a bit faster. Yeah the difference is not that much, but when running a lot of epochs even a small difference can help. If adapting the package to my machine does not require a lot of effort I think I could give it a try. – mickkk Jul 23 '16 at 15:22
  • @mickkk I noticed that tensorflow also supports OpenCL... Not sure if this can be used as an alternative. Going to try building it like that now. Will report back if it works ok. – Ru Hasha Feb 27 '17 at 08:25

3 Answers


Recent GPU versions of tensorflow require compute capability 3.5 or higher (and use cuDNN to access the GPU).

cuDNN also requires a GPU of cc3.0 or higher:

cuDNN is supported on Windows, Linux and MacOS systems with Pascal, Kepler, Maxwell, Tegra K1 or Tegra X1 GPUs.

  • Kepler = cc3.x
  • Maxwell = cc5.x
  • Pascal = cc6.x
  • TK1 = cc3.2
  • TX1 = cc5.3

Fermi GPUs (cc2.0, cc2.1) are not supported by cuDNN.

Older GPUs (e.g. compute capability 1.x) are also not supported by cuDNN.

Note that no version of cuDNN and no version of TF has ever officially supported NVIDIA GPUs below cc3.0: the initial version of cuDNN required cc3.0 GPUs, and the initial version of TF required cc3.0 GPUs as well.
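
If you want to verify what compute capability your GPU reports, here is a minimal sketch in Python, assuming the optional pycuda package (`pip install pycuda`) and a working CUDA driver are available:

```python
# Minimal sketch: query the compute capability of each visible GPU.
# Assumes pycuda is installed and a CUDA driver is present.
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print("GPU %d: %s -- compute capability %d.%d" % (i, dev.name(), major, minor))
```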

Robert Crovella
  • Now I wonder why I was able to run mxnet on GPU using cuDNN though... In principle you could not even install tensorflow GPU on the latest Ubuntu LTS... – mickkk Jul 23 '16 at 15:24
  • 3
    cuDNN won't work on a cc2.1 GPU. Perhaps mxnet has a gpu-enabled path which does not require cuDNN. This would seem to be the case [here](http://mxnet.readthedocs.io/en/latest/how_to/build.html). Note that GPU support is claimed for cc2.0 and greater, but that it uses "CUDNN to **accelerate** the GPU computation". – Robert Crovella Jul 23 '16 at 15:39
  • @RobertCrovella warning: the first two links are 404 – JarsOfJam-Scheduler Aug 05 '19 at 07:52

Sep. 2017 update: there is no way to do this without problems and pain. I tried every approach, including the trick below to force it to run, but finally had to give up. If you are serious about TensorFlow, just go ahead and buy a compute capability 3.0 (or higher) GPU.

This is a trick to force TensorFlow to run on a compute capability 2.0 GPU (not officially supported):

  1. Find the file Lib/site-packages/tensorflow/python/_pywrap_tensorflow_internal.pyd (or Lib/site-packages/tensorflow/python/_pywrap_tensorflow.pyd)
  2. Open it with Notepad++ or something similar

  3. Search for the first occurrence of 3\.5.*5\.2 using regex

  4. You will see a 3.0 before the 3.5*5.2 match; change it to 2.0

I made the change above and could do simple calculations on the GPU, but I got stuck with strange, unexplained issues when trying practical projects (those same projects run fine on a compute capability 3.0 GPU).
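
For reference, the same edit can also be scripted. Below is a rough Python sketch of the binary patch described in the steps above; the path is just an example, the patch remains unsupported, and (per the comments below) it can lead to incorrect results:

```python
# Rough sketch of the unsupported patch described above: replace the
# "3.0" that precedes the "3.5 ... 5.2" capability list with "2.0".
# The path is an example; adjust it to your own site-packages layout.
import re

path = "Lib/site-packages/tensorflow/python/_pywrap_tensorflow_internal.pyd"

with open(path, "rb") as f:
    data = f.read()

# First occurrence of the 3\.5.*5\.2 pattern, as in step 3.
m = re.search(rb"3\.5.*?5\.2", data)
if m is None:
    raise SystemExit("pattern not found")

# The last "3.0" before the match is the value to change (same length,
# so the binary layout is preserved).
pos = data.rfind(b"3.0", 0, m.start())
if pos == -1:
    raise SystemExit("no 3.0 found before the pattern")

with open(path, "wb") as f:
    f.write(data[:pos] + b"2.0" + data[pos + 3:])
```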

Tin Luu
  • 10
    I strongly advise not to do so. After aplying this trick on my laptop with GeForce 800M results where incorrect. – Marcin Tarka Jul 01 '17 at 10:49
  • 2
    Yes, it's sad to find out that. My GPU is also found to function incorrectly with complex model (strange bugs), while with the same model (same code), it can run smoothly with GPU 3.0 – Tin Luu Jul 06 '17 at 03:11
  • 2
    Thanks guys for reporting back the issues in your experiment above. It helps me to simple let it go and understand that I have to get a new GPU if i want to run TF. :) @TinLuu, please consider editing your answer to reflect issues so that others who might skip these comments do not go that way either! – mayank Sep 27 '17 at 06:12
  • Thanks for your suggestion! I have updated the answer so that one can easily make a decision. – Tin Luu Oct 11 '17 at 09:24

I found out how to install tensorflow-gpu on a compute capability 2.1 NVIDIA GeForce 525M for Python. The trick is simple: use an archived version of TensorFlow. I used 1.9.0; the pip command is `pip install tensorflow-gpu==1.9.0`, and the cuDNN version is 7.4.1.
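
If you try this, you can check with the TF 1.x API whether the installed build actually registers the GPU, for example:

```python
# Quick check (TF 1.x API) that the installed tensorflow-gpu build
# actually registers a usable GPU; prints False if the device is rejected.
import tensorflow as tf

print(tf.__version__)              # e.g. 1.9.0
print(tf.test.is_gpu_available())  # True only if TF can use the GPU
```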