
I'm running a CNN with keras-gpu and tensorflow-gpu on an NVIDIA GeForce RTX 2080 Ti under Windows 10. My machine has an Intel Xeon E5-2683 v4 CPU (2.1 GHz), and I run my code through Jupyter (most recent Anaconda distribution). The output in the command terminal shows that the GPU is being utilized, but the script takes longer than I expect to train/test on the data, and when I open Task Manager the GPU utilization looks very low. Here's an image:

(screenshot: Task Manager showing low GPU utilization)

Note that the CPU isn't being fully utilized either, and nothing else in Task Manager suggests a bottleneck. I don't have an Ethernet connection and am on WiFi (I don't think this affects anything, but I'm not sure given that Jupyter runs through a web browser). I'm training on a lot of data (~128 GB), all of which is loaded into RAM (512 GB). The model I'm running is a fully convolutional neural network (basically a U-Net architecture) with 566,290 trainable parameters. Things I've tried so far:

1. Increasing the batch size from 20 to 10,000 (increases GPU usage from ~3-4% to ~6-7% and greatly decreases training time, as expected).
2. Setting use_multiprocessing to True and increasing the number of workers in model.fit (no effect).
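For reference, a minimal self-contained sketch of the setup described in the attempts above (a tiny stand-in Dense model and random data, not the actual U-Net). One relevant detail: in tf.keras 2.x, the workers and use_multiprocessing arguments to model.fit only apply when the input is a generator or keras.utils.Sequence, not an in-memory NumPy array, which may explain why attempt 2 had no effect:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; shapes and sizes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(1000, 4).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

# Larger batches keep the GPU busier between host-to-device transfers.
history = model.fit(x, y, batch_size=100, epochs=1, verbose=0)
```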

I followed the installation steps on this website: https://www.pugetsystems.com/labs/hpc/The-Best-Way-to-Install-TensorFlow-with-GPU-Support-on-Windows-10-Without-Installing-CUDA-1187/#look-at-the-job-run-with-tensorboard

Note that this installation specifically DOESN'T install CuDNN or CUDA. I've had trouble in the past with getting tensorflow-gpu running with CUDA (although I haven't tried in over 2 years so maybe it's easier with the latest versions) which is why I used this installation method.

Is this most likely the reason why the GPU isn't being fully utilized (no CuDNN/CUDA)? Does it have something to do with the dedicated GPU memory usage being a bottleneck? Or maybe something to do with the network architecture I'm using (number of parameters, etc.)?

Please let me know if you need any more information about my system or the code/data I'm running on to help diagnose. Thanks in advance!

EDIT: I noticed something interesting in the task manager. An epoch with batch size of 10,000 takes around 200s. For the last ~5s of each epoch, the GPU usage increases to ~15-17% (up from ~6-7% for the first 195s of each epoch). Not sure if this helps or indicates there's a bottleneck somewhere besides the GPU.

edited by talonmies · asked by A. LaBella

6 Answers


You definitely need to install CUDA/cuDNN to fully utilize the GPU with TensorFlow. You can double-check that the packages are installed correctly, and that the GPU is available to TensorFlow/Keras, with:

import tensorflow as tf

tf.config.list_physical_devices("GPU")

and the output should look something like [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] if the device is available.

If you've installed CUDA/cuDNN correctly, then all you need to do is change Copy --> Cuda in the Task Manager dropdown menu, which will show the activity of the CUDA cores. The other GPU graphs will not be active when running tf/keras because there is no video encoding/decoding etc. to be done; the framework is simply using the CUDA cores on the GPU, so (when monitoring from Task Manager) the only way to track GPU usage is to look at the CUDA utilization.

(screenshot: Task Manager with the Cuda graph selected)

Taylr Cawte
  • What if my Cuda is at 90% but my GPU is at 6%? How is that possible? – Rodrigo Ruiz Feb 06 '21 at 21:00
  • @RodrigoRuiz CUDA is a parallel computing platform that allows a GPU to be used for general-purpose processing. The GPU 'tab' in the task manager shows the usage of the GPU for graphics processing, not general processing. Since there is no graphics processing being done, the task manager thinks overall GPU usage is low; by switching to the CUDA dropdown you can see that the majority of your cores will be utilized (if tf/keras is installed correctly). – Taylr Cawte Feb 07 '21 at 01:37
  • Thank you! So if my Cuda chart is at 90%, does that mean my GPU is working full-time on my network training? – Rodrigo Ruiz Feb 08 '21 at 21:19
  • @RodrigoRuiz It means that 90% of your cuda cores are being used, which, if you are training your network, probably means it's working full-time! – Taylr Cawte Feb 09 '21 at 03:26
  • I was under the mistaken impression that my GPU wasn't being utilized because my "Cuda" dropdown was hidden (with "Copy" in its place). Thanks for clearing this up. – Mike McCartin Aug 05 '21 at 16:09

I would first start by running one of the short "tests" to ensure TensorFlow is utilizing the GPU. For example, I prefer @Salvador Dali's answer to that linked question:

import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print (sess.run(c))

If TensorFlow is indeed using your GPU, you should see the result of the matrix multiplication printed; otherwise you will see a fairly long stack trace stating that "gpu:0" cannot be found.
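Note that the snippet above uses the TensorFlow 1.x API; tf.Session was removed in TensorFlow 2.x. A rough 2.x equivalent (a sketch, with a CPU fallback added so it runs anywhere) is:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
device = "/GPU:0" if gpus else "/CPU:0"  # fall back so the check runs on any machine

with tf.device(device):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)

print("GPUs visible:", gpus)
print(c.numpy())  # [[22. 28.] [49. 64.]]
```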


If this all works well, then I would recommend utilizing NVIDIA's nvidia-smi utility. It is available on both Windows and Linux and, AFAIK, installs with the NVIDIA driver. On a Windows system it is located at

C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe

Open a Windows command prompt and navigate to that directory, then run

nvidia-smi.exe -l 3

This will show you a screen like the following, which updates every three seconds.

(screenshot: nvidia-smi output)

Here we can see various information about the state of the GPUs and what they are doing. Of specific interest in this case are the "Pwr: Usage/Cap" and "Volatile GPU-Util" columns. If your model is indeed using the/a GPU, these columns should increase "instantaneously" once you start training the model.

You will most likely also see an increase in fan speed and temperature unless you have a very good cooling solution. At the bottom of the printout you should see a process with a name akin to "python" or "jupyter" running.


If this fails to explain the slow training times, then I would surmise the issue lies with the model and code itself. And I think that is actually the case here: the Windows Task Manager listing shows "Dedicated GPU Memory Usage" pegged at basically its maximum.

KDecker

If you have tried @KDecker's and @OverLordGoldDragon's solutions and low GPU usage is still there, I would suggest investigating your data pipeline next. The following two figures are from the official TensorFlow data-performance guide; they illustrate well how the data pipeline affects GPU efficiency.

(figures from the tf.data performance guide: a naive sequential input pipeline vs. a parallel/prefetched pipeline)

As you can see, preparing data in parallel with training increases GPU usage. In this situation, CPU processing becomes the bottleneck. You need a mechanism to hide the latency of preprocessing, such as changing the number of processes or the buffer size, so that the throughput of the CPU matches the throughput of the GPU. That way the GPU will be maximally utilized.
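In tf.data terms, the parallel/prefetched pipeline from the guide looks roughly like the sketch below (tf.data.AUTOTUNE requires TF >= 2.4; on earlier versions use tf.data.experimental.AUTOTUNE; the preprocess function is a stand-in for real CPU-side work):

```python
import tensorflow as tf

def preprocess(x):
    # Stand-in for real per-example CPU preprocessing.
    return tf.cast(x, tf.float32) * 2.0

ds = (tf.data.Dataset.range(10)
      .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # preprocess in parallel on CPU
      .batch(4)
      .prefetch(tf.data.AUTOTUNE))  # prepare the next batch while the GPU trains on the current one

batches = [b.numpy().tolist() for b in ds]
```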

Take a look at Tensorpack; it has detailed tutorials on how to speed up your input data pipeline.

zihaozhihao

There seems to have been a change to the installation method you referenced: https://www.pugetsystems.com/labs/hpc/The-Best-Way-to-Install-TensorFlow-with-GPU-Support-on-Windows-10-Without-Installing-CUDA-1187 It is now much easier and should eliminate the problems you are experiencing.

Important edit: You don't seem to be looking at the actual compute usage of the GPU. Look at the attached image:

(screenshot: Task Manager with the GPU's compute graph selected)

Elegant Code

Everything works as expected; your dedicated memory usage is nearly maxed, and neither TensorFlow nor CUDA can use shared memory -- see this answer.

If your GPU runs OOM, the only remedies are to get a GPU with more dedicated memory, to decrease the model size, or to use the script below to prevent TensorFlow from assigning redundant resources to the GPU (which it does tend to do):

## LIMIT GPU USAGE (TensorFlow 1.x API)
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # don't pre-allocate memory; allocate as needed
config.gpu_options.per_process_gpu_memory_fraction = 0.95  # cap the fraction of GPU memory that can be allocated
K.tensorflow_backend.set_session(tf.Session(config=config))  # create a session with the above settings
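Note that the script above uses the TensorFlow 1.x API; in TF 2.x, ConfigProto and Session are gone. The rough 2.x equivalent (a sketch using the tf.config API; it must run before any GPU is first used) is:

```python
import tensorflow as tf

# Enable on-demand allocation instead of grabbing nearly all GPU memory up front.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```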

The unusual increased usage you observe may be shared memory being accessed temporarily after other available resources are exhausted, especially with use_multiprocessing=True -- but I'm unsure; it could have other causes.

OverLordGoldDragon

Read the following two pages; they will give you an idea of how to properly set things up with the GPU: https://medium.com/@kegui/how-do-i-know-i-am-running-keras-model-on-gpu-a9cdcc24f986

https://datascience.stackexchange.com/questions/41956/how-to-make-my-neural-netwok-run-on-gpu-instead-of-cpu

maddy23
  • Thanks, but I guess what my problem may boil down to is, will the GPU automatically not be used/be slower if I don't have CUDA or CuDNN installed (I haven't been able to find a solid answer to this anywhere)? The GPU is being used by keras and tensorflow, I'm just not sure why it's not being used fully/properly. – A. LaBella Oct 08 '19 at 16:21
  • Try running a different model with PyTorch. – maddy23 Oct 08 '19 at 16:28