
I am currently training a CNN on Paperspace Gradient Notebooks (Python 3.8.10, TensorFlow 2.7.0).

Training takes surprisingly long: >200% of the CPU is utilized, but only 15-20% of the GPU. TensorFlow does seem to recognize the GPU (screenshot of the device listing omitted).

I also followed their template and set up the training with tf.device():

try:
    with tf.device('/device:GPU:0'):
        model_Sezer.fit(
            train_dataset,
            epochs=100,
            validation_data=validation_dataset,
            callbacks=[tensorboard_callback, checkpoint_Accuracy],
            class_weight=class_weight,
        )
except RuntimeError as e:
    print(e)

Does anyone know how I can fully train on the GPU?


1 Answer


Make sure that you have an NVIDIA GPU that supports CUDA and that the CUDA Toolkit is installed. Read this article for more information about the prerequisites.
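As a quick sanity check (a minimal sketch, not part of the original answer), you can confirm that your TensorFlow build was compiled with CUDA and that it actually sees the GPU:

import tensorflow as tf

# True only if this TensorFlow build was compiled with CUDA support
print(tf.test.is_built_with_cuda())
# should list at least one PhysicalDevice with device_type='GPU'
print(tf.config.list_physical_devices('GPU'))

If the GPU is listed, proceed with the setup below.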

import os
import tensorflow as tf

# suppress info and warnings outputted by tensorflow
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
# enable memory growth for gpu devices
# source: https://stackoverflow.com/a/55541385/8849692
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if gpu_devices:
    for device in gpu_devices:
        tf.config.experimental.set_memory_growth(device, True)

This is what I use in all of my projects. It needs to go at the top of your main file, before any other TensorFlow calls (memory growth must be set before the GPU is initialized), and should just work if TensorFlow can find the GPU device(s). There is no need to wrap model.fit() in tf.device().

Here is a minimal example of how it fits together (the model and data below are placeholders, not taken from a real project):
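import os
import tensorflow as tf

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

# enable memory growth before any model is built
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
for device in gpu_devices:
    tf.config.experimental.set_memory_growth(device, True)

# placeholder CNN and dataset -- replace with your own model and data
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # add channel dim, scale to [0, 1]

# runs on the GPU automatically when one is visible
model.fit(x_train, y_train, batch_size=128, epochs=1)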