19

In TensorFlow 1.X with standalone Keras 2.X, I used to switch between training on the GPU and running inference on the CPU (which for some reason is much faster for my RNN models) with the following snippet:

from multiprocessing import cpu_count

import tensorflow as tf
from keras import backend as k

k.clear_session()

def set_session(gpus: int = 0):
    num_cores = cpu_count()

    config = tf.ConfigProto(
        intra_op_parallelism_threads=num_cores,
        inter_op_parallelism_threads=num_cores,
        allow_soft_placement=True,
        device_count={"CPU": 1, "GPU": gpus},
    )

    session = tf.Session(config=config)
    k.set_session(session)

This ConfigProto functionality is no longer available in TensorFlow 2.0 (where I'm using the integrated tensorflow.keras). At first it is possible to run tf.config.experimental.set_visible_devices() in order to e.g. disable the GPU, but any subsequent call to set_visible_devices results in RuntimeError: Visible devices cannot be modified after being initialized. Is there a way to re-initialize the visible devices, or is there another way of switching which devices are available?
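
For reference, this is roughly what I mean (a minimal sketch; the second call only fails once some op has initialized the devices):

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices([], 'GPU')  # hide the GPU: works right after import

_ = tf.constant(0)  # any op initializes the devices

tf.config.experimental.set_visible_devices(gpus, 'GPU')
# RuntimeError: Visible devices cannot be modified after being initialized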

adamconkey
valend.in

3 Answers

31

You can use tf.device to explicitly set which device you want to use. For example:

import tensorflow as tf    

model = tf.keras.Model(...)

# Run training on GPU
with tf.device('/gpu:0'):
    model.fit(...)

# Run inference on CPU
with tf.device('/cpu:0'):
    model.predict(...)

If you only have one CPU and one GPU, the names used above should work. Otherwise, device_lib.list_local_devices() can give you a list of your devices. This post gives a nice function for listing just the names, which I adapt here to also show CPUs:

from tensorflow.python.client import device_lib

def get_available_devices():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos
            if x.device_type in ('GPU', 'CPU')]
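
For example (the output is illustrative; device names depend on your machine):

print(get_available_devices())
# e.g. ['/device:CPU:0', '/device:GPU:0']
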
adamconkey
  • Do you know whether TensorFlow runs on the GPU or the CPU by default? – cloudscomputes Aug 31 '21 at 15:26
  • Is there a way to set that for the whole scope of the script, i.e. without the with statement? Something like `tf.set_device('/cpu:0')`? – Lucas Azevedo Dec 07 '21 at 01:01
  • 1
    This could be a practical solution, but note that as soon as you enter `tf.device('/cpu:0'):`, TF will still allocate GPU memory. I think this is a limitation of TF. – Jongwook Choi Feb 15 '22 at 02:33
  • Assuming you want to run on CPU only, you can get around TensorFlow's habit of allocating GPU memory by setting the environment variable CUDA_VISIBLE_DEVICES to the empty string before initializing TensorFlow (see the sketch after these comments). – pygosceles Aug 05 '22 at 21:51
  • @LucasAzevedo if you want to set the device for the whole scope you can use: ```physical_devices = tf.config.list_physical_devices('GPU'); tf.config.set_visible_devices(physical_devices[0], 'GPU')```. This has been available since TensorFlow 2.1. – BeCurious Jan 12 '23 at 16:59
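
A minimal sketch of the environment-variable approach mentioned in the comments (assuming you want a CPU-only run for the whole process):

import os

# The variable must be set before TensorFlow initializes the GPU runtime,
# i.e. before tensorflow is imported in this process.
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs

import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # -> []
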
2

Could using tf.device help you?

With that, you can place specific operations on either the CPU or the GPU.
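
A minimal sketch of that idea (the tensors here are just stand-ins for your own ops; TF 2.x eager execution assumed):

import tensorflow as tf

# Pin individual ops to the CPU with the device context manager
with tf.device('/cpu:0'):
    a = tf.random.uniform((1000, 1000))
    b = tf.random.uniform((1000, 1000))
    c = tf.matmul(a, b)  # this matmul runs on the CPU

print(c.device)  # e.g. /job:localhost/replica:0/task:0/device:CPU:0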

AlexisBRENON
0

I would just restart the kernel; this worked for me.

Lowity