I have Keras installed with the Tensorflow backend and CUDA. I'd like to sometimes force Keras to use the CPU on demand. Can this be done without, say, installing a separate CPU-only Tensorflow in a virtual environment? If so, how? If the backend were Theano, the flags could be set, but I have not heard of Tensorflow flags accessible via Keras.
8 Answers
If you want to force Keras to use the CPU:
Way 1
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
before Keras / Tensorflow is imported.
Way 2
Run your script as
$ CUDA_VISIBLE_DEVICES="" ./your_keras_code.py
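A quick way to confirm the variable took effect (my own sketch, assuming TF 2.x for the check; on TF 1.x you could inspect `device_lib.list_local_devices()` from `tensorflow.python.client` instead):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # must be set before tensorflow is imported

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # -> [] when no GPU is visible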
- Didn't work for me (Keras 2, Windows) - I had to set `os.environ['CUDA_VISIBLE_DEVICES'] = '-1'` as in an answer below. – desertnaut Oct 11 '17 at 12:13
- What issue is #152 referring to? A link would be nice. – Martin R. Nov 29 '17 at 18:58
- I don't see any reference to `CUDA_DEVICE_ORDER=PCI_BUS_ID` in issue #152. – Thawn Nov 04 '18 at 09:23
- I am in an ipython3 terminal and I've set `import os; os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"; os.environ["CUDA_VISIBLE_DEVICES"] = ""` - now how do I "undo" this? I would like Keras to use the GPU again. – Gabriel Cretin May 20 '19 at 09:23
- @MartinThoma I mean without having to leave the ipython; I had many things running in it, so I would like to set it back to a "GPU enabled" environment. I tried deleting the keys in the os.environ dictionary, in vain. – Gabriel Cretin May 22 '19 at 09:14
- I see. You will have to reload tensorflow / keras after changing the environment variables. – Martin Thoma May 22 '19 at 11:35
- As a note: I had to perform these lines **BEFORE** I declared my keras model. If you do it afterwards it will use the GPU by default. – Greg Jul 21 '20 at 21:27
This worked for me (win10); place it before you import keras:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

- With Win, this forces TF to use CPU and ignore any GPU. Didn't have luck with 0 or blank, but -1 seemed to do the trick. – Neuraleptic May 16 '18 at 19:54
- Worked on Win10 x64 for me. I also didn't have any luck with 0 or blank, and only -1 worked. – Cypher Aug 11 '18 at 06:55
- I have two GPUs in my machine; setting 'CUDA_VISIBLE_DEVICES' = 0/1 refers to the physical ID of the available GPUs. Setting it to -1 uses the CPU. – Prashanth Muthurajaiah Nov 28 '19 at 09:08
A rather separable way of doing this is to use
import tensorflow as tf
from keras import backend as K

num_cores = 4
GPU = False  # set to True to allow GPU usage
CPU = True   # set to True to force CPU-only execution

if GPU:
    num_GPU = 1
    num_CPU = 1
if CPU:
    num_CPU = 1
    num_GPU = 0

config = tf.ConfigProto(intra_op_parallelism_threads=num_cores,
                        inter_op_parallelism_threads=num_cores,
                        allow_soft_placement=True,
                        device_count={'CPU': num_CPU, 'GPU': num_GPU})
session = tf.Session(config=config)
K.set_session(session)
Here, with the booleans `GPU` and `CPU`, we indicate whether we would like to run our code on the GPU or CPU by rigidly defining the number of GPUs and CPUs the Tensorflow session is allowed to access. The variables `num_GPU` and `num_CPU` define this value, and `num_cores` then sets the number of CPU cores available for usage via `intra_op_parallelism_threads` and `inter_op_parallelism_threads`.
The `intra_op_parallelism_threads` variable dictates the number of threads a parallel operation in a single node of the computation graph is allowed to use (intra), while the `inter_op_parallelism_threads` variable defines the number of threads accessible for parallel operations across the nodes of the computation graph (inter).
`allow_soft_placement` allows operations to be run on the CPU if any of the following criteria are met:

- there is no GPU implementation for the operation
- there are no GPU devices known or registered
- there is a need to co-locate with other inputs from the CPU

All of this is executed in the constructor of my class before any other operations, and is completely separable from any model or other code I use.
Note: this requires `tensorflow-gpu` and `cuda`/`cudnn` to be installed, because the option is given to use a GPU.
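For what it's worth, on TF 2.x (where `ConfigProto` and sessions are gone) a roughly equivalent setup can be sketched with `tf.config`; this is my adaptation, not part of the original answer:

import tensorflow as tf

num_cores = 4

# Thread pools must be configured before TensorFlow executes any op.
tf.config.threading.set_intra_op_parallelism_threads(num_cores)
tf.config.threading.set_inter_op_parallelism_threads(num_cores)

# Analogue of device_count={'GPU': 0}: hide all GPUs from the runtime.
tf.config.set_visible_devices([], 'GPU')

# Analogue of allow_soft_placement=True.
tf.config.set_soft_device_placement(True)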
- This is a nice solution, as just defining "CUDA_VISIBLE_DEVICES" causes CUDA_ERROR_NO_DEVICE followed by a lot of diagnostics before continuing on to executing on the CPU. Though... both methods work! – jsfa11 Mar 22 '18 at 17:24
- This is the only consistent solution that works for me. Keep coming back to it. – Authman Apatira Dec 22 '18 at 19:17
- Can you please explain what the other parameters mean? Like `allow_soft_placement`, `intra_op_parallelism_threads`, `inter_op_parallelism_threads`. – Nagabhushan S N Feb 02 '19 at 10:19
- Do the `inter`/`intra_op_parallelism_threads` refer to CPU or GPU operations? – bluesummers Mar 16 '19 at 10:12
- If I want to force CPU here, will this ensure that no memory is allocated on the GPU? Because I need to run a separate model on the CPU, because the GPU will be occupied in the same python context. I was wondering whether to use a subprocess based on the first answer. But if this works, it will be a lot easier to just fork and create a new session for the subprocess? – CMCDragonkai Oct 27 '20 at 01:46
Just import tensorflow and use keras, it's that easy.
import tensorflow as tf
# your code here
with tf.device('/gpu:0'):
    model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)
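To force the CPU instead, the same pattern works with `/cpu:0`, and (per matt525252's comment below) it is safest to define and compile the model under the same scope. A self-contained sketch with toy data (the names and shapes are illustrative, not from the answer):

import numpy as np
import tensorflow as tf
from tensorflow import keras

X = np.random.rand(256, 20).astype('float32')
y = np.random.randint(0, 2, size=(256,))

with tf.device('/cpu:0'):
    # Define, compile, and fit under one scope so all ops are pinned to the CPU.
    model = keras.Sequential([
        keras.layers.Dense(32, activation='relu', input_shape=(20,)),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    model.fit(X, y, epochs=2, batch_size=128)

Note that, as the comments below point out, `tf.device` alone does not stop TensorFlow from reserving GPU memory; hiding the GPU via `CUDA_VISIBLE_DEVICES` does.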

- When I set the `tf.device('/cpu:0')`, I could still see memory being allocated to python later with `nvidia-smi`. – CMCDragonkai Apr 27 '18 at 03:04
- Doesn't seem to work for me either; it still uses the gpu when I set it to use the cpu. – liyuan Oct 15 '18 at 02:39
- Shouldn't the model definition and compile also be executed under the same `with`? – matt525252 Apr 13 '20 at 15:06
As per the keras tutorial, you can simply use the same `tf.device` scope as in regular tensorflow:
import tensorflow as tf
from keras.layers import LSTM

with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on GPU:0

with tf.device('/cpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on CPU:0

- How can this be done within Keras with Tensorflow as a backend, rather than using Tensorflow to call Keras layers? – mikal94305 Nov 19 '16 at 22:33
- I don't understand your question. The code inside `with` can be any Keras code. – sygi Nov 19 '16 at 23:06
- How can this be done with a trained model loaded from disk? I am currently training on gpu but want to verify afterwards on CPU. – ghostbust555 Dec 10 '16 at 06:11
- I was able to switch training from gpu to cpu in the middle of training by using the above mentioned method, where I save the model in between with model.save and then reload it with a different tf.device using keras.models.load_model (sketched below). The same applies if you want to train and then predict on a different device. – TheLoneNut Oct 05 '17 at 16:08
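A minimal sketch of that save/reload workflow (the file name and the `build_model` helper are illustrative assumptions, not from the thread):

import numpy as np
import tensorflow as tf
from tensorflow import keras

X = np.random.rand(128, 10).astype('float32')
y = np.random.randint(0, 2, size=(128,))

def build_model():
    m = keras.Sequential([
        keras.layers.Dense(8, activation='relu', input_shape=(10,)),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    m.compile(optimizer='adam', loss='binary_crossentropy')
    return m

# Train on the GPU, checkpoint to disk...
with tf.device('/gpu:0'):
    model = build_model()
    model.fit(X, y, epochs=2)
model.save('checkpoint.h5')  # hypothetical path

# ...then reload and continue (or predict) on the CPU.
with tf.device('/cpu:0'):
    model = keras.models.load_model('checkpoint.h5')
    model.fit(X, y, epochs=2)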
I just spent some time figuring this out. Thoma's answer is not complete. Say your program is `test.py`, and you want to use gpu0 to run it while keeping the other gpus free. You should write:
CUDA_VISIBLE_DEVICES=0 python test.py
Notice it's `DEVICES`, not `DEVICE`.

For people working in PyCharm, and for forcing CPU, you can add the following line in the Run/Debug configuration, under Environment variables:
<OTHER_ENVIRONMENT_VARIABLES>;CUDA_VISIBLE_DEVICES=-1

To disable running on the GPU (tensorflow 2.9), use `tf.config.set_visible_devices([], 'GPU')`. The empty list argument says that there will be no GPUs visible for this run.
Do this early in your code, e.g. before Keras initializes the tf configuration.
See the docs: https://www.tensorflow.org/versions/r2.9/api_docs/python/tf/config/set_visible_devices
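A short sketch of how this composes with re-enabling the GPU later (my illustration; note that visibility can only be changed before TensorFlow initializes its devices):

import tensorflow as tf

# Remember the physical GPUs so visibility can be restored.
gpus = tf.config.list_physical_devices('GPU')

tf.config.set_visible_devices([], 'GPU')      # force CPU-only
print(tf.config.get_visible_devices('GPU'))   # -> []

# Restoring visibility works only while no op/model has run yet;
# afterwards set_visible_devices raises a RuntimeError.
tf.config.set_visible_devices(gpus, 'GPU')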
