441

I have installed TensorFlow on my Ubuntu 16.04 machine using the second answer here, with Ubuntu's built-in apt CUDA installation.

Now my question is: how can I test whether TensorFlow is really using the GPU? I have a GTX 960M GPU. When I import TensorFlow, this is the output:

I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally

Is this output enough to check whether TensorFlow is using the GPU?

Timbus Calin
Tamim Addari

31 Answers

437

No, I don't think "open CUDA library" is enough to tell, because different nodes of the graph may be on different devices.

When using TensorFlow 2:

print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))

For TensorFlow 1, to find out which device is used, you can enable device placement logging like this:

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

Check your console for this type of output.
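
If you are on TensorFlow 2.x, where there is no Session, a minimal sketch of the same idea (a visible-GPU check plus device placement logging) would be:

import tensorflow as tf

# Log the device every op is placed on.
tf.debugging.set_log_device_placement(True)

print("GPUs visible:", tf.config.list_physical_devices('GPU'))

# A tiny computation; the log should show it on .../device:GPU:0 if a GPU is used.
a = tf.random.normal([1000, 1000])
b = tf.matmul(a, a)
print(b.shape)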

Martin
Yao Zhang

  • I tried this and it prints absolutely nothing. Any idea why that might be? – Qubix Feb 01 '17 at 08:01
  • Did you do it on a jupyter notebook? – Tamim Addari Mar 17 '17 at 09:51
  • Same as @Qubix, it doesn't print anything. I'm executing it in a Jupyter notebook. I tried to print sess but I got nothing relevant. – richar8086 Apr 02 '17 at 18:51
  • The output may be produced on the console from where you ran the Jupyter Notebook. – musically_ut Apr 09 '17 at 10:31
  • Qubix and richar8086, it prints on the terminal where you started jupyter notebook – wafflecat Oct 05 '17 at 06:50
  • It printed 2018-02-10 05:21:06.855163: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA Device mapping: no known devices. 2018-02-10 05:21:06.856681: I tensorflow/core/common_runtime/direct_session.cc:297] Device mapping. Is it GPU enabled? – user2478236 Feb 10 '18 at 05:22
  • AttributeError: module 'tensorflow' has no attribute 'Session' – Rocketq May 27 '18 at 20:04
  • Gee whiz. You'd think this would be a simpler query. – ijoseph Oct 31 '18 at 18:08
  • Can anybody tell me why the `device 0` mapping information appeared twice? What causes this? – Bs He Nov 01 '18 at 22:05
  • Does this have to be on Ubuntu to work? I am using a Windows device. I have noticed that when I run TensorFlow the GPU is churning hard, but whenever I run this command I do not see any results. – Cameron Dec 17 '18 at 10:38
  • @Cameron That seems weird. It works on my computer (Windows). You have tensorflow-gpu, right? – Axiumin_ Jan 02 '19 at 20:08
  • Can we get an updated answer for Tensorflow V2 (where tf.Sessions are not supported)? – Roy Jul 03 '19 at 12:51
  • @iyop45 For tensorflow V2, the command is a bit modified: `sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))` – Vandan Revanur Mar 17 '20 at 18:29
  • @musically jupyter/ipython is not the only way. You can also run it from the console with `python -c "import tensorflow as tf; tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))"` Just a note: the gpu variant has been included in the main package for some time now. – Cadoiz Jul 12 '20 at 03:31
  • The print ignores XLA_GPUs – Nir Nov 03 '21 at 12:58
302

Apart from using sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)), which is outlined in other answers as well as in the official TensorFlow documentation, you can try to assign a computation to the GPU and see whether you get an error.

import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print (sess.run(c))

Here

  • "/cpu:0": The CPU of your machine.
  • "/gpu:0": The GPU of your machine, if you have one.

If you have a GPU and can use it, you will see the result. Otherwise you will see an error with a long stack trace. In the end you will have something like this:

Cannot assign a device to node 'MatMul': Could not satisfy explicit device specification '/device:GPU:0' because no devices matching that specification are registered in this process


Recently a few helpful functions appeared in TF:

You can also check for available devices in the session:

with tf.Session() as sess:
  devices = sess.list_devices()

devices will contain something like:

[_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:CPU:0, CPU, -1, 4670268618893924978),
 _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 6127825144471676437),
 _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:XLA_GPU:0, XLA_GPU, 17179869184, 16148453971365832732),
 _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 10003582050679337480),
 _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 5678397037036584928)]
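
In TensorFlow 2.x there is no Session, but you can get a similar listing through the tf.config API; a short sketch:

import tensorflow as tf

# List every physical device, then just the logical GPU devices.
print(tf.config.list_physical_devices())
print(tf.config.list_logical_devices('GPU'))
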
aspiring_sarge
Salvador Dali

  • Result: [[ 22. 28.] [ 49. 64.]] – George Pligoropoulos Jun 05 '17 at 16:46
  • @GeorgePligor the result is not really important here. Either you have a result and the GPU was used, or you have an error, which means that it was not used – Salvador Dali Jun 05 '17 at 18:23
  • This did not work for me. I ran this inside of my Docker container that is executed by nvidia-docker etc. However I get no error and the CPU is the one that does the work. I upped the matrices a bit (10k*10k) to ensure it calculates for a while. CPU util went up to 100% but the GPU stayed cool as always. – pascalwhoop Dec 13 '17 at 19:00
  • I got the "no devices matching" error when running it in the console. In an IDE like PyCharm there is no error. I guess it's related to the Session I used, which is different in the console. – cn123h Feb 24 '18 at 13:15
  • Easy to understand. If a GPU is available it will print something like `Found device 0 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582 pciBusID: 0000:02:00.0 totalMemory: 10.92GiB freeMemory: 10.76GiB` – Leoli Aug 06 '18 at 04:04
  • This should be the accepted answer. Practical solutions. – AtilioA Sep 29 '19 at 18:19
  • If you're using a newer version of tf, you'll need something like `with tf.device('/device:XLA_GPU:0')` instead – eqzx Nov 14 '19 at 01:43
  • Doesn't seem to work for tensorflow 2.1 at all, even after replacing `Session` with `tf.compat.v1.Session()` – Zarathustra Apr 23 '20 at 14:44
  • 2020-06-12 01:13:11.514723: I tensorflow/core/common_runtime/placer.cc:54] b: (Const)/job:localhost/replica:0/task:0/device:GPU:0 [[22. 28.] [49. 64.]] – kamran kausar Jun 11 '20 at 19:50
  • tf.test.is_gpu_available() seems to test a GPU computation, and as hard as it is to accept the result `False` (after many attempts to install), it was correct for me. Side note: from my experience, it's easier to install a TF 2.x version with GPU support and upgrade the deprecated usages of 1.x than to get a 1.x install with GPU to work. – Caranown Apr 01 '21 at 11:39
  • `tf.test.is_gpu_available` is deprecated. Use `tf.config.list_physical_devices('GPU')` instead. – topher217 Mar 18 '22 at 08:23
204

The following piece of code should give you all devices available to TensorFlow:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

Sample Output

[name: "/cpu:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 4402277519343584096,

name: "/gpu:0" device_type: "GPU" memory_limit: 6772842168 locality { bus_id: 1 } incarnation: 7471795903849088328 physical_device_desc: "device: 0, name: GeForce GTX 1070, pci bus id: 0000:05:00.0" ]

Sheraz

  • And if this command does not return any entry with "GPU", does it mean my machine simply does not have a GPU, or that tensorflow is not able to locate it? – mercury0114 Dec 16 '18 at 17:54
  • @mercury0114 it may be either. For example, you may have a gpu but not have tensorflow-gpu properly installed. – jimijazz May 07 '19 at 15:41
  • I disagree, this does **not** answer the question: it's not about devices _available_ but devices **used**. And that can be an entirely different story! (e.g. TF will only use 1 GPU by default.) – Mayou36 May 09 '19 at 10:00
  • name: "/device:GPU:0" device_type: "GPU" memory_limit: 10711446324 locality { bus_id: 1 links { }} incarnation: 17935632445266485019 physical_device_desc: "device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5"] – kamran kausar Jun 11 '20 at 19:53
137

Tensorflow 2.0

Sessions are no longer used in 2.0. Instead, one can use tf.test.is_gpu_available:

import tensorflow as tf

assert tf.test.is_gpu_available()
assert tf.test.is_built_with_cuda()

If you get an error, you need to check your installation.
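
Note that tf.test.is_gpu_available has since been deprecated in later 2.x releases; an equivalent check with the newer API might look like this (a sketch):

import tensorflow as tf

# Fails if no GPU is visible or the build has no CUDA support.
assert len(tf.config.list_physical_devices('GPU')) > 0
assert tf.test.is_built_with_cuda()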

Mateen Ulhaq
ma3oun
123

I think there is an easier way to achieve this.

import tensorflow as tf
if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("Please install GPU version of TF")

It usually prints something like:

Default GPU Device: /device:GPU:0

This seems easier to me rather than those verbose logs.

Edit: This was tested for TF 1.x versions. I never had a chance to try TF 2.0 or above, so keep that in mind.

Ishan Bhatt
38

UPDATE FOR TENSORFLOW >= 2.1

The recommended way in which to check if TensorFlow is using GPU is the following:

tf.config.list_physical_devices('GPU') 

As of TensorFlow 2.1, tf.test.gpu_device_name() has been deprecated in favour of the aforementioned.

Then, in the terminal you can use nvidia-smi to check how much GPU memory has been allotted; at the same time, watch -n K nvidia-smi would tell you every K seconds how much memory you are using (you may want to use K = 1 for real time).

If you have multiple GPUs and you want to use multiple networks, each one on a separate GPU, you can use:

 with tf.device('/GPU:0'):
      neural_network_1 = initialize_network_1()
 with tf.device('/GPU:1'):
      neural_network_2 = initialize_network_2()
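
Keep in mind that, by default, TensorFlow pre-allocates most of the GPU memory, so nvidia-smi can show nearly the whole card as used regardless of the actual workload. Enabling memory growth makes the readings reflect real usage (a sketch, assuming TF 2.x):

import tensorflow as tf

# Allocate GPU memory on demand so nvidia-smi shows actual usage;
# this must run before any GPU has been initialized.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
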
Timbus Calin
32

Ok, first launch an ipython shell from the terminal and import TensorFlow:

$ ipython --pylab
Python 3.6.5 |Anaconda custom (64-bit)| (default, Apr 29 2018, 16:14:56) 
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.
Using matplotlib backend: Qt5Agg

In [1]: import tensorflow as tf

Now, we can watch the GPU memory usage in a console using the following command:

# realtime update for every 2s
$ watch -n 2 nvidia-smi

Since we've only imported TensorFlow but have not used any GPU yet, the usage stats will be:

(screenshot: tf non-gpu usage)

Notice how the GPU memory usage is very low (~700 MB); sometimes the GPU memory usage might even be as low as 0 MB.


Now, let's load the GPU in our code. As indicated in tf documentation, do:

In [2]: sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

Now, the watch stats should show an updated GPU usage memory as below:

(screenshot: tf gpu-watch)

Observe how our Python process from the ipython shell is now using ~7 GB of the GPU memory.


P.S. You can continue watching these stats as the code is running, to see how intense the GPU usage is over time.
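
Much of that ~7 GB comes from TF 1.x's default behaviour of grabbing nearly all GPU memory up front. If you want the watch output to track what the code actually needs, you can enable allow_growth (a sketch for the TF 1.x session used above):

import tensorflow as tf

# TF 1.x sketch: allocate GPU memory on demand instead of almost all of it,
# so `watch -n 2 nvidia-smi` tracks what the code really uses.
config = tf.ConfigProto(log_device_placement=True)
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)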

kmario23
30

This will also confirm that TensorFlow is using the GPU while training.

Code

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

Output

I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GT 730
major: 3 minor: 5 memoryClockRate (GHz) 0.9015
pciBusID 0000:01:00.0
Total memory: 1.98GiB
Free memory: 1.72GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 730, pci bus id: 0000:01:00.0)
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce GT 730, pci bus id: 0000:01:00.0
I tensorflow/core/common_runtime/direct_session.cc:255] Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce GT 730, pci bus id: 0000:01:00.0
Nander Speerstra
himanshurobo

  • Please add a little explanation as to _why_ your answer is working (what does `log_device_placement` do and how does one see CPU vs. GPU in the output?). That will improve the quality of your answer! – Nander Speerstra Dec 06 '16 at 07:40
30

In addition to the other answers, the following should help you make sure that your version of TensorFlow includes GPU support.

import tensorflow as tf
print(tf.test.is_built_with_cuda())
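
Keep in mind that this only tells you whether the binary was built with CUDA, not whether a GPU is actually visible at runtime; a sketch that checks both (assuming a TF 2.x install):

import tensorflow as tf

# Build-time check: was this binary compiled with CUDA support?
print("Built with CUDA:", tf.test.is_built_with_cuda())
# Runtime check: can TensorFlow actually see a GPU right now?
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))
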
karaspd

  • Warning: that tells you if TensorFlow is compiled with GPU support, not whether the GPU is being used. (If the drivers are not installed properly, for example, then the CPU is used, even if "is_built_with_cuda()" is true.) – Ricardo Magalhães Cruz Sep 06 '18 at 21:18
19

This should give the list of devices available for Tensorflow (under Py-3.6):

import tensorflow as tf

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.list_devices())
# _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 268435456)
f0nzie
14

I prefer to use nvidia-smi to monitor GPU usage. If it goes up significantly when you start your program, it's a strong sign that TensorFlow is using the GPU.
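
If you want to automate that check, one option (a sketch, assuming nvidia-smi is on your PATH) is to ask nvidia-smi which process IDs currently hold GPU compute contexts and look for your own PID:

import os
import subprocess

# Ask nvidia-smi for the PIDs of processes with GPU compute contexts
# and check whether the current Python process is among them.
out = subprocess.check_output(
    ["nvidia-smi", "--query-compute-apps=pid", "--format=csv,noheader"]
).decode()
pids = {int(tok) for tok in out.split() if tok.isdigit()}
print("This process is using the GPU:", os.getpid() in pids)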

scott huang
9

With the recent updates of TensorFlow, you can check it as follows:

tf.test.is_gpu_available( cuda_only=False, min_cuda_compute_capability=None)

This will return True if a GPU is available to TensorFlow, and False otherwise.

If you want the device name, you can type: tf.test.gpu_device_name(). Get more details from here

smerllo
9

With TensorFlow >= 2.0:

import tensorflow as tf
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))


Timbus Calin
leplandelaville
8

Run the following in Jupyter,

import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

If you've set up your environment properly, you'll get the following output in the terminal where you ran "jupyter notebook",

2017-10-05 14:51:46.335323: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Quadro K620, pci bus id: 0000:02:00.0)
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Quadro K620, pci bus id: 0000:02:00.0
2017-10-05 14:51:46.337418: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\direct_session.cc:265] Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Quadro K620, pci bus id: 0000:02:00.0

You can see here I'm using TensorFlow with an Nvidia Quadro K620.

wafflecat
8

I find just querying the GPU from the command line is easiest:

nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.98                 Driver Version: 384.98                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980 Ti  Off  | 00000000:02:00.0  On |                  N/A |
| 22%   33C    P8    13W / 250W |   5817MiB /  6075MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1060      G   /usr/lib/xorg/Xorg                            53MiB |
|    0     25177      C   python                                      5751MiB |
+-----------------------------------------------------------------------------+

If your training runs as a background process, the PID from jobs -p should match one of the PIDs from nvidia-smi.

Tim
7

For TF 2.4+, this is listed as the "official" way on the TensorFlow website to check whether TF is using the GPU or not:

>>> import tensorflow as tf
>>> print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
Num GPUs Available:  2
Aryan
6

You can check if you are currently using the GPU by running the following code:

import tensorflow as tf
tf.test.gpu_device_name()

If the output is '', it means you are using the CPU only;
If the output is something like /device:GPU:0, it means the GPU works.


And use the following code to check which GPU you are using:

from tensorflow.python.client import device_lib 
device_lib.list_local_devices()
Hu Xixi
6

Put this near the top of your jupyter notebook. Comment out what you don't need.

# confirm TensorFlow sees the GPU
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())

# confirm Keras sees the GPU (for TensorFlow 1.X + Keras)
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0

# confirm PyTorch sees the GPU
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))

NOTE: With the release of TensorFlow 2.0, Keras is now included as part of the TF API.

Originally answered here.

Paul Williams
6
>>> import tensorflow as tf 
>>> tf.config.list_physical_devices('GPU')

2020-05-10 14:58:16.243814: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-05-10 14:58:16.262675: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-10 14:58:16.263119: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 6GB computeCapability: 6.1
coreClock: 1.7715GHz coreCount: 10 deviceMemorySize: 5.93GiB deviceMemoryBandwidth: 178.99GiB/s
2020-05-10 14:58:16.263143: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-05-10 14:58:16.263188: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-05-10 14:58:16.264289: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-05-10 14:58:16.264495: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-05-10 14:58:16.265644: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-05-10 14:58:16.266329: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-05-10 14:58:16.266357: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-05-10 14:58:16.266478: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-10 14:58:16.266823: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-10 14:58:16.267107: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

As suggested by @AmitaiIrron:

This section indicates that a GPU was found:

2020-05-10 14:58:16.263119: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:

pciBusID: 0000:01:00.0 name: GeForce GTX 1060 6GB computeCapability: 6.1
coreClock: 1.7715GHz coreCount: 10 deviceMemorySize: 5.93GiB deviceMemoryBandwidth: 178.99GiB/s

And here, that it got added as an available physical device:

2020-05-10 14:58:16.267107: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
bLeDy
5

The following will also return the name of your GPU devices.

import tensorflow as tf
tf.test.gpu_device_name()
Timbus Calin
Maz
5

I found the snippets below very handy for testing the GPU.

Tensorflow 2.0 Test

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print (sess.run(c))

Tensorflow 1 Test

import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print (sess.run(c))
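
Note that the "Tensorflow 2.0 Test" above still goes through the compat.v1 Session API. A native, eager TF 2.x version of the same check might look like this (a sketch):

import tensorflow as tf

tf.debugging.set_log_device_placement(True)  # log where each op runs

with tf.device('/GPU:0'):  # should fail if no GPU is visible
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)

print(c)  # eager execution: no Session needed
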
ajayramesh
5

For Tensorflow 2.0

import tensorflow as tf

tf.test.is_gpu_available(
    cuda_only=False,
    min_cuda_compute_capability=None
)

source here

Another option is:

tf.config.experimental.list_physical_devices('GPU')
ChaosPredictor
5

In newer versions of TF (> 2.1), the recommended way to check whether TF is using the GPU is:

tf.config.list_physical_devices('GPU')
Aadil Srivastava
4

Run this command in Jupyter or your IDE to check whether TensorFlow is using a GPU: tf.config.list_physical_devices('GPU')

skulz00
4

Tensorflow 2.1

A simple calculation that can be verified with nvidia-smi for memory usage on the GPU.

import tensorflow as tf 

c1 = []
n = 10

def matpow(M, n):
    if n < 1: #Abstract cases where n < 1
        return M
    else:
        return tf.matmul(M, matpow(M, n-1))

with tf.device('/gpu:0'):
    a = tf.Variable(tf.random.uniform(shape=(10000, 10000)), name="a")
    b = tf.Variable(tf.random.uniform(shape=(10000, 10000)), name="b")
    c1.append(matpow(a, n))
    c1.append(matpow(b, n))
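
If you want a rough speed comparison on top of the memory check, you can time the same kind of computation on CPU and GPU (a sketch; the device strings assume a single visible GPU, and the first GPU call also pays one-time startup costs):

import time
import tensorflow as tf

def timed_matmul(device, size=4000):
    # Time one large matrix multiplication on the given device.
    with tf.device(device):
        x = tf.random.uniform((size, size))
        start = time.time()
        y = tf.matmul(x, x)
        _ = y.numpy()  # copy back to host so the op has definitely finished
    return time.time() - start

print("CPU:", timed_matmul('/CPU:0'))
if tf.config.list_physical_devices('GPU'):
    print("GPU:", timed_matmul('/GPU:0'))
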
cannin
3

This is the line I am using to list devices available to tf.session directly from bash:

python -c "import os; os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'; import tensorflow as tf; sess = tf.Session(); [print(x) for x in sess.list_devices()]; print(tf.__version__);"

It will print the available devices and the TensorFlow version, for example:

_DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 268435456, 10588614393916958794)
_DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_GPU:0, XLA_GPU, 17179869184, 12320120782636586575)
_DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 13378821206986992411)
_DeviceAttributes(/job:localhost/replica:0/task:0/device:GPU:0, GPU, 32039954023, 12481654498215526877)
1.14.0
y.selivonchyk
3

You have some options to test whether GPU acceleration is being used by your TensorFlow installation.

You can type the following commands on three different platforms.

import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
  1. Jupyter Notebook - Check the console which is running the Jupyter Notebook. You will be able to see the GPU being used.
  2. Python Shell - You will be able to see the output directly. (Note: do not assign the output of the second command to the variable 'sess'; if that helps.)
  3. Spyder - Type the following command in the console:

    import tensorflow as tf
    tf.test.is_gpu_available()

1

If you are using TensorFlow 2.0, you can use this to show the devices:

with tf.compat.v1.Session() as sess:
  devices = sess.list_devices()
devices
Doug
1

If you are using TensorFlow 2.x, use:

sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))
Hari Krishnan
Cheptii
1

I found the simplest and most comprehensive approach. Just set tf.debugging.set_log_device_placement(True) and you should see whether ops are actually run on the GPU, e.g. Executing op _EagerConst in device /job:localhost/replica:0/task:0/device:GPU:0

More in the docs: https://www.tensorflow.org/guide/gpu#logging_device_placement

w00dy
1

Maybe try this:

python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

to see whether the system returns the tensor, according to the TensorFlow site.

  • Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center. – Community Jan 06 '22 at 13:02