
When using TensorFlow I do not want to bother installing CUDA. Now, after installing the current version (2.4.1) with pip and running any code, I am getting a bunch of error messages:

2021-02-22 18:03:10.286577: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-22 18:03:10.286603: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-02-22 18:03:11.427455: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-22 18:03:11.427576: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-02-22 18:03:11.427590: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-02-22 18:03:11.427607: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (XXX): /proc/driver/nvidia/version does not exist
2021-02-22 18:03:11.427774: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-02-22 18:03:11.427918: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set

These are generated by the following three lines of code:

import tensorflow as tf
from tensorflow import keras
model = keras.Sequential()

I am aware of the answer to the similar question "Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation".

But the solution suggested there is to raise the minimum log level by setting an environment variable. I do want to keep receiving warnings about inconsistencies in my own code, though.

Is there really no way to disable these messages alone and make TensorFlow accept that I do not want to use the GPU (or do not have one)?

1 Answer


You can use the code below to disable these informational (I) and warning (W) messages.

import os
# Must be set before TensorFlow is imported, otherwise the C++-level logs still appear
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
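
For reference, here is a minimal sketch (assuming TensorFlow 2.4.x, as in the question) that combines this with the question's example. The accepted values are '0' (show everything), '1' (hide INFO), '2' (hide INFO and WARNING) and '3' (hide everything including ERROR). The optional tf.config.set_visible_devices([], 'GPU') call additionally tells TensorFlow not to use any GPU at runtime; note that it does not by itself silence the import-time CUDA loader messages, which is what the environment variable handles.

import os

# Hide the C++ INFO and WARNING logs; must come before importing TensorFlow
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow import keras

# Optional: make sure TensorFlow never tries to place ops on a GPU
tf.config.set_visible_devices([], 'GPU')

model = keras.Sequential()  # no CUDA loader warnings should be printed now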