
I have installed Keras with GPU support in R, on top of TensorFlow with GPU support, following these steps:

https://towardsdatascience.com/installing-tensorflow-with-cuda-cudnn-and-gpu-support-on-windows-10-60693e46e781

If I run the Boston Housing example code from the book Deep Learning with R, I get this output:

[Screenshot of the console output after running the example]

Can I conclude that the code runs on the GPU?

Or does this line from the screenshot above indicate an error: "GPU libraries are statically linked, skip dlopen check"?

While the code is running, the GPU is only at about 3% capacity while the CPU runs at 20-25%. The code is NOT running any faster than when I originally ran it without GPU support.

Thank you!

user2165379
  • Though I don't know the CUDA question, the line about *"statically linked, skip dlopen check"* to me suggests just the method by which the libraries were created. A library/shared-object can be statically linked, meaning all dependent macros/functions/code is baked into the object (much larger); or dynamically linked, where the dependent functions from other shared objects/libraries are linked in at run-time (smaller). There are advantages to both. I believe that line may be a red herring in the GPU-usage problem. – r2evans Nov 17 '19 at 19:27
  • @r2evans Thank you! Good to know that this is a method and not an error. – user2165379 Nov 18 '19 at 10:55

1 Answer


Yes, TensorFlow is running with the GPU enabled. Boston Housing is a relatively small dataset and probably does not benefit much from using the GPU. The line below indicates it is running on the GPU: "Created tensorflow device (/job:localhost/replica:0/task:0/device:GPU:0)".
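As an additional check (a minimal sketch, not part of the original answer; on TF 1.14/1.15 and 2.0 list_physical_devices sits under tf$config$experimental, on later 2.x releases also directly under tf$config), you can ask TensorFlow from R which GPUs it can see:

library(tensorflow)

# Returns a list of PhysicalDevice objects; an empty list means no GPU is visible
gpus <- tf$config$experimental$list_physical_devices("GPU")
length(gpus)  # greater than 0 when TensorFlow has detected the GPU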

From the TensorFlow guide, you can set tf.debugging.set_log_device_placement(True) in order to see explicitly where each operation is running. The R equivalent is below.

library(tensorflow)

# Print which device (CPU or GPU) each TensorFlow operation is placed on
tf$debugging$set_log_device_placement(TRUE)
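For reference, a minimal usage sketch (assuming TensorFlow 2.x with eager execution, continuing from the two lines above): running any small operation now prints where it was placed.

# With set_log_device_placement(TRUE) active, each executed op logs a line like
# "Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0"
a <- tf$constant(matrix(c(1, 2, 3, 4, 5, 6), nrow = 2, ncol = 3))
b <- tf$constant(matrix(c(1, 2, 3, 4, 5, 6), nrow = 3, ncol = 2))
tf$matmul(a, b)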

smingerson
  • @smingerson Thank you for your explanation! Good to know that it is running on the GPU! I will try a more complex dataset to see whether there is more of a time difference and more GPU usage in that case. I tried to run the code from the GitHub example (with a, b and c), but it gives an error: Error in py_get_attr_impl(x, name, silent) : AttributeError: module 'tensorflow' has no attribute 'Session' – user2165379 Nov 18 '19 at 10:54
  • I think that the release of TF2 changed the API (a compat-API sketch follows this comment thread). Loading tensorflow myself, I cannot find the relevant option. Try the MNIST CNN example from the RStudio TF site; it was large enough to show GPU usage. Keep in mind TF still uses the CPU, because not all operations make sense on, or are implemented for, the GPU. Also see https://stackoverflow.com/questions/53887816/why-tensorflow-gpu-is-still-using-cpu. I think this will satisfactorily explain what you are seeing. – smingerson Nov 18 '19 at 13:28
  • @smingerson Thank you. I will try it and compare with the MNIST example. That is an interesting link! – user2165379 Nov 18 '19 at 13:35
  • See the updated answer. When I was checking for the R equivalent before, I wasn't on a machine with GPU-enabled TensorFlow. Oops! – smingerson Nov 18 '19 at 13:53
  • @smingerson Thanks! Now it says: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll – user2165379 Nov 18 '19 at 14:41
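For completeness, a minimal sketch (not from the answer or comments) of how the old session-based example with a, b and c can still be run under TensorFlow 2.x, which removed tf$Session, by going through the tf$compat$v1 shim:

library(tensorflow)

# TF 2.x removed tf$Session; the TF1-style example still runs through the
# compatibility module, with device placement logged via ConfigProto.
tf1 <- tf$compat$v1
tf1$disable_eager_execution()

a <- tf1$constant(matrix(c(1, 2, 3, 4, 5, 6), nrow = 2, ncol = 3), name = "a")
b <- tf1$constant(matrix(c(1, 2, 3, 4, 5, 6), nrow = 3, ncol = 2), name = "b")
cc <- tf1$matmul(a, b)

# log_device_placement prints which device (CPU/GPU) each op was assigned to
sess <- tf1$Session(config = tf1$ConfigProto(log_device_placement = TRUE))
print(sess$run(cc))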