0

I have a simple DNN and I want to measure the GPU prediction time. I do not care about I/O events and data transfers; I only care about the time `model.predict()` takes to complete on the GPU. I am using TensorFlow 2.5. I have tried using Python's `time` module, but I do not think that is the correct way.

Is there a way I can measure that time?

gtsopus
  • Why do you think `time` is not the correct way? – ihavenoidea Oct 13 '21 at 18:03
  • @ihavenoidea Because it takes into account the time to load the data onto the GPU from the CPU and then to copy the result from the GPU back to the CPU. I only want to measure the GPU time, not the total time. – gtsopus Oct 13 '21 at 18:12
  • 2
    As the CPU-GPU transfers tend to be asynchronous, it can be really hard to measure pure prediction time. For me the best option was to use the NVIDIA Visual Profiler; it shows you all the parallel calls etc. (a TensorFlow-based profiler sketch follows these comments). – kacpo1 Oct 13 '21 at 18:37
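Along the lines of the profiler suggestion above, TensorFlow 2.x ships a built-in profiler that records GPU kernels separately from host/device copies. A minimal sketch, assuming a Keras model `model` and an input batch `x` (the log directory name is an arbitrary placeholder):

import tensorflow as tf

# `model` (a Keras model) and `x` (an input batch) are assumed to exist already
model.predict(x)  # warm-up call so one-time graph tracing is not in the trace

tf.profiler.experimental.start("profile_logdir")  # arbitrary log directory name
model.predict(x)                                  # the prediction being profiled
tf.profiler.experimental.stop()

Opening the log directory in TensorBoard's Profile tab (with the profiler plugin installed) shows a trace where GPU kernel time and memcpy (host/device transfer) events appear on separate rows, so the pure on-GPU prediction time can be read off without the transfers.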

1 Answer

0

This was answered in the post "How do I get time of a Python program's execution?" by newacct (edited by Shidouuu).

There are plenty of useful answers there that might do exactly what you want. Either way, this is the solution I used, and it worked for me.

import time

start_time = time.process_time()  # CPU time used by this process so far
main()                            # the code being timed, e.g. the prediction
print(time.process_time() - start_time, "seconds")

time.process_time() returns the processor time, which allows us to measure only the CPU time used by this process. It replaces time.clock(), which the documentation used to recommend "for benchmarking Python or timing algorithms" but which was deprecated in Python 3.3 and removed in Python 3.8.
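If the goal is wall-clock time per predict() call rather than CPU time, a common alternative (again only a sketch, assuming a model `model` and an input batch `x` as in the question) is time.perf_counter() with a warm-up call, since the first predict() call also pays for one-time graph tracing:

import time

# `model` and `x` are assumed to be defined as in the question
model.predict(x)  # warm-up: excludes one-time tracing/initialization overhead

start_time = time.perf_counter()
model.predict(x)
print(time.perf_counter() - start_time, "seconds")

Note that predict() only returns after the results have been copied back to the host, so this still includes host/device transfers; separating the pure GPU kernel time requires a profiler, as mentioned in the comments on the question.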

Hugofac
  • Doesn't this also include the time of data transfer from the CPU to the GPU and vice versa? Like I said in my post, I only want to measure the GPU time, if possible. – gtsopus Oct 13 '21 at 18:14
  • I believe you are right; personally, I think that transfer time will be negligible, but it does not solve your problem perfectly indeed. At least having only the execution time is still better than having the total time. – Hugofac Oct 13 '21 at 18:44