0

I have a hybrid-graphics laptop running Windows 7, and I intend to run a C++ program that also contains CUDA code. When it is the GPU's turn to do work, my NVIDIA GPU takes a few seconds just to start up, and I suspect it takes a few more seconds to warm up. Is there any way to start the GPU as soon as the program launches (for example, in the first line of the main() function)?

Thanks in advance.

Alexander1991
  • Mostly the warm-up issue is because the GPU needs to set up a context with the CPU. More on CUDA context setup - http://stackoverflow.com/questions/10415204/how-to-create-a-cuda-context – Divakar Feb 26 '14 at 14:58
  • 1
    This "warm-up" delay is primarily due to loading the CUDA DLLs into memory. That's why Robert suggested calling cudaFree(0) early in your program's life - to force the loading of the large CUDA DLLs at program startup. – Paul May 19 '14 at 21:15

2 Answers

3

I propose a better option. Since you are using Windows, have you considered forcing the program to start with the NVIDIA GPU? If you are using NVIDIA Optimus (a laptop without a physical switch for switching graphics cards), try this:

Right-click the program that uses CUDA; in the context menu there is an option named "Run with graphics processor", and under it choose "High-performance NVIDIA processor". Note that if you are using Visual Studio, you can start Visual Studio this way as well. This way, your non-CUDA card will not be visible to the program :-)

Based on my experience, if you are trying to do CUDA/OpenGL interop, you will run into problems without this method. Sometimes the OpenGL context is created on your non-NVIDIA card while the CUDA context is created on the NVIDIA card, leading to bizarre errors.
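To double-check which device CUDA actually picked up after launching this way, here is a minimal sketch using the standard CUDA runtime API (the filename is hypothetical; CUDA only enumerates CUDA-capable NVIDIA devices, so an empty list means the process cannot see the NVIDIA GPU at all):

```
// check_device.cu - list the CUDA devices visible to this process.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // On an Optimus laptop you should see your NVIDIA GPU listed here.
        std::printf("Device %d: %s\n", i, prop.name);
    }
    return 0;
}
```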

Maghoumi
1

Try putting a:

cudaSetDevice(0);

as the first line of your main function. Alternatively, you could try:

cudaFree(0);
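Putting the two suggestions together, a minimal sketch of a warmed-up main (the error handling is added for illustration; device index 0 is an assumption, and the rest of the program is a placeholder):

```
// warmup.cu - force CUDA context creation at program start.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Select the GPU and trigger lazy context/DLL initialization up front,
    // so the first real kernel launch doesn't pay the startup cost.
    cudaError_t err = cudaSetDevice(0);
    if (err == cudaSuccess) {
        err = cudaFree(0); // harmless call that forces context creation
    }
    if (err != cudaSuccess) {
        std::printf("CUDA init failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // ... rest of the program: the GPU context is now established ...
    return 0;
}
```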
Robert Crovella