
I am currently writing an application using C and CUDA. I have the algorithm working in pure C and converted it to CUDA.

The results are fine, and I am now in the process of optimizing my code.

I profile the time it takes for the kernel to get the results back, using a simple:

clock_t start, end;
double cpu_time_used;
start = clock();

/* ... my memcopies and my kernel ... */

end = clock();
cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
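
For reference, the same span could also be timed with CUDA events instead of clock(); a minimal, untested sketch (the event variable names are just illustrative):

cudaEvent_t startEvent, stopEvent;
cudaEventCreate(&startEvent);
cudaEventCreate(&stopEvent);

cudaEventRecord(startEvent, 0);

/* ... my memcopies and my kernel ... */

cudaEventRecord(stopEvent, 0);
cudaEventSynchronize(stopEvent);  /* wait for all preceding GPU work to finish */

float elapsed_ms = 0.0f;
cudaEventElapsedTime(&elapsed_ms, startEvent, stopEvent);

cudaEventDestroy(startEvent);
cudaEventDestroy(stopEvent);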

What surprises me is that the processing time drops dramatically when I run the whole program several times in a row. When I run it only once, I average around 0.9 seconds; running it ten times in a row, I can get down to 0.1 seconds.

My real concern is that the Visual Profiler calculates its statistics based on 15 runs, which causes my slow first run to be overwhelmed by the 14 much faster ones.

My program will later only be run once in a while, so what I want to optimize is the time of the first run.

My question is thus: is there a way to solve this, or at least to know where it comes from?

Thanks!

EDIT:

I am running Windows 7 and the CUDA 4.2 Toolkit on a netbook (compute capability 2.1 device).

jlengrand
  • Instead of performing your task once per run, do it hundreds of times in a loop if feasible. Clock it and observe the average throughput. I guess memory caching... – जलजनक Oct 15 '12 at 14:28
  • random guess, but if you're running on a modern OS, they have really good caching systems (e.g. TurboBoost). I'm just grasping at straws here, but I'd try turning off any of those if possible (also I'm assuming your program completely clears from memory between runs) – im so confused Oct 15 '12 at 14:29
  • @SparKot's solution is better – im so confused Oct 15 '12 at 14:29
  • 1
    Linux or windows? do you have persistence mode set in the driver? use nvidia-smi – Robert Crovella Oct 15 '12 at 14:30
  • @SparKot Yep, this is indeed the kind of thing I am afraid of. I would like to avoid that entering my profiling... I am also looking at CUDA context creation, which would happen on the first run but not afterwards – jlengrand Oct 15 '12 at 14:44
  • 4
    The first memory allocation in the GPU and the first kernel call have an overhead due to initialization purposes. What people usually do is create a dummy kernel, for example, initialize an array, and then get the time of their whole application. – pQB Oct 15 '12 at 15:01
  • To eliminate the initialization cost of a CUDA device, call cudaThreadSynchronize() or cudaFree(0) at the start of your application, or before starting the timer (see the sketch just after these comments). http://stackoverflow.com/questions/11704681/cuda-cutil-timer-confusion-on-elapsed-time – phoad Oct 15 '12 at 16:13
  • 1
    This isn't uncommon in benchmarking generally; often there's some sort of initialization (whether it's loading cache, JIT-ing code, initializing a library, spinning up disk, what have you) which takes a finite amount of time which weighs down the first run of a timing and largely goes away afterwards. You'll often hear the terms "cold start" or "warm start" to distinguish benchmarks which incur or don't incur the cost of that initialization, respectively. Using either timing is defensible, depending on what's relevant for your case, as long as you're clear about what you're timing. – Jonathan Dursi Oct 15 '12 at 16:32
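
A minimal sketch of the warm-up suggested in the comments above, paying the one-time CUDA initialization cost before the timer starts (untested, reusing the clock()-based timing from the question):

/* Warm-up: force CUDA context creation and driver initialization
   before the timed region. cudaFree(0) is a common idiom; a trivial
   dummy kernel launch works as well. */
cudaFree(0);
cudaThreadSynchronize();  /* cudaDeviceSynchronize() on newer toolkits */

start = clock();

/* ... my memcopies and my kernel ... */

end = clock();
cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;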

1 Answer


If your objective is a quick startup, make sure your executable contains object code for the GPU architecture it will be run on.

You can compile a "fat binary" with object code for several architectures, where a suitable version of the code is selected at runtime. You can (and should!) also include PTX code, in case none of the object code versions is suitable (e.g. to support future devices).

Just supply multiple -gencode options to nvcc, one for each physical architecture ("sm_20") you want to include object code for, as well as at least one with a virtual architecture ("compute_20") to generate PTX code.
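
For example, for the compute capability 2.1 device mentioned in the question, the build line might look something like this (file names are placeholders; adjust the architectures to your own targets):

nvcc -gencode arch=compute_20,code=sm_21 -gencode arch=compute_20,code=compute_20 -o myapp myapp.cu

The first -gencode embeds object code for sm_21 devices, while the second embeds PTX for the compute_20 virtual architecture, which the driver can JIT-compile for devices that have no matching object code.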

tera