I am currently writing an application in C and CUDA. I got the algorithm working in pure C and then converted it to CUDA. The results are fine, and I am now in the process of optimizing my code.
I profile the time from the start of my memcopies to getting the results back, using a simple clock()-based timer:

    #include <time.h>

    clock_t start, end;
    double cpu_time_used;

    start = clock();
    /* . . . my memcopies and my kernel . . . */
    end = clock();
    cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
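For reference, here is a minimal sketch of the same measurement done with CUDA events instead of clock(), in case the host timer itself plays a role; the placeholder comment stands for my real memcopies and kernel:

    #include <cuda_runtime.h>

    cudaEvent_t evStart, evStop;
    float ms = 0.0f;
    cudaEventCreate(&evStart);
    cudaEventCreate(&evStop);

    cudaEventRecord(evStart, 0);
    /* . . . my memcopies and my kernel . . . */
    cudaEventRecord(evStop, 0);
    cudaEventSynchronize(evStop);    /* block until all GPU work before evStop has finished */
    cudaEventElapsedTime(&ms, evStart, evStop);

    cudaEventDestroy(evStart);
    cudaEventDestroy(evStop);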
What surprises me is that the processing time drops dramatically when I run the whole program several times in a row. If I run it only once, I average around 0.9 seconds; running it ten times in a row, I can get down to 0.1 seconds.

My real concern is that the Visual Profiler calculates its statistics based on 15 runs, so my slow first run is drowned out by the 14 much faster runs that follow.
My program will later be run only once in a while, so what I want to optimize is the time of the first run.
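To see how much of that first run is one-time CUDA initialization rather than my actual work, I could time the context creation separately. A minimal sketch, assuming the common cudaFree(0) idiom does force context creation (which is my understanding):

    #include <time.h>
    #include <cuda_runtime.h>

    clock_t t0 = clock();
    cudaFree(0);                          /* forces lazy CUDA context creation now */
    clock_t t1 = clock();
    /* . . . my memcopies and my kernel . . . */
    clock_t t2 = clock();

    double init_time   = (double) (t1 - t0) / CLOCKS_PER_SEC;  /* context setup */
    double kernel_time = (double) (t2 - t1) / CLOCKS_PER_SEC;  /* memcopies + kernel */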
My question is thus: is there a way to solve this, or at least to find out where it comes from?

Thanks!
EDIT:
I am running Windows 7 with the CUDA 4.2 Toolkit, on a netbook with a compute capability 2.1 GPU.