
What libraries or functions should be used for an objective comparison of CPU and GPU performance? What caveats should one be warned about for an accurate evaluation?

I am using Ubuntu with a device of compute capability 2.1 and the CUDA 5 toolkit.

– erogol
  • gettimeofday? Works for me. – Alex Apr 28 '13 at 00:34
  • for timing cuda activity, you might be interested in [this](http://stackoverflow.com/questions/13676102/strategies-for-timing-cuda-kernels-pros-and-cons/) You should be able to find plenty of references for timing CPU-only code. – Robert Crovella Apr 28 '13 at 00:47
  • @windfinder it works for CPU measurement, but what about GPU measurement? – erogol Apr 28 '13 at 08:07
  • For GPU measurement you can try `nvprof`: [link](http://docs.nvidia.com/cuda/profiler-users-guide/index.html#nvprof-overview). – Yu Zhou Apr 28 '13 at 23:02

1 Answer


I'm using the following:

CPU - returns the microseconds elapsed between tic() and toc(), with about 2 microseconds of resolution

#include <time.h>

struct timespec  init;
struct timespec  after;

/* Record the starting time. */
void tic() { clock_gettime(CLOCK_MONOTONIC, &init); }

/* Return the microseconds elapsed since the last tic(). */
double toc() {
    clock_gettime(CLOCK_MONOTONIC, &after);
    double us = (after.tv_sec - init.tv_sec) * 1000000.0;
    return us + (after.tv_nsec - init.tv_nsec) / 1000.0;
}
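
A quick usage sketch (assuming the tic()/toc() helpers above live in the same file; cpu_work() is a made-up workload, not part of the original answer):

#include <stdio.h>

/* Hypothetical CPU workload, used only to have something to time. */
long long cpu_work(void) {
    long long sum = 0;
    for (long long i = 0; i < 100000000LL; ++i)
        sum += i;
    return sum;
}

int main(void) {
    tic();                             /* start the host timer */
    long long result = cpu_work();     /* code being measured */
    double elapsed_us = toc();         /* elapsed microseconds */
    printf("result = %lld, CPU Time [us] %f\n", result, elapsed_us);
    return 0;
}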

GPU - returns the milliseconds elapsed between the start and stop events

#include <cuda_runtime.h>
#include <iostream>
#include <iomanip>

float time;
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord(start, 0);        // mark the start of the timed region on stream 0

// Instructions

cudaEventRecord(stop, 0);         // mark the end of the timed region
cudaEventSynchronize(stop);       // block the host until the stop event has completed
cudaEventElapsedTime(&time, start, stop);   // elapsed time in milliseconds
std::cout << std::setprecision(10) << "GPU Time [ms] " << time << std::endl;
cudaEventDestroy(start);
cudaEventDestroy(stop);
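
One caveat for an accurate evaluation: kernel launches are asynchronous with respect to the host, so if you time a kernel with a host-side clock (such as the tic()/toc() pair above) you must call cudaDeviceSynchronize() before stopping the timer, otherwise you only measure the launch overhead. The cudaEvent approach is not affected, because cudaEventSynchronize(stop) blocks until the stop event has completed. A minimal sketch of host-side timing (scaleKernel is a made-up example kernel):

#include <stdio.h>
#include <time.h>
#include <cuda_runtime.h>

// Made-up kernel, used only to have something to time.
__global__ void scaleKernel(float *d_data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d_data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    scaleKernel<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();   // without this, the host clock stops before the kernel finishes
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                (t1.tv_nsec - t0.tv_nsec) / 1000000.0;
    printf("Host-measured kernel time [ms]: %f\n", ms);

    cudaFree(d_data);
    return 0;
}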

EDIT

For a more complete answer, please see Timing CUDA operations.

– Vitality