48

I'm trying to compare GPU to CPU performance. For the NVIDIA GPU I've been using the cudaEvent_t types to get very precise timings.
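For reference, the GPU side looks roughly like this (a trimmed sketch of the usual cudaEvent_t pattern):

#include <cuda_runtime.h>

cudaEvent_t start, stop;
float gpuTimeMs = 0.0f;

cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start, 0);
// ... launch the kernel being timed ...
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);   // wait until the stop event has completed

cudaEventElapsedTime(&gpuTimeMs, start, stop);   // elapsed time in milliseconds

cudaEventDestroy(start);
cudaEventDestroy(stop);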

For the CPU I've been using the following code:

#include <time.h>

// Timers
clock_t start, stop;
float elapsedTime = 0;

// Capture the start time
start = clock();

// Do something here
.......

// Capture the stop time
stop = clock();

// Retrieve time elapsed in milliseconds
elapsedTime = (float)(stop - start) / (float)CLOCKS_PER_SEC * 1000.0f;

Apparently, that piece of code is only good if you're counting in seconds. Also, the results sometimes come out quite strange.

Does anyone know of some way to create a high resolution timer in Linux?

sj755
  • See this question: http://stackoverflow.com/questions/700392/high-resolution-timing-part-of-your-code – Steve-o Jul 19 '11 at 15:26

7 Answers

66

Check out clock_gettime, which is a POSIX interface to high-resolution timers.

If, having read the manpage, you're left wondering about the difference between CLOCK_REALTIME and CLOCK_MONOTONIC, see Difference between CLOCK_REALTIME and CLOCK_MONOTONIC?
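In short: for measuring an interval, CLOCK_MONOTONIC is usually what you want, since CLOCK_REALTIME can jump when the system clock is adjusted. A minimal sketch (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    // ... work to be timed ...
    clock_gettime(CLOCK_MONOTONIC, &end);

    long long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                         + (end.tv_nsec - start.tv_nsec);
    printf("elapsed: %lld ns\n", elapsed_ns);
    return 0;
}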

See the following page for a complete example: http://www.guyrutenberg.com/2007/09/22/profiling-code-using-clock_gettime/

#include <iostream>
#include <time.h>
using namespace std;

timespec diff(timespec start, timespec end);

int main()
{
    timespec time1, time2;
    volatile int temp = 0;  // volatile so the busy loop below isn't optimised away
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time1);
    for (int i = 0; i < 242000000; i++)
        temp += temp;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time2);
    cout << diff(time1, time2).tv_sec << ":" << diff(time1, time2).tv_nsec << endl;
    return 0;
}

timespec diff(timespec start, timespec end)
{
    timespec temp;
    if ((end.tv_nsec - start.tv_nsec) < 0) {
        // borrow one second's worth of nanoseconds
        temp.tv_sec = end.tv_sec - start.tv_sec - 1;
        temp.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec;
    } else {
        temp.tv_sec = end.tv_sec - start.tv_sec;
        temp.tv_nsec = end.tv_nsec - start.tv_nsec;
    }
    return temp;
}
NPE
  • Just so I'm clear about what I've read, can you give me an example on how you would use clock_gettime to find the time elapsed in nanoseconds? – sj755 Jul 19 '11 at 16:05
  • @seljuq70: I've added a link to a complete example. – NPE Jul 19 '11 at 17:01
  • 8
The OP posted C, but your answer is C++. Still useful, but not on my ZedBoard, which has no C++ libs :D To fix, prefix the timespec declarations with struct and strip out the couts. – Owl Aug 06 '18 at 10:32
  • 14
so the answer explicitly speaks about `CLOCK_REALTIME` and `CLOCK_MONOTONIC`, but we end up with `CLOCK_PROCESS_CPUTIME_ID` in the code sample? Can someone clear this up? Which one should we go with? – itMaxence Aug 30 '18 at 10:05
  • @itMaxence Check this out: https://stackoverflow.com/a/3527632/9732482 – jplozier Jun 10 '21 at 13:48
  • my point was that speaking of a difference between A and B makes no sense if you end up having C in the sample code (without mentioning the `C` part once) – itMaxence Jun 11 '21 at 14:09
  • According to this linked man page, `CLOCK_PROCESS_CPUTIME_ID` measures time "consumed by this process". Which depending on the use case may not actually be what you want. If the process is stalling due to blocking syscalls or whatever then I would like to measure that too. – jacwah Dec 10 '21 at 10:58
22

To summarise the information presented so far, these are the two functions required for typical applications.

#include <time.h>

// call this function to start a nanosecond-resolution timer
struct timespec timer_start(){
    struct timespec start_time;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start_time);
    return start_time;
}

// call this function to end a timer, returning nanoseconds elapsed as a long
long timer_end(struct timespec start_time){
    struct timespec end_time;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end_time);
    long diffInNanos = (end_time.tv_sec - start_time.tv_sec) * (long)1e9 + (end_time.tv_nsec - start_time.tv_nsec);
    return diffInNanos;
}

Here is an example of how to use them in timing how long it takes to calculate the variance of a list of input.

struct timespec vartime = timer_start();  // begin a timer called 'vartime'
double variance = var(input, MAXLEN);  // perform the task we want to time
long time_elapsed_nanos = timer_end(vartime);
printf("Variance = %f, Time taken (nanoseconds): %ld\n", variance, time_elapsed_nanos);
Alex
  • 7
    Aren't you ignoring the `tv_sec` of the `timespec`? Also, why `CLOCK_PROCESS_CPUTIME_ID` rather than `CLOCK_MONOTONIC`? – amaurea Feb 28 '14 at 17:37
  • The poster is comparing CPU to GPU performance, yet you are giving code that measures CPU time (CLOCK_PROCESS_CPUTIME_ID). This means he will see speed-ups of many orders of magnitude. For CPU/GPU performance (this question) always use wall time. Remove this answer. – TimZaman Jun 22 '16 at 12:15
  • 1
    @TimZaman Yep, realtime might be better in the poster's use case. I'm not going to take down an answer though, obviously people have found it useful. Cheers. – Alex Jun 23 '16 at 08:29
  • Before using CLOCK_PROCESS_CPUTIME_ID you should run `grep constant_tsc /proc/cpuinfo` to understand how this clock works. If your CPU does not support *constant_tsc*, the time reflects actual CPU clock cycles. If the flag is set, the clock is adjusted to account for current CPU frequency. I give this a -1 because *time_elapsed_nanos* is incorrectly calculated. [This](https://www.guyrutenberg.com/2007/09/22/profiling-code-using-clock_gettime/) may be a better approach. – fredk Aug 16 '16 at 14:41
1

After reading this thread I started testing the code for clock_gettime against C++11's chrono, and they don't seem to match.

There is a huge gap between them!

A std::chrono::seconds(1) sleep seems to be equivalent to only ~70,000 of the clock_gettime units.

#include <ctime>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <thread>
#include <chrono>
#include <iomanip>
#include <vector>
#include <mutex>

timespec diff(timespec start, timespec end);
timespec get_cpu_now_time();
std::vector<timespec> get_start_end_pairs();
std::vector<timespec> get_start_end_pairs2();
void output_deltas(const std::vector<timespec> &start_end_pairs);

//=============================================================
int main()
{
    std::cout << "Hello waiter" << std::endl; // flush is intentional
    std::vector<timespec> start_end_pairs = get_start_end_pairs2();
    output_deltas(start_end_pairs);

    return EXIT_SUCCESS;
}

//=============================================================
std::vector<timespec> get_start_end_pairs()
{
    std::vector<timespec> start_end_pairs;
    for (int i = 0; i < 20; ++i)
    {
        start_end_pairs.push_back(get_cpu_now_time());
        std::this_thread::sleep_for(std::chrono::seconds(1));
        start_end_pairs.push_back(get_cpu_now_time());
    }

    return start_end_pairs;
}


//=============================================================
std::vector<timespec> get_start_end_pairs2()
{
    std::mutex mu;
    std::vector<std::thread> workers;
    std::vector<timespec> start_end_pairs;
    for (int i = 0; i < 20; ++i) {
        workers.emplace_back([&]()->void {
            auto start_time = get_cpu_now_time();
            std::this_thread::sleep_for(std::chrono::seconds(1));
            auto end_time = get_cpu_now_time();
            std::lock_guard<std::mutex> locker(mu);
            start_end_pairs.emplace_back(start_time);
            start_end_pairs.emplace_back(end_time);
        });
    }

    for (auto &worker: workers) {
        if (worker.joinable()) {
            worker.join();
        }
    }

    return start_end_pairs;
}

//=============================================================
void output_deltas(const std::vector<timespec> &start_end_pairs)
{
    std::cout << "size: " << start_end_pairs.size() << std::endl;
    for (auto it_start = start_end_pairs.begin(); it_start < start_end_pairs.end(); it_start += 2)
    {
        auto it_end = it_start + 1;
        auto delta = diff(*it_start, *it_end);

        std::cout
                << std::setw(2)
                << std::setfill(' ')
                << std::distance(start_end_pairs.begin(), it_start) / 2
                << " Waited ("
                << delta.tv_sec
                << "\ts\t"
                << std::setw(9)
                << std::setfill('0')
                << delta.tv_nsec
                << "\tns)"
                << std::endl;
    }
}

//=============================================================
timespec diff(timespec start, timespec end)
{
    timespec temp;
    temp.tv_sec = end.tv_sec-start.tv_sec;
    temp.tv_nsec = end.tv_nsec-start.tv_nsec;

    if (temp.tv_nsec < 0) {
        --temp.tv_sec;
        temp.tv_nsec += 1000000000;
    }
    return temp;
}

//=============================================================
timespec get_cpu_now_time()
{
    timespec now_time;
    memset(&now_time, 0, sizeof(timespec));
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &now_time);

    return now_time;
}

output:

Hello waiter
 0 Waited (0    s       000843254       ns)
 1 Waited (0    s       000681141       ns)
 2 Waited (0    s       000685119       ns)
 3 Waited (0    s       000674252       ns)
 4 Waited (0    s       000714877       ns)
 5 Waited (0    s       000624202       ns)
 6 Waited (0    s       000746091       ns)
 7 Waited (0    s       000575267       ns)
 8 Waited (0    s       000860157       ns)
 9 Waited (0    s       000827479       ns)
10 Waited (0    s       000612959       ns)
11 Waited (0    s       000534818       ns)
12 Waited (0    s       000553728       ns)
13 Waited (0    s       000586501       ns)
14 Waited (0    s       000627116       ns)
15 Waited (0    s       000616725       ns)
16 Waited (0    s       000616507       ns)
17 Waited (0    s       000641251       ns)
18 Waited (0    s       000683380       ns)
19 Waited (0    s       000850205       ns)
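The gap is actually expected: CLOCK_PROCESS_CPUTIME_ID measures CPU time consumed by the process, and a thread blocked in sleep_for consumes almost none, which is why each one-second sleep shows up above as well under a millisecond. To measure wall time here, use CLOCK_MONOTONIC in get_cpu_now_time instead.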
radato
  • I guess ++temp.tv_sec; is a typo and you meant --temp.tv_sec; in the diff function. – Simone-Cu Mar 21 '22 at 17:27
  • it is not a typo; when I subtract the 2 structs, I take into account that there might be a **carry over** – radato Mar 22 '22 at 08:53
  • Yes, understood that. But when you do the carry from sec to nsec, you should subtract 1 from the seconds field and add 1000000000 (1 s) to the nsec field. Let's say `(10s and 900ns) - (5s and 1000ns)` --> `5s and -100ns` --> `4s and (-100+10^9)ns`. The last step decreases the sec, thus doing the carry. – Simone-Cu Mar 23 '22 at 15:42
  • Yes correct, I fixed the answer accordingly – radato Mar 27 '22 at 05:11
1

struct timespec t;
clock_gettime(CLOCK_REALTIME, &t);

There is also CLOCK_REALTIME_HR, but I'm not sure whether it makes any difference.

Karoly Horvath
  • And I am not sure if `CLOCK_REALTIME_HR` is supported. [Question](https://stackoverflow.com/questions/46094769/clock-realtime-nanosecond-precision-support-in-kernel). – gsamaras Sep 07 '17 at 11:13
1

Are you interested in wall time (how much time actually elapses) or cycle count (how many cycles)? In the first case, you should use something like gettimeofday.
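For example, a minimal gettimeofday sketch (microsecond resolution; on modern Linux, clock_gettime with CLOCK_MONOTONIC is generally preferred):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    // ... work to be timed ...
    gettimeofday(&end, NULL);

    long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
                    + (end.tv_usec - start.tv_usec);
    printf("elapsed: %ld us\n", elapsed_us);
    return 0;
}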

The highest resolution timer uses the RDTSC x86 assembly instruction. However, this measures clock ticks, so you should be sure that power saving mode is disabled.
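If you do want raw TSC readings, here is a minimal sketch using the __rdtsc() intrinsic that GCC and Clang provide in x86intrin.h (see the comment below for what the TSC actually counts on modern CPUs):

#include <stdio.h>
#include <x86intrin.h>

int main(void)
{
    unsigned long long start = __rdtsc();
    // ... work to be timed ...
    unsigned long long stop = __rdtsc();
    printf("elapsed: %llu TSC ticks\n", stop - start);
    return 0;
}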

The wiki page for TSC gives a few examples: http://en.wikipedia.org/wiki/Time_Stamp_Counter

Foo Bah
  • On a modern CPU, `rdtsc` correlates 1:1 with wall-clock time, not core clock cycles. It doesn't pause when your process (or the whole CPU) is sleeping, and it runs at constant frequency regardless of turbo / power-saving. Use performance counters to measure actual core clock cycles. e.g. `perf stat awk 'BEGIN {for (i=0 ; i<10000000; i++){}}'`. – Peter Cordes Jul 16 '17 at 01:10
  • I am actually interested in wall time. Your reply hit the spot! – radato Jan 09 '19 at 08:45
  • Is it possible to link your reply to my original comment? – radato Jan 09 '19 at 09:48
0

An epoll implementation: https://github.com/ielife/simple-timer-for-c-language

Use it like this:

timer_server_handle_t *timer_handle = timer_server_init(1024);
if (NULL == timer_handle) {
    fprintf(stderr, "timer_server_init failed\n");
    return -1;
}

ctimer timer1;
timer1.count_ = 3;
timer1.timer_internal_ = 0.5;
timer1.timer_cb_ = timer_cb1;
int *user_data1 = (int *)malloc(sizeof(int));
*user_data1 = 100;
timer1.user_data_ = user_data1;
timer_server_addtimer(timer_handle, &timer1);

ctimer timer2;
timer2.count_ = -1;
timer2.timer_internal_ = 0.5;
timer2.timer_cb_ = timer_cb2;
int *user_data2 = (int *)malloc(sizeof(int));
*user_data2 = 10;
timer2.user_data_ = user_data2;
timer_server_addtimer(timer_handle, &timer2);

sleep(10);

timer_server_deltimer(timer_handle, timer1.fd);
timer_server_deltimer(timer_handle, timer2.fd);
timer_server_uninit(timer_handle);
Kevin Lee
0

See the clock_gettime(2) man page.

Nikolai Fetissov