
I am calling argon2, a memory-intensive hashing function, in Qt and measuring its running time:

...
QTime start = QTime::currentTime();
// call hashing function
QTime finish = QTime::currentTime();
time = start.msecsTo(finish) / 1000.0;
...

In the argon2 library's test case, time is measured differently:

...
clock_t start = clock();
// call hashing function
clock_t finish = clock();
time = ((double)finish - start) / CLOCKS_PER_SEC;
...

I am calling the function exactly as they do in their test case, but I am getting a number twice as big (twice as slow). Why? How should I measure a function's running time in Qt? What does clock() actually measure?

Environment: VirtualBox, Ubuntu 14.04 64-bit, Qt 5.2.1, Qt Creator 3.0.1.
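For reference, a minimal self-contained sketch that measures the same work both ways side by side (the hashOnce() busy loop below is only a placeholder for the actual argon2 call); if the two numbers diverge, the difference is between wall-clock time and CPU time:

#include <QTime>
#include <QDebug>
#include <ctime>

// Placeholder for the argon2 call: any CPU-bound work shows the same effect.
static void hashOnce()
{
    volatile double x = 0;
    for (long i = 0; i < 200000000L; ++i)
        x += i * 0.5;
}

int main()
{
    QTime wallStart = QTime::currentTime();
    clock_t cpuStart = clock();

    hashOnce();

    clock_t cpuFinish = clock();
    QTime wallFinish = QTime::currentTime();

    double wallSeconds = wallStart.msecsTo(wallFinish) / 1000.0;        // wall-clock time
    double cpuSeconds  = double(cpuFinish - cpuStart) / CLOCKS_PER_SEC; // CPU time

    qDebug() << "wall-clock:" << wallSeconds << "s, clock():" << cpuSeconds << "s";
    return 0;
}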

Bobur
  • Possible duplicate of [c++ / Qt - computation time](http://stackoverflow.com/questions/9943439/c-qt-computation-time) – PsiX Dec 15 '16 at 10:35
  • Dunno what this "argon2" thing is but it doesn't look too adequate... The proper way to do it is to use `QElapsedTimer`, which is trivial to use and as accurate as possible given your platform. – dtech Dec 15 '16 at 12:49
  • @PsiX edited. I wanted to know the cause of the difference between the times – Bobur Dec 16 '16 at 01:29
  • @Bobur Ok, so why did you accept an answer that doesn't answer that question? – PsiX Dec 16 '16 at 08:06
  • @PsiX, one suggested I use QElapsedTimer, the other suggested not to use clock(), and both explained their point. Now I know what to do. I would like to accept both answers but I cannot. But I'm wondering why you are asking this? What's wrong with that? – Bobur Dec 19 '16 at 01:41
  • @Bobur You should only accept answers which actually answer your question, or update your question to better fit the answer you accepted. "But I am getting a number twice as big (twice as slow). Why?" Do you have the answer to that? The answer you accepted is already a duplicate of your second question ("How should I measure a function's running time in Qt?"). You should clarify what you're asking. – PsiX Dec 19 '16 at 09:50

2 Answers


You could also try using QElapsedTimer:

#include <QElapsedTimer>
#include <QDebug>

QElapsedTimer timer;
timer.start();

slowOperation1();  // the code being timed

qDebug() << "The slow operation took" << timer.elapsed() << "milliseconds";
qDebug() << "The slow operation took" << timer.nsecsElapsed() << "nanoseconds";

Documentation of QElapsedTimer
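Applied to the snippet from the question, it would look something like this (the hashing call is a placeholder):

#include <QElapsedTimer>

QElapsedTimer timer;
timer.start();

// call hashing function

double seconds = timer.elapsed() / 1000.0;  // wall-clock time in seconds

Unlike QTime::currentTime(), QElapsedTimer uses a monotonic clock where the platform provides one, so the measurement is not affected by the system clock being adjusted while the code runs.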

Tom Conijn

clock() isn't suitable for measuring wall-clock time spent in functions. It returns the number of ticks the whole program has spent on the CPU; it doesn't count blocking I/O operations or sleeps, only the ticks during which your program is actually running (processing) on the CPU. If you put a sleep in your code you give up the CPU, and that time is not counted by clock(). For wall-clock time you have to use time() or gettimeofday(), or the more accurate rdtsc assembly instruction.
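A minimal sketch of that difference, assuming a POSIX system (the question mentions Ubuntu): across a one-second sleep, gettimeofday() reports roughly a full second of wall-clock time while clock() barely advances, because the process is not on the CPU.

#include <cstdio>
#include <ctime>
#include <sys/time.h>
#include <unistd.h>

int main()
{
    timeval wallStart, wallFinish;
    gettimeofday(&wallStart, nullptr);
    clock_t cpuStart = clock();

    sleep(1);  // the process is off the CPU for about one second

    clock_t cpuFinish = clock();
    gettimeofday(&wallFinish, nullptr);

    double wallSeconds = (wallFinish.tv_sec - wallStart.tv_sec)
                       + (wallFinish.tv_usec - wallStart.tv_usec) / 1e6;
    double cpuSeconds = double(cpuFinish - cpuStart) / CLOCKS_PER_SEC;

    // Typical output: wall-clock ~1.000 s, clock() ~0.000 s
    std::printf("wall-clock: %.3f s, clock(): %.3f s\n", wallSeconds, cpuSeconds);
    return 0;
}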

Look at these questions:

clock() accuracy

Why is CLOCKS_PER_SEC not the actual number of clocks per second?

In the Qt sources you will see that Qt uses gettimeofday() to implement QTime::currentTime() under Unix: https://github.com/radekp/qt/blob/master/src/corelib/tools/qdatetime.cpp, line 1854

e.jahandar
  • That's largely untrue. It was true a long time ago when CPUs ran at a fixed frequency. Today they don't, and clock() is implemented in a different way, and is fairly accurate if you can settle for millisecond resolution. – dtech Dec 15 '16 at 12:45
  • @ddriver C says clock "returns the implementation's best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation." http://stackoverflow.com/a/9871772/4490542 – e.jahandar Dec 15 '16 at 12:49
  • @e.jahandar If I run two functions for, say, 1 second, the first CPU-intensive (makes a lot of computations) and the other memory-bound (makes a lot of memory block reads/writes), and measure their time with clock(), do you mean that clock() shows a longer time for the CPU-intensive function than for the second one even if the actual running time is the same? – Bobur Dec 16 '16 at 01:51
  • @e.jahandar One more thing: you suggested "I have to use time() or gettimeofday()", but I am using QTime::currentTime(), which (as you mentioned) uses gettimeofday(). Then it seems I am doing the right thing, isn't it? – Bobur Dec 16 '16 at 02:02
  • And a typo at the very end: it's line 1845 – Bobur Dec 16 '16 at 02:05
  • @Bobur As you said, clock() doesn't count non-CPU operations, like I/O operations, but not memory; memory operations like cache misses are counted. QTime::currentTime() is the right thing – e.jahandar Dec 16 '16 at 07:16