#include <stdio.h>
#include <stdarg.h>
#include <sys/time.h>

char kBuff[1024];
const char* kMsg = "0123456789 abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const long kThreshold = 100; // us

// Format the arguments into kBuff and report the elapsed wall-clock time
// whenever it exceeds kThreshold microseconds.
void func(const char* fmt, ...) {
    struct timeval start, end;
    gettimeofday(&start, NULL);

    va_list ap;
    va_start(ap, fmt);
    vsnprintf(kBuff, sizeof(kBuff) - 1, fmt, ap);
    va_end(ap);

    gettimeofday(&end, NULL);
    long during = (end.tv_sec - start.tv_sec) * 1000 * 1000 + (end.tv_usec - start.tv_usec);
    if (during > kThreshold)
        printf("%ld, ", during);
}

int main() {
    long index = 0;
    for (int i = 0; i < 1000000; i++) {
        func("Index:%8ld Msg:%s", index++, kMsg);
    }
}
I run this quite simple piece of code 1,000,000 times in a loop, and the running time of the measured block sometimes varies greatly, occasionally reaching 1000+ us. A sample of the values that exceed the threshold:
105, 106, 135, 115, 121, 664, 135, 1024, 165, 130,
The program runs in an Ubuntu 18.04 virtual machine on Windows 10 and is compiled with:

g++ -ggdb -O2 test.cpp
The timed region is only this part of func():

gettimeofday(&start, NULL);

va_list ap;
va_start(ap, fmt);
vsnprintf(kBuff, sizeof(kBuff) - 1, fmt, ap);
va_end(ap);

gettimeofday(&end, NULL);
The code above does not fall into the kernel, does not wait for I/O, and does not wait on locks, so why can the running time reach 1000+ us?
One reason I guessed was scheduling by the operating system, but how do I prove this?
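One way I can think of to test this guess is to read the process's context-switch counters around the measured region and see whether the big outliers coincide with a non-zero involuntary-switch delta. This is only a sketch, assuming Linux's getrusage() with RUSAGE_SELF; timed_call and gBuff are just placeholder names, not part of my original program:

#include <stdio.h>
#include <stdarg.h>
#include <sys/time.h>
#include <sys/resource.h>

char gBuff[1024];

// Like func(), but also records how many voluntary/involuntary context
// switches happened to this process during the measured region.
void timed_call(const char* fmt, ...) {
    struct timeval start, end;
    struct rusage ru_start, ru_end;

    getrusage(RUSAGE_SELF, &ru_start);
    gettimeofday(&start, NULL);

    va_list ap;
    va_start(ap, fmt);
    vsnprintf(gBuff, sizeof(gBuff) - 1, fmt, ap);
    va_end(ap);

    gettimeofday(&end, NULL);
    getrusage(RUSAGE_SELF, &ru_end);

    long us      = (end.tv_sec - start.tv_sec) * 1000 * 1000 + (end.tv_usec - start.tv_usec);
    long nivcsw  = ru_end.ru_nivcsw - ru_start.ru_nivcsw; // involuntary switches (preemption)
    long nvcsw   = ru_end.ru_nvcsw  - ru_start.ru_nvcsw;  // voluntary switches (blocking)

    if (us > 100)
        printf("%ld us, involuntary: %ld, voluntary: %ld\n", us, nivcsw, nvcsw);
}

int main() {
    long index = 0;
    for (int i = 0; i < 1000000; i++)
        timed_call("Index:%8ld Msg:%s", index++, "test message");
}

If the slow iterations consistently show a positive involuntary-switch count, that would point at the scheduler (or the hypervisor descheduling the vCPU) rather than at vsnprintf itself.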
If this is the reason, then how can I accurately measure the running time of the code?
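For the measurement itself, would something like the following make sense? It compares wall-clock time against the CPU time actually charged to the calling thread, the idea being that a large gap between the two means the thread was scheduled out rather than the code being slow. Again just a sketch, assuming clock_gettime() with CLOCK_MONOTONIC and CLOCK_THREAD_CPUTIME_ID is usable and reasonably precise inside the VM; diff_us is a helper I made up:

#include <stdio.h>
#include <time.h>

static char buff[1024];

// Difference b - a in microseconds.
static long diff_us(const struct timespec* a, const struct timespec* b) {
    return (b->tv_sec - a->tv_sec) * 1000 * 1000 + (b->tv_nsec - a->tv_nsec) / 1000;
}

int main() {
    struct timespec wall_start, wall_end, cpu_start, cpu_end;

    for (int i = 0; i < 1000000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &wall_start);
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu_start);

        // the code under test
        snprintf(buff, sizeof(buff), "Index:%8d Msg:%s",
                 i, "0123456789 abcdefghijklmnopqrstuvwxyz");

        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu_end);
        clock_gettime(CLOCK_MONOTONIC, &wall_end);

        long wall = diff_us(&wall_start, &wall_end);
        long cpu  = diff_us(&cpu_start, &cpu_end);

        // If wall time spikes while CPU time stays small, the thread was
        // most likely preempted (or the vCPU was not running) during the call.
        if (wall > 100)
            printf("wall: %ld us, cpu: %ld us\n", wall, cpu);
    }
    return 0;
}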