To the best of my understanding, on Intel x86 (e.g., the Ice Lake microarchitecture) I would expect AND on two unsigned integers to be faster than IDIV (strictly, DIV for unsigned operands) on the same two integers. However, when I write a program to actually measure the time, it is hard to spot the difference.
To measure time I use time.h, and the code is basically as follows:
unsigned int integer_A = rand();
unsigned int integer_B = rand() | 1u;              /* keep the divisor nonzero */
volatile unsigned int sink = 0;                    /* so the optimizer cannot drop the loop */
clock_t start = clock();
for (size_t i = 0; i < (1u << 26); i++)
    sink += integer_A & integer_B;                 /* replace & with % to time the division */
clock_t end = clock();
double elapsed_time = (double)(end - start) / CLOCKS_PER_SEC;
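For completeness, a self-contained version that anyone should be able to compile and run looks roughly like this (the volatile accumulator keeps the optimizer from deleting the loop, and the | 1u keeps the divisor nonzero):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    srand((unsigned int)time(NULL));
    unsigned int integer_A = rand();
    unsigned int integer_B = rand() | 1u;          /* nonzero divisor */
    volatile unsigned int sink = 0;                /* keeps the loop from being removed */

    clock_t start = clock();
    for (size_t i = 0; i < (1u << 26); i++)
        sink += integer_A & integer_B;             /* replace & with % to time the division */
    clock_t end = clock();

    double elapsed_time = (double)(end - start) / CLOCKS_PER_SEC;
    printf("sink = %u, elapsed = %.3f s\n", sink, elapsed_time);
    return 0;
}

So far I have been comparing the two operations by running this once with & and once with %.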
How could I better set up the measurement so that the result shows AND being faster than IDIV (if that is indeed the case)?
I understand that time.h measurements are imperfect. But what is the best I can do, within a program that anyone can run on their laptop, to demonstrate that AND is faster than IDIV?
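One variant I have been considering (just a sketch on my part, and I am not sure it is methodologically sound) is to force a loop-carried dependency, so that each result must be available before the next operation can start, and to time & and % back to back in the same run:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Time one operator over 2^26 iterations with a loop-carried dependency:
   each result feeds the next iteration, so iterations cannot overlap. */
static double time_op(int use_mod, unsigned int start_val, unsigned int divisor) {
    volatile unsigned int sink;
    unsigned int acc = start_val;
    clock_t t0 = clock();
    for (size_t i = 0; i < (1u << 26); i++)
        acc = use_mod ? (acc % divisor) + 1u : (acc & divisor) + 1u;
    clock_t t1 = clock();
    sink = acc;                                    /* forces the compiler to keep the loop */
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    srand((unsigned int)time(NULL));
    unsigned int start_val = rand();
    unsigned int divisor = (unsigned int)rand() | 1u;   /* nonzero divisor */
    printf("& : %.3f s\n", time_op(0, start_val, divisor));
    printf("%% : %.3f s\n", time_op(1, start_val, divisor));
    return 0;
}

My thinking is that with independent iterations, as in the original loop, the processor can overlap several divisions at once, which might be why the difference is hard to see; but I am not sure this reasoning is correct, hence the question.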