Naturally, I had assumed the < and <= operators run at the same speed (per Jonathon Reinhart's logic, here). Recently, I decided to test that assumption, and I was a little surprised by my results.
I know that, for most modern hardware, this question is purely academic, so I had to write test programs that looped about 1 billion times (so that any minuscule difference would add up to a measurable level). The programs were as basic as possible (to cut out all possible sources of interference).
lt.c:
int main() {
    for (int i = 0; i < 1000000001; i++);
    return 0;
}
le.c:
int main() {
    for (int i = 0; i <= 1000000000; i++);
    return 0;
}
They were compiled and run on Linux in VirtualBox (kernel 3.19.0-18-generic #18-Ubuntu, x86_64), using GCC with the -std=c11 flag set.
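For reference, the commands were essentially the following (with no -O flag, GCC's default -O0 keeps the empty loops in the binaries, which is what makes the timing meaningful):

gcc -std=c11 lt.c -o lt
gcc -std=c11 le.c -o le
time ./lt
time ./le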
The average time for lt.c's binary was:
real 0m2.404s
user 0m2.389s
sys 0m0.000s
The average time for le.c's binary was:
real 0m2.397s
user 0m2.384s
sys 0m0.000s
The difference is small, but I couldn't get it to go away or reverse no matter how many times I ran the binaries.
- I made the comparison value in the for-loop of lt.c one larger than the one in le.c, so that both loops run the same number of times (1,000,000,001 iterations each). Was this somehow a mistake?
- According to the answer in Is < faster than <=?, < compiles to jge and <= compiles to jg. That answer was dealing with if statements rather than a for-loop, but could this still be the reason? Could the execution of jge take slightly longer than jg? (I think this would be ironic, since it would mean that moving from C to ASM inverts which operator is the more complicated instruction, with lt in C translating to gte in ASM and lte to gt.) See the assembly sketch after this list.
- Or, is this just so hardware-specific that different x86 lines or individual chips may consistently show the reverse trend, the same trend, or no difference?
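To check the jge/jg hypothesis directly, the generated assembly can be compared with gcc -S. The snippet below sketches the exit branches that the linked answer's mapping would predict; the actual -O0 output may differ (the compiler may even normalize i < 1000000001 into i <= 1000000000 and emit identical code for both loops), so treat the operands and labels as illustrative.

gcc -std=c11 -S lt.c    # writes lt.s
gcc -std=c11 -S le.c    # writes le.s

# Exit branches predicted by the linked answer's mapping (illustrative):
#
#   lt.c:  cmpl  $1000000001, -4(%rbp)
#          jge   .Lexit    # leave the loop when i >= 1000000001 (inverted <)
#
#   le.c:  cmpl  $1000000000, -4(%rbp)
#          jg    .Lexit    # leave the loop when i >  1000000000 (inverted <=)

If the two .s files turn out identical, the timing gap would have to come from measurement noise rather than from the branch instructions.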