
As stated in this question (LLVM and GCC, different output for the same code), LLVM and GCC produce different output for the same code.

#include <stdio.h>

#define MAX(a,b) ( (a) > (b) ? (a) : (b) )

int increment() {
    static int i = 42;
    i += 5;
    printf("increment returns %d\n",i);
    return i;
}

int main( int argc, char ** argv ) {
    int x = 50;
    printf("max of %d and %d is %d\n", x,increment(),MAX(x, increment()));
    printf("max of %d and %d is %d\n", x,increment(),MAX(x, increment()));
    return 0;
}

The LLVM output gives:

increment returns 47
increment returns 52
increment returns 57
max of 50 and 47 is 57
increment returns 62
increment returns 67
increment returns 72
max of 50 and 62 is 72

while the GCC output gives:

increment returns 47
increment returns 52
max of 50 and 52 is 50
increment returns 57
increment returns 62
increment returns 67
max of 50 and 67 is 62

Now, in the other question, someone answered that the order of evaluation of function arguments is not specified, which is why the behavior differs. However, if you go through the output carefully, you can see that each compiler does follow a consistent order: GCC is evaluating the arguments to printf() — x, increment(), MAX(x, increment()) — from right to left, while LLVM is evaluating them from left to right. Does anyone know why this is the case? Shouldn't something like the order of evaluation of the arguments to printf() be clearly defined and the same for both compilers?

Also, I just want to clarify that this code is from a tutorial and is aimed at understanding how the C language works. Its purpose is not to be a program that works properly or produces accurate output. The wacky output is intentional, and so is the use of a silly macro like MAX. I'm only trying to understand why there is such a big difference here, thanks!

Julian D
  • Why do you think `printf` should be any different? – SK-logic Sep 04 '14 at 15:36
  • "Shouldn't order of evaluation be clearly defined", well, that is a matter of opinion. Leaving it unspecified opens up chances for optimization, even parallelization (though I don't know if any C compiler can do that). Also it allows old compilers to maintain backwards compatibility to versions prior to standardization, and legacy code written for them. OTOH it is yet another thing in C with which to shoot oneself in the foot. – hyde Sep 04 '14 at 15:46

1 Answer


The evaluation order of function arguments isn't defined by the specification. Compilers are free to evaluate in whatever order they like. Just because a particular implementation happens to do it in a consistent way doesn't change the fact that two different implementations might have different behaviour.

If you want consistent output from different compilers, you need to write code that has well-defined behaviour according to the standard.

Carl Norum
  • Great, thanks, that clears it up a lot. But then why not just standardize everything? Is it because of backwards compatibility issues? – Julian D Sep 04 '14 at 16:28
  • @JulianD: I suspect historically it goes back to the difference between architectures that grow the stack upwards vs. architectures that grow the stack downwards; today, it's more about not over-specifying things, so compilers are free to choose the most efficient variant. – Christoph Sep 04 '14 at 16:42