As your specifiers don't match the types of the actual arguments, you invoke undefined behaviour. That means anything can happen; in particular, the behaviour may change between runs of the program (if the format specifiers read more data than was actually provided) or at least between compiler settings.
What happens internally depends on many factors, such as the size of int and much more. In any case, it is nothing you can rely on.
What really happens here is: floats are automatically promoted to double when being passed to a variadic function, which changes their length to 8 bytes.
I modified your program in this way:
#include <stdio.h>
int main(void) {
    float a = 3;   /* promoted to double when passed to printf() */
    int b = 5;
    /* the specifiers below are deliberately wrong, to expose the raw bytes */
    printf("%08x %08x %08x\n", a, b);
    printf("%08x %08x %08x\n", b, a);
    printf("%d %d %d\n", a, b);
    printf("%d %d %d\n", b, a);
    return 0;
}
which gives the output
00000000 40080000 00000005
00000005 00000000 40080000
0 1074266112 5
5 0 1074266112
So we can see exactly which bytes are passed via the stack to printf(). Because of the little-endian byte order, the 32-bit chunks appear swapped when interpreted via %08x; the actual bytes in memory are
00 00 00 00 00 00 08 40 05 00 00 00
05 00 00 00 00 00 00 00 00 00 08 40
If we now use the wrong specifiers, we get the mapping
00 00 00 00 -> 00000000 -> 0
00 00 08 40 -> 40080000 -> 1074266112
05 00 00 00 -> 00000005 -> 5
which is then output.
If I omit one %d, the last value is dropped from the output as well, giving

0 1074266112
5 0

respectively.
So the reason why your b value seems to change is that, in the first case, you actually get the "other" half of your a value.