Or in other words: could a wrong printf / fprintf decimal integer format string (%d, %u, %ld, %lld) cause a program to crash or lead to undefined behavior?
Consider the following lines of code:
#include <iostream>
#include <cstdio>

int main() {
    std::cout << sizeof(int) << std::endl
              << sizeof(long) << std::endl;
    long a = 10;
    long b = 20;
    std::printf("%d, %d\n", a, b);  // %d expects int, but a and b are long
    return 0;
}
Result on a 32-bit architecture:
4
4
10, 20
Result on a 64-bit architecture:
4
8
10, 20
In either case the program prints the expected result. I know that if the long value exceeds the int range, the program prints wrong numbers (which is ugly, but doesn't affect the main purpose of the program), but besides this, could anything unexpected happen?
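For comparison, here is a minimal sketch (not part of the original program) of the same output with format specifiers that match the argument types: %ld is the conversion for long, and an explicit cast to int is the alternative if %d is kept. The values 10 and 20 fit in an int, so the cast is safe in this example.

#include <cstdio>

int main() {
    long a = 10;
    long b = 20;
    // Matching specifier: %ld for a long argument
    std::printf("%ld, %ld\n", a, b);
    // Alternative: keep %d and pass an int explicitly
    std::printf("%d, %d\n", static_cast<int>(a), static_cast<int>(b));
    return 0;
}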