My code
#include <stdio.h>

int main() {
    printf("result1 : %lf %d\n", (1 - (double)((int)1)), (1 - (double)((int)1)));
    return 1;
}
Result
result1 : 0.000000 1
I don't understand this result.
Refer to the printf reference to find that the "%d" format specifier expects an int as its parameter. Yet you pass it a double. This is undefined behavior, meaning anything can happen, including the result you get (for more details on what's likely happening, refer to e.g. What happens to a float variable when %d is used in a printf?).

Instead, try adding a cast to int:
printf("result1 : %lf %d\n", (1 - (double)((int)1)), (int) (1 - (double)((int)1)));
The types of the arguments passed to printf have nothing inherently to do with the format string; it is your responsibility to make sure they match up. In this case, you are passing two double values, but the format string attempts to interpret the second one as an integer. This is undefined behavior.
While the behavior is undefined in the general case, it is likely that you are seeing the sign bit of the IEEE 754 double in a little-endian interpretation of an integer.