When using an int-to-float implicit conversion, it fails with printf()
#include <stdio.h>

int main(int argc, char **argv) {
    float s = 10.0;
    printf("%f %f %f %f\n", s, 0, s, 0);  /* the 0 arguments are ints */
    return 0;
}
When compiled with gcc -g scale.c -o scale, it outputs garbage:
./scale
10.000000 10.000000 10.000000 -5486124068793688683255936251187209270074392635932332070112001988456197381759672947165175699536362793613284725337872111744958183862744647903224103718245670299614498700710006264535590197791934024641512541262359795191593953928908168990292758500391456212260452596575509589842140073806143686060649302051520512.000000
If I explicitly cast the integer to float, or use 0.0 (which is a double), it works as designed.
#include <stdio.h>

int main(int argc, char **argv) {
    float s = 10.0;
    printf("%f %f %f %f\n", s, 0.0, s, 0.0);  /* 0.0 literals are doubles */
    return 0;
}
When compiled with gcc -g scale.c -o scale, it produces the expected output:
./scale
10.000000 0.000000 10.000000 0.000000
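For reference, the cast variant I mentioned looks like this (only the printf line differs); it prints the same expected output:

#include <stdio.h>

int main(int argc, char **argv) {
    float s = 10.0;
    /* explicitly casting the integer to float also works */
    printf("%f %f %f %f\n", s, (float)0, s, (float)0);
    return 0;
}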
What is happening?
I'm using gcc (Debian 10.2.1-6) 10.2.1 20210110, if that's important.