Can anybody explain to me how the [.precision]
in printf works with the %g conversion specifier? I'm quite confused by the following output:
double value = 3122.55;
printf("%.16g\n", value); //output: 3122.55
printf("%.17g\n", value); //output: 3122.5500000000002
I've learned that %g uses the shortest representation, but the following outputs still confuse me:
printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819
printf("%.17e\n", value); //output: 3.12255000000000018e+03
printf("%.17f\n", value); //output: 3122.55000000000018190
My question is: why does %.16g give the exact number while %.17g cannot?
It seems that 16 significant digits are enough to print the value accurately. Could anyone tell me the reason?