24

Can anybody explain to me how the [.precision] in printf works with the specifier "%g"? I'm quite confused by the following output:

double value = 3122.55;
printf("%.16g\n", value); //output: 3122.55
printf("%.17g\n", value); //output: 3122.5500000000002

I've learned that %g uses the shortest representation.

But the following outputs still confuse me:

printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819
printf("%.17e\n", value); //output: 3.12255000000000018e+03
printf("%.17f\n", value); //output: 3122.55000000000018190

My question is: why does %.16g give the exact number while %.17g can't?

It seems the first 16 significant digits are accurate. Could anyone tell me the reason?

cssmlulu
  • 337
  • 1
  • 5
  • 11

4 Answers

20

%g uses the shortest representation.

Floating-point numbers usually aren't stored in base 10, but in base 2 (for reasons of performance, size, and practicality). However, whatever the base of the representation, there will always be rational numbers that cannot be expressed exactly within a fixed storage size.

When you specify %.16g, you're saying that you want the shortest representation of the number given with a maximum of 16 significant digits.

If the shortest representation has more than 16 digits, printf will shorten the digit string by dropping the final digit (the 2) from the end, leaving you with 3122.550000000000, which is actually 3122.55 in its shortest form, explaining the result you obtained.

In general, %g will always give you the shortest result possible, meaning that if the sequence of digits representing your number can be shortened without any loss of precision, it will be done.

To continue the example, when you use %.17g the 17th significant digit is nonzero (a 2 in this case), so you end up with the full number 3122.5500000000002.

My question is: why %.16g gives the exact number while %.17g can't?

It's actually %.17g that gives you the exact result, while %.16g gives you only a rounded approximation with an error (when compared to the value in memory).

If you want a more fixed precision, use %f or %F instead.

Roberto Caboni
  • 7,252
  • 10
  • 25
  • 39
user35443
  • 6,309
  • 12
  • 52
  • 75
  • I know there are rounding errors when representing a float. What confuses me is that the first 16 significant digits seem to be exactly what I want, without rounding error. Why? – cssmlulu Jun 05 '15 at 05:46
  • @cssmlulu The error is at the `17`th. `%.16g` says that you care only about the first `16` digits, and the last one is ignored. When having a lot of zeroes (not significant digits) at the end, `%g` cuts them off, making it just as short as `3122.55`. – user35443 Jun 05 '15 at 05:50
  • 2
    @cssmlulu In addition to @user35443's comment, `%g` uses the number of significant digits as its precision specifier. It is more explained here: [What is the difference between %g and %f in C?](http://stackoverflow.com/a/5913115/4895040). One of significant figures rule states that [ALL zeroes between non-zero numbers are ALWAYS significant.](http://www.usca.edu/chemistry/genchem/sigfig.htm), which will apply if you use `%.17g` on `3122.5500000000002` since zeroes between `5` and `2` are considered significant. That's why those zeroes are included. – raymelfrancisco Jun 05 '15 at 05:53
  • It doesn't really undermine the main thrust of the argument here, but note that your claim that *"`%g` uses the shortest representation."* is wrong. See https://stackoverflow.com/q/54162152/1709587, where I rebut precisely that claim. – Mark Amery Jan 12 '19 at 18:14
  • No, the `%.17g` does not give the exact representation. In fact, there is no way to store `3122.55` in a float exactly. The reality is that that number is stored as (`printf '%a\n' 3122.55`) `0x1.865199999999ap+11` which is mathematically exactly equal to `3122.550000000000181898940354585647583007812500`. However, it makes no sense to talk about "mathematically exact" of an approximation (rounded by ieee754). –  Sep 25 '21 at 17:40
7

The decimal value 3122.55 can't be exactly represented in binary floating point. When you write

double value = 3122.55;

you end up with the closest possible value that can be exactly represented. As it happens, that value is exactly 3122.5500000000001818989403545856475830078125.

That value to 16 significant figures is 3122.550000000000. To 17 significant figures, it's 3122.5500000000002. And so those are the representations that %.16g and %.17g give you.

Note that the nearest double representation of a decimal number is guaranteed to be accurate to at least 15 decimal significant figures. That's why you need to print to 16 or 17 digits to start seeing these apparent inaccuracies in your output in this case - to any smaller number of significant figures, the double representation is guaranteed to match the original decimal number that you typed.

One final note: you say that

I've learned that %g uses the shortest representation.

While this is a popular summary of how %g behaves, it's also wrong. See What precisely does the %g printf specifier mean? where I discuss this at length, and show an example of %g using scientific notation even though it's 4 characters longer than not using scientific notation would've been.

Mark Amery
  • 143,130
  • 81
  • 406
  • 459
2

The decimal representation 3122.55 cannot be exactly represented by binary floating point representation.

A double precision binary floating point value can represent approximately 15 significant figures (note not decimal places) of a decimal value correctly; thereafter the digits may not be the same, and at the extremes do not even have any real meaning and will be an artefact of the conversion from the floating point representation to a string of decimal digits.

I've learned that %g uses the shortest representation.

The rule is:

Where P is the precision (or 6 if no precision is specified, or 1 if the precision is zero), and X is the decimal exponent required for e/E style notation, then:

  • if P > X ≥ −4, the conversion is with style f or F and precision P − 1 − X.
  • otherwise, the conversion is with style e or E and precision P − 1.

The modification of precision for %g results in the different output of:

printf("%.16g\n", value); //output: 3122.55
printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819

despite having the same precision in the format specifier.

Clifford
  • 88,407
  • 13
  • 85
  • 165
0

The representation in memory of the decimal 3122.55 is the nearest binary fraction with a 53-bit mantissa.

printf("%a\n", value);     // output 0x1.865199999999ap+11

And the exact conversion back to a decimal is:

printf("%.45f\n", value);  // output 3122.550000000000181898940354585647583007812500000

If you round that number to 17 significant digits you get:

printf("%.17g\n", value);  // output 3122.5500000000002

And at 16 digits, all the trailing digits are 0 and can safely be dropped (which the g conversion does automatically by default) to get:

printf("%.16g\n", value);  // output 3122.55

That is why you get back the original decimal number.