63

I have this code in C where I've declared 0.1 as double.

#include <stdio.h> 
int main() {
    double a = 0.1;

    printf("a is %0.56f\n", a);
    return 0;
}

This is what it prints: `a is 0.10000000000000001000000000000000000000000000000000000000`

Same code in C++,

#include <iostream>
using namespace std;
int main() {
    double a = 0.1;

    printf("a is %0.56f\n", a);
    return 0;
}

This is what it prints: `a is 0.1000000000000000055511151231257827021181583404541015625`

What is the difference? From what I've read, both are allotted 8 bytes, so how does C++ print more digits in the decimal places?

Also, how can it go up to 55 decimal places? IEEE 754 floating point has only 52 bits for the fractional part, with which we get about 15 decimal digits of precision. It is stored in binary. How come its decimal interpretation stores more?

Glorfindel
  • 11
    Your C++ example seems to be missing include for the `printf`. – user694733 Oct 05 '18 at 07:34
  • 7
    I think the question is rather why gcc and g++ give different results? They shouldn't. – Lundin Oct 05 '18 at 07:35
  • Tell us what compilers you are using, and what compilation options do you use for them. In other words, we need [mcve]. – user694733 Oct 05 '18 at 07:37
  • 8
    To use `printf` you need to include `<stdio.h>`. – Cheers and hth. - Alf Oct 05 '18 at 07:45
  • 6
    @user694733 This is a MCVE. Compile with for example `gcc -std=c11 -pedantic-errors` and `g++ -std=c++11 -pedantic-errors`. I'm able to reproduce the behavior on Mingw. – Lundin Oct 05 '18 at 07:45
  • @Cheersandhth.-Alf Umm yeah that's also a good question. Why does g++ allow the C++ snippet to compile in pedantic mode? – Lundin Oct 05 '18 at 07:47
  • 1
    g++/gcc (GCC) 8.2.1 both give `0.10000000000000000555111512312578270211815834045410156250` so what is displayed beyond the 15-17 significant digit capability of the floating point type looks to be implementation defined. – David C. Rankin Oct 05 '18 at 07:49
  • 4
    15 decimal digits of precision does not mean that the numbers you can represent have at most 15 decimal digits. For instance, the value of bit 50 is exactly 4.44089209850062616169452667236328125E-16. – molbdnilo Oct 05 '18 at 07:49
  • 1
    @Lundin: C++ allows a standard library header to drag in other headers, C does not. Thanks. I'm adding that to my answer. – Cheers and hth. - Alf Oct 05 '18 at 07:52
  • 2
    In my tests, after fixing `<iostream>` to `<stdio.h>`, changing between `-std=c++11` and `-std=gnu++11` also made a difference in output. – user694733 Oct 05 '18 at 08:09
  • cannot reproduce it myself, but one difference could be that g++ quietly includes the math library, would "gcc .. -lm" change the result? –  Oct 05 '18 at 08:54
  • @jakub_d: No difference with MinGW gcc 7.3.0. – Cheers and hth. - Alf Oct 05 '18 at 09:19
  • IEEE754 binary64 has 53 digits of precision, unless the number is a denormal. It's just that 52 of them are explicitly stored. – Ruslan Oct 05 '18 at 12:51
  • @Cheersandhth.-Alf: In C++, shouldn't that be `<cstdio>` instead of `<stdio.h>`? – Rudy Velthuis Oct 05 '18 at 13:15
  • @Ruslan: 53 *binary* digits (bits) of precision, not 53 decimal digits. In the context of floating point numbers, if people talk about digits of precision, they usually mean *decimal* digits. – Rudy Velthuis Oct 05 '18 at 13:18
  • FWIW, 52 or 53 bits of precision, or 16-17 decimal digits of precision, do not mean the exact value can't have many more digits. That is because in decimal, the 50th bit has a value of 1/2^51 = 0.000000000000000444089209850062616169452667236328125, i.e. more than 17 significant digits. Combine that with other values, and you get lots of digits. Use a very negative exponent (negative powers of 2), and you get many more than 50 or 100. This is due to how binary fractions are converted to decimal fractions... – Rudy Velthuis Oct 05 '18 at 13:34
  • 1
    The *double* value with hex representation `0x3010000000000002` has the **exact** value, if represented in decimal, of `3.454467422037779384246248650659707670875136078985650983764329044145425818624069944158995101925120434404807679244694952058664153682444814042435767268547887037222881378197807344720457643706852668419315932624158449470996856689453125e-77`. The exact decimal representation of `0x0010000000000002` is far too long to print it in a comment that can only contain 600 characters. – Rudy Velthuis Oct 05 '18 at 13:38
  • @RudyVelthuis right. I was assuming binary since we're talking about representations of floating-point numbers. Anyway, that's 15 (`numeric_limits<double>::digits10`) to 17 (`numeric_limits<double>::max_digits10`) decimal digits, depending on the direction of guaranteed round-tripping you need and the actual value you want to approximate. – Ruslan Oct 05 '18 at 14:20
  • @Ruslan: 15-17 sounds about right. https://www.exploringbinary.com/number-of-digits-required-for-round-trip-conversions/ – Rudy Velthuis Oct 05 '18 at 15:55
  • 1
    @DavidC.Rankin: What gcc 8.2 is printing here happens to be the **exact** decimal representation of the value stored in `a`, i.e. the `double` closest to 0.1. – Edgar Bonet Oct 06 '18 at 18:23

2 Answers

81

With MinGW g++ (and gcc) 7.3.0 your results are reproduced exactly.

This is a pretty weird case of Undefined Behavior.

The Undefined Behavior is due to using `printf` without including an appropriate header¹, violating the “shall” in

C++17 §20.5.2.2

A translation unit shall include a header only outside of any declaration or definition, and shall include the header lexically before the first reference in that translation unit to any of the entities declared in that header. No diagnostic is required.

In the C++ code change <iostream> to <stdio.h>, to get valid C++ code, and you get the same result as with the C program.


Why does the C++ code even compile?

Well, unlike C, in C++ a standard library header is allowed to drag in any other header. And evidently with g++ the <iostream> header drags in some declaration of printf. Just not an entirely correct one.

Details: With MinGW g++ 7.3.0 the declaration/definition of printf depends on the macro symbol __USE_MINGW_ANSI_STDIO. The default is just that <stdio.h> declares printf. But when __USE_MINGW_ANSI_STDIO is defined as logical true, <stdio.h> provides an overriding definition of printf, that calls __mingw_vprintf. And as it happens the <cstdio> header defines (via an indirect include) __USE_MINGW_ANSI_STDIO before including <stdio.h>.

There is a comment in <_mingw.h>, "Note that we enable it also for _GNU_SOURCE in C++, but not for C case.".

In C++, with relevant versions of this compiler, there is effectively a difference between including <stdio.h> and using printf, or including <cstdio>, saying using std::printf;, and using printf.


Regarding

Also, how can it go up to 55 decimal places? IEEE 754 floating point has only 52 bits for the fractional part, with which we get about 15 decimal digits of precision. It is stored in binary. How come its decimal interpretation stores more?

... it's just the decimal presentation that's longer. The digits beyond roughly the first 15–17 significant digits of a 64-bit IEEE 754 value carry no extra precision, but they are not random either: together they spell out the exact decimal expansion of the binary value stored in `a`, which is why they can be used to reconstitute the original bits exactly. At some point they become all zeroes, and that point is reached for the last digit in your C++ program output.


¹ Thanks to Dietrich Epp for finding that standards quote.

Cheers and hth. - Alf
  • 1
    Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackoverflow.com/rooms/181386/discussion-on-answer-by-cheers-and-hth-alf-why-does-double-in-c-print-fewer-d). – Samuel Liew Oct 06 '18 at 08:48
  • When a floating-point number is smaller than one, the 15-digit precision may not be the case, see https://www.mathworks.com/help/matlab/ref/realmin.html – John Z. Li Oct 08 '18 at 09:06
10

It looks to me like both cases print 56 decimal digits, so the question is technically based on a flawed premise.

I also see that both numbers are equal to 0.1 within 53 bits of precision, so both are correct.

That leads to your final question, "How come its decimal interpretation stores more?". It doesn't store more decimals. `double` doesn't store any decimals. It stores bits. The decimals are generated.

MSalters
  • 7
    Only one of those numbers is equal to the IEEE754 representation of `0.1`, though (that is, the closest machine number to 0.1). – Federico Poloni Oct 05 '18 at 09:47
  • 3
    I agree with 1. and 3. points, however regarding point 2, I vaguely remember that this came up before, and that `printf` is required by the C standard to print all requested digits *exactly*, except for implementation-defined rounding of the last output digit. (The second output is correct for an IEEE 754 `double` as pointed out by Federico Poloni.) See e.g. [this previous question](https://stackoverflow.com/questions/24120888/why-printf-round-floating-point-numbers), specifically the answers of Yu Hao and dasblinkenlight. – Arne Vogel Oct 05 '18 at 10:49
  • 1
    `s/52/53/`: IEEE754 binary64 has 53 digits of precision, unless the number is a denormal. It's just that 52 of them are explicitly stored. – Ruslan Oct 05 '18 at 12:50
  • 4
    @Ruslan -- to be absolutely clear, IEEE 754 64-bit values have 53 **bits** (base 2) of precision. That's about 15 **digits** (base 10). – Pete Becker Oct 05 '18 at 13:24