
When I ran the code below, I got the exact same number in two different programs. Is this just a coincidence, or is there some algorithm that produces those additional garbage digits?

#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::setprecision(20);
    std::cout << 3.14f;
}

Output:
3.1400001049041748047

康桓瑋
Vedant Patil
Describing it as an algorithm is generous. A floating point variable has a mantissa, a sign, and an exponent. With a binary (base 2) mantissa, the values of a mantissa that can be represented are the sum of negative powers of 2. But a value of 0.1 (in decimal) cannot be represented as a finite sum of negative powers of two (do it by hand, and you'll see the number of terms is infinite). Essentially the value that is actually stored when you assign `a_float = 3.14f` is the *closest approximation* that your implementation's `float` can represent to 3.14. The "garbage" is the difference. – Peter Oct 21 '22 at 12:35
  • @Peter: It is an algorithm. When performed in conformance with IEEE-754, using the binary32 format, there is a unique determined result. The mathematics is fully specified. – Eric Postpischil Oct 21 '22 at 18:31
    @JasonLiam: Please do not promiscuously close floating-point questions as duplicates of that question. It interferes with elaborating on specific aspects of floating-point behavior and makes no more sense than closing C++ questions as duplicates of a question-and-answer stating C++ is complicated. This post asks a specific question and should get a specific answer. – Eric Postpischil Oct 22 '22 at 01:01
  • @VedantPatil Floating point on most computers works in *binary*, not decimal. Converting binary to hexadecimal, that number you're asking about is 0x1.91eb86 × 2¹. Given the finite precision of a `float`, the next smaller representable number is 0x1.91eb84 × 2¹, and the next bigger number is 0x1.91eb88 × 2¹. Converting back to decimal, those numbers are 3.139999866485595703125, 3.1400001049041748046875, and 3.14000034332275390625. They look like "garbage", but they're not: binary fractions always look like that, when you convert them back to decimal. – Steve Summit Oct 23 '22 at 14:34

2 Answers


No, it is not just a coincidence.

In the format most commonly used for float, the representable value closest to 3.14 is 13,170,115 / 2²². This equals 3.1400001049041748046875. Rounding this to twenty decimal digits gives 3.1400001049041748047.

The mathematics of those operations is the same every time. However, the C++ standard is not strict about the format used for float, about the conversion of 3.14f in source code to a float, or about the conversion performed when formatting for output, so different C++ implementations may perform operations other than the correct rounding above and hence may give different results. These particular results, though, are the ones obtained with best practice, and they are not just coincidence.

Eric Postpischil

As you probably know, floating-point numbers have finite (not infinite) precision.

In decimal it's pretty obvious what finite precision looks like. For example, if you have 7 digits of available precision, here are some of the numbers you can represent:

123.4567 = 1.234567 × 10²
123.4568 = 1.234568 × 10²
123.4569 = 1.234569 × 10²

But if you wanted to use the number 123.45678, you couldn't; you'd have to choose either 123.4567 or 123.4568.

But computer floating-point doesn't usually use base 10; most of the time it uses binary. And it has finite precision — but the finite precision is limited to a certain number of binary bits of significance, not decimal digits.

So, if we look at the available numbers in binary (or, more or less equivalently, hexadecimal), they'll look pretty reasonable — but if we convert them to decimal, they'll look kind of strange.

Here's what I mean. Below is a fragmentary range of available float numbers, in hexadecimal, binary, and decimal. In single precision (that is, float), you're basically allowed 23 bits past the binary point.

hexadecimal        binary                            decimal
0x1.91eb84 × 2¹    1.10010001111010111000010 × 2¹    3.1399998664855957031250
0x1.91eb86 × 2¹    1.10010001111010111000011 × 2¹    3.1400001049041748046875
0x1.91eb88 × 2¹    1.10010001111010111000100 × 2¹    3.1400003433227539062500

So even though those numbers like 3.1400001049041748046875 in the third column do look pretty "weird" (you called them "garbage"), they actually correspond to "nice, even" binary/hexadecimal numbers that the processor is actually using internally.

The bottom line is that if you want to represent 3.14, you can't — you have to choose either 3.139999866485595703125 or 3.1400001049041748046875.

And, finally, since you asked for only 20 significant digits, you got 3.1400001049041748047 (which is that number 3.1400001049041748046875, rounded to 20 significant digits).

Steve Summit