As you probably know, floating-point numbers have finite (not infinite) precision.
In decimal it's pretty obvious what finite precision looks like. For example, if you have 7 digits of available precision, here are some of the numbers you can represent:
123.4567 = 1.234567 × 10²
123.4568 = 1.234568 × 10²
123.4569 = 1.234569 × 10²
But if you wanted to use the number 123.45678, you couldn't; you'd have to choose either 123.4567 or 123.4568.
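You can emulate this 7-digit decimal rounding with Python's `decimal` module — a minimal sketch for illustration (the precision setting here is just to match the example above):

```python
from decimal import Decimal, getcontext

# Limit the context to 7 significant decimal digits,
# matching the 7-digit example above.
getcontext().prec = 7

# Unary + applies the context's precision and rounding.
x = +Decimal("123.45678")
print(x)  # 123.4568 -- the nearest representable 7-digit value
```

The original 8-digit value has to be rounded to one of its 7-digit neighbors, exactly as described.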
But computer floating-point doesn't usually use base 10; most of the time it uses binary. It still has finite precision, but that precision is a fixed number of significant binary bits, not decimal digits.
So, if we look at the available numbers in binary (or, more or less equivalently, hexadecimal), they'll look pretty reasonable — but if we convert them to decimal, they'll look kind of strange.
Here's what I mean. Here's a small sample of the available `float` values, in binary, hexadecimal, and decimal. In single precision (that is, `float`), you're basically allowed 23 bits past the binary point.
| hexadecimal | binary | decimal |
|---|---|---|
| 0x1.91eb84 × 2¹ | 1.10010001111010111000010 × 2¹ | 3.1399998664855957031250 |
| 0x1.91eb86 × 2¹ | 1.10010001111010111000011 × 2¹ | 3.1400001049041748046875 |
| 0x1.91eb88 × 2¹ | 1.10010001111010111000100 × 2¹ | 3.1400003433227539062500 |
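You can reproduce those neighboring values yourself with nothing but the standard library. This sketch (the helper names are mine, not anything standard) round-trips 3.14 through single precision with `struct`, then steps the 32-bit pattern down and up to reach the adjacent `float` values:

```python
import struct

def float32_bits(x: float) -> int:
    """Bit pattern of x after rounding to single precision."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(bits: int) -> float:
    """Interpret a 32-bit pattern as a single-precision float."""
    return struct.unpack("<f", struct.pack("<I", bits))[0]

bits = float32_bits(3.14)
for b in (bits - 1, bits, bits + 1):
    v = bits_to_float(b)
    # .hex() shows the exact binary/hex value; .22f shows the exact decimal
    print(f"{v.hex():>24}  {v:.22f}")
```

On an ordinary IEEE 754 system this prints the three rows of the table above: consecutive bit patterns give "nice, even" hex values but odd-looking decimal ones.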
So even though the numbers in the third column, like 3.1400001049041748046875, do look pretty "weird" (you called them "garbage"), they actually correspond to "nice, even" binary/hexadecimal numbers that the processor is actually using internally.
The bottom line is that if you want to represent 3.14, you can't; you have to choose either 3.139999866485595703125 or 3.1400001049041748046875.
And, finally, since you asked for only 20 significant digits, you got 3.1400001049041748047 (which is that number 3.1400001049041748046875, rounded to 20 significant digits).
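That last rounding step is easy to see directly. A quick sketch, assuming the value passed through single precision on the way in:

```python
import struct

# Round 3.14 to the nearest single-precision value,
# then read it back as an ordinary (double-precision) Python float.
v = struct.unpack("<f", struct.pack("<f", 3.14))[0]

print(f"{v:.20g}")  # 20 significant digits, as in the question
print(f"{v:.22f}")  # the full exact decimal value
```

The first line prints 3.1400001049041748047; the second prints 3.1400001049041748046875, the exact value the processor is actually working with.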