
Today, while reading a reference book, I encountered an example saying:

echo (int) ((0.1 + 0.7) * 10); // output 7

Because ((0.1 + 0.7) * 10) is internally evaluated as 7.999999… and, when converted to int, results in 7.

I found this to be correct as well (see Codepad).

But when I tried some more examples, I found something strange, like:

echo (int) ((0.2 + 0.7) * 10); // output 9 (Codepad)

echo (int) ((0.7 + 0.7) * 10); // output 14 (Codepad)

And many more. Every time I changed the values, it gave me the correct answer.

I want to know why only ((0.1 + 0.7) * 10) produces a result different from the others.

Is it really strange, or am I missing something?

Ashwini Agarwal
  • This is only happening with `0.1 + 0.7`. Even if we change the order, other numbers evaluate properly. I don't know what is happening here, but this is my favourite question. :) – Yogesh Suthar Apr 24 '13 at 09:24
  • @YogeshSuthar.. that's the thing which really confuses me. – Ashwini Agarwal Apr 24 '13 at 09:28
  • Let's see if there is a *Jon Skeet* of PHP who can answer this. – Rikesh Apr 24 '13 at 09:37
  • This has been answered many times. Check this: http://stackoverflow.com/q/6439140/1687983 – Coder anonymous Apr 24 '13 at 09:42
  • and [this](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – Coder anonymous Apr 24 '13 at 09:42
  • and this http://stackoverflow.com/q/11573378/1687983 – Coder anonymous Apr 24 '13 at 09:43
  • and this http://stackoverflow.com/q/873747/1687983 – Coder anonymous Apr 24 '13 at 09:43
  • this will solve your query i guess.. – Coder anonymous Apr 24 '13 at 09:43
  • You seem to not have tried `(0.6 + 0.7) * 10`. – Daniel Fischer Apr 24 '13 at 09:46
  • Hint: 0.1 in base 2 has infinitely many digits. – Álvaro González Apr 24 '13 at 09:47
  • The short version is: rounding. First, you get the closest `double` value to `0.7` etc. That is sometimes a bit larger than the decimal fraction you typed, sometimes a bit smaller. When you add two such numbers, the result is rounded to the precision available in the type (you have a fixed number of significant digits [in base 2]). That may round up or down. Then the multiplication by 10 causes another rounding step [usually, not always]. If the two roundings go in different directions (one up, one down), they often cancel out; if both round down, you get a result like 7.999... – Daniel Fischer Apr 24 '13 at 09:56
  • @Coderanonymous: None of the pages you link to explicitly state why `.1+.7` is less than 8 but `.2+.7` is not less than 9. That is, although they may provide information by which the question can be answered eventually, they do not actually answer the question. – Eric Postpischil Apr 24 '13 at 13:55

5 Answers


In the common double-precision format, numbers are represented with a sign bit, an 11-bit exponent, and a 53-bit fraction portion that is called a significand. The significand is always a 53-bit non-negative integer divided by 2^52 (which can also be written in binary as one binary digit, a radix point, and 52 more binary digits).

.1 cannot be represented exactly. It is represented with an exponent of -4 and a significand of 7205759403792794 / 2^52. That is, the closest double to .1 is 7205759403792794 • 2^-52 • 2^-4 = 0.1000000000000000055511151231257827021181583404541015625.

The closest double to .7 has a significand of 6305039478318694 / 2^52 and an exponent of -1; it is 6305039478318694 • 2^-52 • 2^-1 = 0.6999999999999999555910790149937383830547332763671875.

When you add these two numbers, the result is 0.7999999999999999611421941381195210851728916168212890625. This is also not exactly representable in a double; it has to be rounded to the nearest representable value and, when you multiply by 10, that has to be rounded again. However, you can see that the sum is less than .8. The final result is less than 8, so conversion to an integer truncates it to 7.

The double nearest .8 is 0.8000000000000000444089209850062616169452667236328125. When you add that to 0.1000000000000000055511151231257827021181583404541015625, the sum is 0.9000000000000000499600361081320443190634250640869140625. As you can see, it is greater than .9. The final result of rounding and multiplying by 10 will be 9 or greater, so conversion to an integer produces 9.

The fact that several other values you tried did not round down is merely happenstance. Every value that is not exactly representable falls somewhere between two representable values, one higher and one lower. Some are closer to the higher value and some are closer to the lower value, and you just happened to pick values that were closer to a higher representable value and were rounded upward.
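The exact values in this answer are easy to verify. Here is a quick sketch in Python, whose `float` is the same IEEE 754 double as PHP's; `Decimal(float)` prints the exact decimal expansion of the stored binary value:

```python
from decimal import Decimal

# Decimal(float) shows the exact value the binary double actually stores.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.7))  # 0.6999999999999999555910790149937383830547332763671875
print(Decimal(0.1 + 0.7))       # slightly below 0.8, so truncation gives 7
print(int((0.1 + 0.7) * 10))    # 7
print(int((0.2 + 0.7) * 10))    # 9
```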

Eric Postpischil

As noted in the PHP documentation:

Never cast an unknown fraction to integer, as this can sometimes lead to unexpected results.

<?php
    echo (int) ( (0.1+0.7) * 10 ); // echoes 7!
?>

So yes, the result seems strange because it is unexpected: the internal representation of (0.1 + 0.7) * 10 is something like 7.99999….

More information about Floating Point Precision in the PHP doc.

Marcassin

It's due to rounding imprecision when working with floats. For example:

0.1 + 0.7 = 0.7999999999
0.7999999999 * 10 = 7.999999999
floor(7.999999999) = 7

The way to fix this is to round before typecasting.
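For example, in Python (which uses the same IEEE 754 doubles as PHP's float, so the behaviour matches):

```python
value = (0.1 + 0.7) * 10
print(value)              # 7.999999999999999
print(int(value))         # 7: casting truncates toward zero
print(int(round(value)))  # 8: round to the nearest integer first, then convert
```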

chandresh_cool

Computers deal in absolutes; they cannot at the bit level explicitly represent a fraction. This is worked around using floating-point representation, which is a method of representing an approximation to a real number.

Unfortunately, some numbers are just impossible to represent 100% accurately and this leads to imprecision when dealing with floating-point arithmetic.

For more information on precisely why this is the case, do some research into floating-point representation. But it's quite a mathematically technical subject.

EDIT

Let me clarify this. We all know that computers deal in binary. They read, write and process bits which are either 1 or 0.

A 32-bit processor will typically divide memory into 4-byte chunks, so the default size for ints and floats is 4 bytes, or 32 bits. Representing a whole number (an int) in binary is easy. The number 8 is: 00000000 00000000 00000000 00001000. But how does a computer represent a decimal number? Remember that it can only see 1s and 0s; it cannot just place a "." in the middle of them!
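The 32-bit pattern for 8 quoted above can be reproduced in Python (just an illustration, not part of the original answer):

```python
# A non-negative int printed as a 32-bit binary pattern: a single 1 bit for 8.
print(format(8, "032b"))  # 00000000000000000000000000001000
```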

Fixed-point representation (e.g., saying the first 16 bits is the integral value (the part before the .), the second 16 bits is the fractional value) significantly limits the range of numbers that can be represented, since it reduces the maximum number to a 16-bit int and potentially wastes all the bits after the "." which may not be needed.

So computers use a technique called floating-point representation, where the number is scaled using an encoded exponent. So part of the number is the exponent and part is the fraction. This massively increases the range of possible numbers compared to fixed-point notation. But some numbers just cannot be represented to complete precision.
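To make the exponent/fraction split concrete, here is a Python sketch (the `double_bits` helper name is mine, not from the answer) that unpacks a double's raw bits with the standard `struct` module:

```python
import struct

def double_bits(x: float):
    """Split an IEEE 754 double into its sign, biased exponent, and fraction fields."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF    # 11-bit biased exponent (bias 1023)
    fraction = bits & ((1 << 52) - 1)  # 52-bit fraction (significand minus the implicit leading 1)
    return sign, exponent, fraction

print(double_bits(8.0))  # (0, 1026, 0): 8 = 1.0 * 2**3, exactly representable
print(double_bits(0.1))  # non-zero fraction: 0.1 is only approximated
```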

This is why any computer system dealing with currencies never stores values as floats (e.g., £1.10 will always be stored as 110p). Any system where precision is essential should perform as much arithmetic as possible on ints and convert to floats only as the last step.
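A minimal sketch of that integer-pence approach, in Python (the prices are made up for illustration):

```python
# Keep currency in integer pence so every addition and multiplication is exact.
price_pence = 110              # £1.10
total_pence = 3 * price_pence  # exact integer arithmetic, no rounding error
print(total_pence)             # 330
print(f"£{total_pence // 100}.{total_pence % 100:02d}")  # £3.30
```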

Note that this is not just a PHP issue, it exists in all languages. E.g., JavaScript:

alert((0.1+0.7)*10); // alerts 7.999999999999999
daiscog

Use intval() instead; it's much more reliable.

silkfire