
Before you link me to Is floating point math broken?, please read the question first. I'm aware of how this works. The question is specific to the two numbers I've found (well, presumably many more such pairs exist, but I would like this specific behavior to be explained).

I have two floating point constants: 1.9 and 1.1. Using Python, I've multiplied both of these by their reciprocals and got the following results:

>>> x = 1.1
>>> y = 1.9
>>> print("%.55f" % x)
1.1000000000000000888178419700125232338905334472656250000
>>> print("%.55f" % y)
1.8999999999999999111821580299874767661094665527343750000
>>> print("%.55f" % (1/x))
0.9090909090909090606302811465866398066282272338867187500
>>> print("%.55f" % (1/y))
0.5263157894736841813099204046011436730623245239257812500
>>> print("%.55f" % ((1/x) * x))
1.0000000000000000000000000000000000000000000000000000000
>>> print("%.55f" % ((1/y) * y))
0.9999999999999998889776975374843459576368331909179687500

I'm of course aware of the issues with floating-point arithmetic as implemented in hardware, but I cannot understand why these numbers show such different behavior. Consider their reciprocals; their exact values are as follows (computed using WolframAlpha, later digits omitted for brevity):

  • 1/x: 0.909090909090909017505915727262383419481191148103102677263...
  • 1/y: 0.526315789473684235129596113576877392185605155259237553584...

Clearly the reciprocals computed by my hardware are correct until around the 15th-16th decimal digit for both numbers; no difference there. What else could be different? How is it possible that (1/x) * x is exactly 1, without any "garbage" at the end?

For the sake of completeness I should perhaps mention that I'm on an x86 PC, so this is all 64-bit IEEE 754 arithmetic:

>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
user4520
  • Possible duplicate of [Is floating point math broken?](http://stackoverflow.com/questions/588004/is-floating-point-math-broken) – Marcus Müller Oct 11 '16 at 20:59
  • @MarcusMüller I think I have clearly stated that I'm aware of how floating point works, but these two numbers in particular show interesting behavior that I would like to be explained. – user4520 Oct 11 '16 at 21:00
  • 1
    I understand your objection, but first of all: I've read your question :) you should probably also add a tag that specifies the programming language you used (probably python-3.something ?). – Marcus Müller Oct 11 '16 at 21:09
  • 1
    but the fact that you wonder why (1/z)*z are different for different z proves that you've *not* fully understood floating point numbers, sorry. – Marcus Müller Oct 11 '16 at 21:11
  • @MarcusMüller The question you've linked to explains the basics of the IEEE 754 format and why the representation has to be inexact (and that this is the source of the errors the OP was reporting); there isn't much to understand here, it's more a matter of being aware that this is how it all works. What I'm missing is an intuition, and I think it's okay to ask to be walked step by step through what's actually happening when these two numbers are multiplied and where the bits go - the post you linked to doesn't provide such an explanation as far as I've seen. – user4520 Oct 11 '16 at 21:19
  • 1
    @MarcusMüller And indeed, I haven't understood the subject fully - otherwise there would be no question to be asked! But I think that the fact that a particular matter is already covered partially elsewhere on SO doesn't mean a new question related to it and asking about something else cannot be started. – user4520 Oct 11 '16 at 21:20
  • ok, I'd love to help you here, but I don't really know how to: you say you can understand the binary format of a IEEE754 64 bit float, right? and you say you've read the answer containing the section "cause of rounding error"; what else would you like to ask? In one case, rounding leads to exactly 1, in the other, it doesn't. – Marcus Müller Oct 11 '16 at 21:22
  • @MarcusMüller I'd like a deeper understanding of what _exactly_ is happening with these two specific numbers. I already know that in one case the result happened to get rounded to one because it was so close, and the other result did not because the difference was large enough, but where does this _difference in differences_ exactly come from? I'm just looking for a detailed analysis of this particular case, more detailed than "the computation is inexact and introduces rounding error so we have this inconsistency here" (this is my current state of knowledge). – user4520 Oct 11 '16 at 21:33
  • 1
    any number slightly smaller than one and one and number slightly larger than one are all possible results of (1/z)*z,which is exactly what the nature of limited-length arithmetics is.So you might feel free to pick all hardware and software implementations of dividing IEEE754-conformly(there's not a single "correct" way),and try until you get exactly your results.Since I don't know which FPU/CPU you're using,or which floating point instructions your scripting language uses and wasn't involved in the implementation of any of these, "it's implementation-specific" is the best I can do – Marcus Müller Oct 11 '16 at 21:50
  • “How is it possible that (1/x) * x is exactly 1, without any "garbage" at the end?” Because in the same way that `/` rounds the mathematical result to the nearest representable double-precision number, `*` rounds the result to the nearest representable number. `1.0` is a representable double-precision number, and it is a very good candidate (although not the only one) for the result of `x * (1 / x)` . – Pascal Cuoq Oct 12 '16 at 10:36

1 Answer


I wrote a Java program, at the end of this answer, that shows what is going on. I used Java because BigDecimal provides a convenient way of doing exact printouts and simple calculations.

IEEE 754 floating point operations act as though the computer first calculates the exact result and then rounds it to the nearest representable value. Using BigDecimal, I can display the exact product, and then show the results of rounding it down and up.
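The same exact-arithmetic check can be sketched in Python (the language the question used), since `Fraction(float)` converts a double to its exact rational value; this is a rough equivalent of what the Java program below does with BigDecimal:

```python
from fractions import Fraction

# Fraction(float) captures the exact binary value stored in the double,
# so this product is the exact mathematical product of the two doubles.
x = 1.1
inv = 1 / x                          # rounded reciprocal, itself a double
exact = Fraction(x) * Fraction(inv)

print(exact == 1)                    # False: the exact product is not 1...
print(float(exact) == (1 / x) * x)   # True: ...but it rounds to the same double
```

Rounding the exact product back to a double (`float(exact)`) reproduces exactly what the hardware multiply returns.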

I did this for each of your inputs, and also for 2.0 to test the program when the product is exactly 1.0.

Here is the output for x:

x=1.100000000000000088817841970012523233890533447265625
1/x=0.90909090909090906063028114658663980662822723388671875
Exact product = 1.00000000000000004743680196125668585607481256497662217592534562686512611406897121923975646495819091796875
Round down = 1
Round down error = 4.743680196125668585607481256497662217592534562686512611406897121923975646495819091796875E-17
Round up = 1.0000000000000002220446049250313080847263336181640625
Round up error = 1.7460780296377462222865152105318744032407465437313487388593102878076024353504180908203125E-16

The exact answer is bracketed by 1.0 and a slightly larger value, but 1.0 is the closer of the two and is therefore the round-to-nearest result of the multiplication.

Here is the output for y:

y=1.899999999999999911182158029987476766109466552734375
1/y=0.52631578947368418130992040460114367306232452392578125
Exact product = 0.99999999999999989774261615294611071381321879759485743989659632495470287238958917441777884960174560546875
Round down = 0.99999999999999988897769753748434595763683319091796875
Round down error = 8.76491861546176475617638560667688868989659632495470287238958917441777884960174560546875E-18
Round up = 1
Round up error = 1.0225738384705388928618678120240514256010340367504529712761041082558222115039825439453125E-16

In this case, the exact product is bracketed by 1.0 and something slightly smaller. The smaller value is closer to the exact result.

Finally:

Exact case=2
1/Exact case=0.5
Exact product = 1.0

In both your test cases, the exact product cannot be represented as a Java double, IEEE 754 64-bit binary floating point. The result is therefore the representable number nearest to the exact product. The difference between the two cases is how close the exact product happened to be to 1.0 versus the other bracketing representable number.
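This bracketing can also be checked directly in Python; a minimal sketch, assuming Python 3.9+ for `math.nextafter` (the `nearest_to_one` helper is mine, not from the question):

```python
import math
from fractions import Fraction

def nearest_to_one(val):
    """Map each candidate double near 1.0 to its exact distance
    from the exact product val * (1/val)."""
    exact = Fraction(val) * Fraction(1 / val)
    below = math.nextafter(1.0, 0.0)   # largest double < 1.0
    above = math.nextafter(1.0, 2.0)   # smallest double > 1.0
    return {d: abs(exact - Fraction(d)) for d in (below, 1.0, above)}

for v in (1.1, 1.9):
    dists = nearest_to_one(v)
    winner = min(dists, key=dists.get)
    print(v, "->", winner)  # 1.1 -> 1.0, 1.9 -> 0.9999999999999999
```

For 1.1 the exact product lies above 1.0 but closer to it than to the next double up; for 1.9 it lies below 1.0 and closer to the next double down, which is why only the first multiplication comes out as exactly 1.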

Here is the program:

import java.math.BigDecimal;

public class Test {
  public static void main(String[] args) {
    testit(1.1, "x");
    testit(1.9, "y");
    testit(2.0, "Exact case");
  }

  private static void testit(double val, String name) {
    // new BigDecimal(double) converts the double's exact binary value to decimal.
    BigDecimal valBD = new BigDecimal(val);
    System.out.println(name + "=" + valBD);
    double inv = 1 / val; // rounded reciprocal, itself a double
    BigDecimal invBD = new BigDecimal(inv);
    System.out.println("1/" + name + "=" + invBD);
    // Exact product of the two doubles, with no rounding.
    BigDecimal exactProduct = valBD.multiply(invBD);
    System.out.println("Exact product = " + exactProduct);
    // Round the exact product to the nearest double, then find the two
    // representable doubles that bracket the exact product.
    double rawRound = exactProduct.doubleValue();
    BigDecimal rawRoundBD = new BigDecimal(rawRound);
    int comp = rawRoundBD.compareTo(exactProduct);
    double down;
    double up;
    if (comp == 0) {
      // The exact product is itself representable (e.g. the 2.0 case).
      return;
    } else if (comp < 0) {
      down = rawRound;
      up = Math.nextUp(down);
    } else {
      up = rawRound;
      down = Math.nextDown(up);
    }
    BigDecimal downBD = new BigDecimal(down);
    BigDecimal upBD = new BigDecimal(up);
    // Distances from the exact product to each bracketing double.
    BigDecimal downError = exactProduct.subtract(downBD);
    BigDecimal upError = upBD.subtract(exactProduct);
    System.out.println("Round down = " + downBD);
    System.out.println("Round down error = " + downError);
    System.out.println("Round up = " + upBD);
    System.out.println("Round up error = " + upError);
    System.out.println();
  }

}
Patricia Shanahan