
What is the safest way to divide two IEEE 754 floating point numbers?

In my case the language is JavaScript, but I guess this isn't important. The goal is to avoid the normal floating point pitfalls.

I've read that one could use a "correction factor" (cf), e.g. 10 raised to some power, for instance 10^10, like so:

(a * cf) / (b * cf)

But I'm not sure this makes any difference for division?

Incidentally, I've already looked at the other floating point posts on Stack Overflow, and I still haven't found a single post on how to divide two floating point numbers. If the answer is that there is no difference between the workarounds for floating point issues when adding and when dividing, then just answer that please.

Edit:

I've been asked in the comments which pitfalls I'm referring to, so I thought I'd just add a quick note here as well for the people who don't read the comments:

When adding 0.1 and 0.2, you would expect to get 0.3, but with floating point arithmetic you get 0.30000000000000004 (at least in JavaScript). This is just one example of a common pitfall.

The above issue is discussed many times here on Stack Overflow, but I don't know what can happen when dividing and if it differs from the pitfalls found when adding or multiplying. It might be that there are no risks, in which case that would be a perfectly good answer.
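
For instance, in a JavaScript console (this is just the addition example from above, as code):

console.log(0.1 + 0.2); // prints 0.30000000000000004 rather than 0.3

What I can't tell is whether a / b can go wrong in a similar way.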

  • What specific pitfall are you alluding to? What's your exact problem? In the general case there's nothing better than `/` to divide two js numbers to get another js number. – Denys Séguret Jul 17 '14 at 08:21
  • There's already a post with a workaround - http://stackoverflow.com/questions/1458633/elegant-workaround-for-javascript-floating-point-number-problem – Jan Biasi Jul 17 '14 at 08:26
  • I should add that all the posts I've found here on SO on the subject have talked about adding or multiplying. But the reason I write a new question is because I don't know if division behaves any differently. I think that constitutes a legitimate question. – Thomas Watson Jul 17 '14 at 08:35
  • Avoiding the "normal floating-point pitfalls" is far too wide a goal, and likely unachievable. What *specific* problems are you trying to avoid? What *specifically* is going wrong with `a / b`? Adding so-called "correction factors" out of sheer superstition is a fairly horrible idea. :-) – Mark Dickinson Jul 17 '14 at 08:56
  • Mark & dystroy: Sorry for not being more clear on the matter. To be honest I don't know exactly what can go wrong, only that floating points can be very tricky. E.g. when adding floating point numbers weird stuff like "0.1 + 0.2 == 0.30000000000000004" happens. I have not been able to figure out if similar stuff can happen when dividing, and in that case how to work around it. I guess my question should have stated that. – Thomas Watson Jul 17 '14 at 09:05
  • just divide them already! – Alnitak Jul 17 '14 at 09:16
  • Alnitak: Ok, just to be sure I understand you correctly, then there are no risks involved in floating point division (like the ones we see for instance when adding)? – Thomas Watson Jul 17 '14 at 09:18
  • Nope - you'll get exactly the same problems as when adding and multiplying - there's nothing "special" or "different" about division. The only mitigation for all "floating point errors" is to conceptually decouple the values _stored_ from the values _presented_ (i.e. use `.toFixed(n)` when outputting). – Alnitak Jul 17 '14 at 09:19
  • "This is just one example of a common pitfall" - this is just the way binary representation of floating point numbers works. You can no more represent 0.1 exactly in binary than you can 1/3 in decimal. It's not a pitfall; can't be avoided. – duffymo Jul 17 '14 at 12:25
  • One pitfall I can think of is doing something like this: `if (b != 0.0) { c = a / b; }` - b could in fact be something very small like 2^-1022, which is likely to be disastrous. – HexedAgain Jul 17 '14 at 17:08
  • @HexedAgain: There is nothing “disastrous” about that. If `a` is small enough that `a/b` rounds to a representable value, that value will result. Otherwise the result will be infinity. Where is the disaster? Why are you scaremongering? – Stephen Canon Jul 17 '14 at 19:18
  • @stephencanon, really!? I'm not sure how you interpret disastrous, but it isn't just software crashing. Disastrous can also be all your users taking legal action because your a / b wiped out all their funds; disastrous can also be planes falling out of the sky, and so on... Sure, in a toy program there is nothing much to worry about other than the odd head-scratching here and there, but if you are releasing software that will be paid for and used by others, division by zero (for all practical purposes) is typically something bad - very bad. – HexedAgain Jul 17 '14 at 20:08
  • @HexedAgain: Yes, really. `a/b` doesn’t wipe out someone’s funds “because infinity”. Planes don’t fall out of the sky “because infinity”. Infinity is not an exceptional value in floating-point, and does not cause bugs on its own. Software that misbehaves when presented with infinity is buggy software, full stop. Consider also that infinity is a strictly better result than any other result that could be returned; when signed integer arithmetic overflows, the result is undefined, but I rarely see people claiming on SO that "`a*b` could be disastrous!” – Stephen Canon Jul 18 '14 at 09:48
  • @StephenCanon, Where did I say "a/b wipes out someone’s funds “because infinity”"? Really, please show me where you got that, because it looks like you're just brow-beating here. Again, I simply don't agree with you. If the expected range of a calculation lies in [A,B] and one computes an answer many orders of magnitude greater than B (or lower than A) because of division by "zero", then this error may well be costly at runtime - NOT because of infinity but because of the end manifestation of this calculation, and the attempt to avoid it with `if (b != 0)` completely failed. – HexedAgain Jul 18 '14 at 11:03
  • If the expected range of the input to a calculation is `[a,b]`, then you either (a) formally prove that the value you are supplying will lie in that range, (b) defensively clamp the input to that range, or possibly (c) do both. Not doing so is a bug, and this is not unique to division, nor is it unique to floating-point. A naive attempt to avoid it with `if (b != 0)` without trying to understand the actual issue is also a bug. The problem is not division. – Stephen Canon Jul 18 '14 at 11:08
  • I never said the issue was with division itself (please show me where I did) - I said the issue was with what you now realise is a naive attempt to guard against the division by "zero" with if (b != 0.0). I also said, and I believe this can be justified, that this is likely to be dangerous. This kind of calculation will likely end up with things happening at runtime that are well beyond the realm of expectation or safety. – HexedAgain Jul 18 '14 at 11:13
  • I am voting to reopen because the "duplicate" does not give a direct answer to this question, whether pre-scaling would improve the accuracy of floating point division. I disagree with the idea of pre-scaling, but I think it is a valid question. – Patricia Shanahan Jul 18 '14 at 15:10

3 Answers


The safest way is to simply divide them. Any prescaling will either do nothing, or increase rounding error, or cause overflow or underflow.

If you prescale by a power of two you may cause overflow or underflow, but will otherwise make no difference in the result.

If you prescale by any other number, you will introduce additional rounding steps on the multiplications, which may lead to increased rounding error on the division result.

If you simply divide, the result will be the closest representable number to the ratio of the two inputs.
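
For example, in JavaScript (a quick sketch of the claims above; the Java program below runs the same experiment across powers of ten):

var a = 2;
var b = 1 / 3;

console.log(a / b);                   // 6 - the correctly rounded quotient
console.log((a * 1024) / (b * 1024)); // 6 - power-of-two scaling is exact, same result
console.log((a * 10) / (b * 10));     // 6.000000000000001 - two extra roundings crept in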

IEEE 754 64-bit floating point numbers are incredibly precise: a difference of one part in almost 10^16 (2^53 is about 9×10^15) can be represented.

There are a few operations, such as floor and exact comparison, that make even extremely low significance bits matter. If you have been reading about floating point pitfalls you should have already seen examples. Avoid those. Round your output to an appropriate number of decimal places. Be careful adding numbers of very different magnitude.
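
For example, in JavaScript (each line illustrates one of those pitfalls; the printed values assume IEEE 754 doubles, which JavaScript numbers are):

console.log(0.1 + 0.2 === 0.3);            // false - exact comparison sees the last bit
console.log(Math.floor((0.7 + 0.1) * 10)); // 7, because 0.7 + 0.1 is 0.7999999999999999
console.log(1e16 + 1 === 1e16);            // true - the 1 is lost next to 1e16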

The following program demonstrates the effects of using each power of 10 from 10 through 1e20 as scale factor. Most get the same result as not multiplying, 6.0, which is also the rational number arithmetic result. Some get a slightly larger result.

You can experiment with different division problems by changing the initializers for a and b. The program prints their exact values, after rounding to double.

import java.math.BigDecimal;

public class Test {
  public static void main(String[] args) {
    double mult = 10;
    double a = 2;
    double b = 1.0 / 3.0;
    // The BigDecimal(double) constructor shows the exact value of the rounded double.
    System.out.println("a=" + new BigDecimal(a));
    System.out.println("b=" + new BigDecimal(b));
    System.out.println("No multiplier result=" + (a / b));
    // Try every power of ten from 10^1 through 10^20 as the scale factor.
    for (int i = 0; i < 20; i++) {
      System.out.println("mult=" + mult + " result=" + ((a * mult) / (b * mult)));
      mult *= 10;
    }
  }
}

Output:

a=2
b=0.333333333333333314829616256247390992939472198486328125
No multiplier result=6.0
mult=10.0 result=6.000000000000001
mult=100.0 result=6.000000000000001
mult=1000.0 result=6.0
mult=10000.0 result=6.000000000000001
mult=100000.0 result=6.000000000000001
mult=1000000.0 result=6.0
mult=1.0E7 result=6.000000000000001
mult=1.0E8 result=6.0
Patricia Shanahan

Floating point division will produce exactly the same "pitfalls" as addition or multiplication operations, and no amount of pre-scaling will fix it - the end result is the end result and it's the internal representation of that in IEEE-754 that causes the "problem".

The solution is to completely forget about these precision issues during the calculations themselves, and to perform rounding as late as possible, i.e. only when displaying the results of the calculation, at the point where the number is converted to a string using the `.toFixed()` function provided precisely for that purpose.
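
For example (a minimal sketch of rounding late):

var result = 0.1 + 0.2;

console.log(result);            // 0.30000000000000004 - the stored value
console.log(result.toFixed(2)); // "0.30" - the presented value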

Alnitak

.toFixed() is not a good solution for dividing floats. In JavaScript, try 4.11 / 100 and you will be surprised:

4.11 / 100 = 0.041100000000000005

Not all browsers get the same results. The right solution is to convert the floats to integers first:

parseInt(4.11 * Math.pow(10, 10)) / (100 * Math.pow(10, 10)) = 0.0411
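
A generalized sketch of the same idea (divideScaled is a hypothetical helper, not a built-in; it assumes both inputs have at most the given number of decimal places):

// Hypothetical helper: scale both operands to integers before dividing.
function divideScaled(a, b, decimals) {
  var scale = Math.pow(10, decimals);
  return Math.round(a * scale) / Math.round(b * scale);
}

console.log(divideScaled(4.11, 100, 2)); // 411 / 10000 = 0.0411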
user634062