// value will always be in the range of [0.0 - maximum]

float obtainRatio(float value, float maximum){
    if(maximum != 0.f){
        return value / maximum;  
    }else{
        return 0.f;
    }
}

maximum can be any float, including a negative number. value can also be anything, though the function is only required to make sense when value is in the range [0.0 - maximum]. The output should always be in the range [0.0 - 1.0].

I have two questions about this:

  • Is this equality comparison enough to ensure the function never divides by zero?
  • If maximum is a degenerate value (extremely small or extremely large), is there a chance the function will return a result outside of [0.0 - 1.0] (assuming value is in the right range)?
Anne Quinn
  • Read up on IEEE 754 which (most) implementations of floating point values follow. http://en.wikipedia.org/wiki/IEEE_floating_point – caskey May 06 '14 at 21:58
  • Tangental: http://stackoverflow.com/questions/5095968/does-float-have-a-negative-zero-0f – user2864740 May 06 '14 at 22:00
  • 1
    @Clairvoire With the constraint `value in [0.0 … maximum]`, the only division by zero that can happen is `0.0 / 0.0`, which does not crash unless you set up your FPU so that it will (try it) but returns NaN, which it may make sense to test immediately or to let propagate further (the operations were designed to allow the latter strategy). – Pascal Cuoq May 06 '14 at 22:03
  • 2
    You either ought to make more noise when the caller passes nonsense or produce reasonable nonsense. Infinity is a lot less nonsensical than 0. Don't help. – Hans Passant May 06 '14 at 22:05
  • @HansPassant - Very true! In this case though, 0.0 isn't an uncommon maximum to be passed. This particular function is for a capacity measure for drawing gauges. Where the 'ratio' is 'how much gauge to draw'. So a ratio of 0.0 for a capacity of 0.0 is wanted in this case – Anne Quinn May 06 '14 at 22:12
  • 6
    Clarifying question: `float m = some_computation; if (m != 0.0) return some_other_computation / m;` Is it possible that `m` is in an 80 bit high precision float register and non-zero at the time of the comparison to zero, but kicked out of registers and back into 64 bit float, truncating to zero, before the division? **Is a conforming compiler allowed to perform this register scheduling**? – Eric Lippert May 06 '14 at 23:12
  • The input parameters are floats. The smallest non-zero float is 2^(-149), which is stored as a denormalized number, which would appear as hex 00000001. When converted to double precision (or 80 bit format), it will be converted to a normal double precision (or 80 bit) number. – rcgldr May 06 '14 at 23:57
  • @PascalCuoq those are some great details in your comments, you should add an answer or perhaps edit some of those details into the existing answer. I am always wary of important details in comments. – Shafik Yaghmour May 07 '14 at 12:36
  • @EricLippert I have moved my comments to an answer. – Pascal Cuoq May 07 '14 at 13:15
  • Why don't you test your function with extreme values? Even better, write unit tests for it. – Daniel Daranas May 07 '14 at 13:34
  • 3
    @DanielDaranas: Though of course that is a good idea, unit tests alone are insufficient in a world where the compiler (or in the case of languages like C#, the runtime) can do crazy things to change the precision of floating point operations on the fly. Unit tests only tell you how the program behaved once, not how it is allowed to possibly behave in the future. – Eric Lippert May 07 '14 at 15:27

2 Answers


Here is a late answer clarifying some concepts in relation to the question:

Just return value / maximum

In floating-point, division by zero is not a fatal error like integer division by zero is. Since you know that value is between 0.0 and maximum, the only division by zero that can occur is 0.0 / 0.0, which is defined as producing NaN. The floating-point value NaN is a perfectly acceptable value for function obtainRatio to return, and is in fact a much better exceptional value to return than the 0.0 your proposed version returns.

Superstitions about floating-point are only superstitions

There is nothing approximate about the definition of <= between floats. a <= b does not sometimes evaluate to true when a is just a little above b. If a and b are two finite float variables, a <= b evaluates to true exactly when the rational represented by a is less than or equal to the rational represented by b. The only little glitch one may perceive is actually not a glitch but a strict interpretation of the rule above: +0.0 <= -0.0 evaluates to true, because “the rational represented by +0.0” and “the rational represented by -0.0” are both 0.

Similarly, there is nothing approximate about == between floats: two finite float variables a and b make a == b true if and only if the rational represented by a and the rational represented by b are the same.

Within a if (f != 0.0) condition, the value of f cannot be a representation of zero, and thus a division by f cannot be a division by zero. The division can still overflow. In the particular case of value / maximum, there cannot be an overflow because your function requires 0 ≤ value ≤ maximum. And we don't need to wonder whether ≤ in the precondition means the relation between rationals or the relation between floats, since the two are essentially the same.

This said

C99 allows extra precision for floating-point expressions, which has been in the past wrongly interpreted by compiler makers as a license to make floating-point behavior erratic (to the point that the program if (m != 0.) { if (m == 0.) printf("oh"); } could be expected to print “oh” in some circumstances).

In reality, a C99 compiler that offers IEEE 754 floating-point and defines FLT_EVAL_METHOD to a nonnegative value cannot change the value of m after it has been tested. The variable m was set to a value representable as float when it was last assigned, and that value either is a representation of 0 or it isn't. Only operations and constants can have excess precision (See the C99 standard, 5.2.4.2.2:8).

In the case of GCC, recent versions do what is proper with -fexcess-precision=standard, implied by -std=c99.

Further reading

  • David Monniaux's description of the sad state of floating-point in C a few years ago (first version published in 2007). David's report does not try to interpret the C99 standard but describes the reality of floating-point computation in C as it was then, with real examples. The situation has much improved since, thanks to improved standard-compliance in compilers that care and thanks to the SSE2 instruction set that renders the entire issue moot.

  • The 2008 mailing list post by Joseph S. Myers describing the then current GCC situation with floats in GCC (bad), how he interpreted the standard (good) and how he was implementing his interpretation in GCC (GOOD).

Pascal Cuoq
  • My one nit would be that bringing reals in is a distraction. Finite floats are all rationals. – Eric Lippert May 07 '14 at 14:23
  • @EricLippert Reals are simpler! Rationals have strange properties (not all Cauchy sequences have a limit, not all quadratic equations with Δ>0 have two roots, …). But joking aside, I agree and I have changed all occurrences of “real” to “rational”. – Pascal Cuoq May 07 '14 at 14:40

With the limited range required here, it should be OK. In general, checking for zero first prevents division by zero, but there is still a chance of overflow if the divisor is close to zero and the dividend is large. In this case the dividend is small whenever the divisor is small, so both can be close to zero without causing overflow.

rcgldr