
Given the statements

float f = 7.1f;
double d = f;

What can we assert in a unit test about d?


For example, this does not work:

Console.WriteLine(d == 7.1d); // false
Console.WriteLine(d < 7.1d + float.Epsilon); // true by luck
Console.WriteLine(d > 7.1d - float.Epsilon); // false (less luck)

The best way I found so far is to convert the value back:

float f2 = (float)d;
Console.WriteLine(f2 == f); // true

Which would be the same as the blunt way of saying

Console.WriteLine(d == 7.1f); // 7.1f implicitly converted to double as above

This question is NOT about double and float precision in general, but really JUST about the pragmatic question of how a unit test can best describe the confines of d. In my case, d is the result of a conversion that occurs in code emitted by lightweight code generation. While testing this code generation, I have to make assertions about the outcome of this function, and that finally boils down to the simple question above.

Salman
citykid
  • Which unit testing framework are you using? You might not necessarily assert against a condition you derive e.g. `Assert.IsTrue(condition)`, you might be able to use e.g. `Assert.AreEqual(value1, value2)` that could handle equivalence between numeric formats. – StuperUser Dec 19 '12 at 13:03
  • I'm still not sure what you're trying to test here. A double has more precision than a float and they will thus hardly ever be "equal" to each other. – CodeCaster Dec 19 '12 at 13:07
  • If you omit the final f in the constant used for initializing the float, it might be even worse, see a crafted example here http://stackoverflow.com/questions/13276862/c-c-notation-of-double-floating-point-values/13279512#13279512 – aka.nice Dec 19 '12 at 14:17

3 Answers


Your "best way" is asserting that your generated code returns something that is, within float's margin of error, 7.1. This may be what you want to check, in which case, carry on.

On the other hand, you might want to assert that your generated code returns specifically the result of casting 7.1f to a double, in which case you could do:

Console.WriteLine(d == (double)f);

This is more stringent - your test asserts that d is within a small range, while the above test asserts that d is a specific value.

It really depends on what you'll be using d for. If it's a case where things will go wrong if it's not the exact value, test the exact value, but if it's OK to be within a float of the value, check against the float.
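To make the difference concrete, here is a minimal sketch (plain console output, no test framework assumed; the nudge value is artificial) of a double that passes the loose round-trip test but fails the strict equality test:

```csharp
using System;

class StrictVsLoose
{
    static void Main()
    {
        float f = 7.1f;
        double d = f;

        // Strict: d must be exactly the double produced by widening f.
        Console.WriteLine(d == (double)f);        // True

        // Loose: d only has to round-trip back to the same float.
        // A perturbation far below float's precision near 7.1 (~4.8e-7):
        double nudged = d + 1e-12;
        Console.WriteLine(nudged == (double)f);   // False - strict test fails
        Console.WriteLine((float)nudged == f);    // True  - loose test passes
    }
}
```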

Rawling
  • Rawling, that is it for me. I expect my code to convert 7.1f into a double, so my assertion now is d.Should().Be((double)7.1f); that makes my expectation most clear. thx for your input. – citykid Dec 19 '12 at 13:05

To compare two floating-point values, IBM suggests testing abs(a/b - 1) < epsilon,

and MSDN states that the Epsilon property reflects the smallest positive value that is significant in numeric operations or comparisons when the value of the instance is zero.

So actually you should check

Math.Abs(d / (double)f - 1) < float.Epsilon
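A sketch of this relative comparison as a helper. Note the caveat: float.Epsilon (about 1.4e-45) is so small that as a relative threshold it effectively only accepts exact equality, so the tolerance below is an illustrative choice of my own, not a value from the answer:

```csharp
using System;

class RelativeCompare
{
    // Relative comparison in the |a/b - 1| < epsilon style.
    // relTol = 1e-6 is an arbitrary illustrative tolerance.
    static bool NearlyEqual(double a, double b, double relTol = 1e-6)
        => Math.Abs(a / b - 1) < relTol;

    static void Main()
    {
        float f = 7.1f;
        double d = f;                             // d ~ 7.0999999046325684
        Console.WriteLine(NearlyEqual(d, 7.1));   // True  (relative error ~1.3e-8)
        Console.WriteLine(NearlyEqual(d, 7.2));   // False (relative error ~1.4e-2)
    }
}
```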
Dmitrii Dovgopolyi
  • thx for the hint from ibm: If you don't know the scale of the underlying measurements, using the test "abs(a/b - 1) < epsilon" is likely to be more robust than simply comparing the difference. – citykid Dec 19 '12 at 13:27
  • +1. Yes, it makes no sense to add epsilon to a possibly big number. It would have no effect at all. This would only work for fixed point numbers. – Olivier Jacot-Descombes Dec 19 '12 at 15:34
  • The document you point to is published by IBM, but it is identified as authored by a person and corporation other than IBM. So it is not clear IBM is making that suggestion, anymore than a book publisher agrees with everything the authors they publish state. And it is not entirely fair to accuse IBM of promoting this sloppy practice based on that. – Eric Postpischil Dec 19 '12 at 18:45
  • @EricPostpischil how exactly is this 'sloppy'? To me, it looks like a fairly reasonable approximation of how to test the underlying question. – RonLugge Aug 27 '13 at 15:54
  • @RonLugge: Reasons it is sloppy are too numerous and complex to detail in a comment. Suffice it to say that this technique decreases “false” not-equal results at the expense of increasing false is-equal results, the value `epsilon` should generally not be the `Epsilon` this answer refers to, a relative threshold is not the correct criterion to use if the error derives from values other than the final result, this test is inappropriate for the situation in this question where exact equality may be tested for, and software generally ought to be designed to avoid a need for these comparisons. – Eric Postpischil Aug 27 '13 at 16:04

(float) d == f.

Another answer suggested d == (double) f, but this is a useless test, because (double) f performs the same conversion that d = f implicitly performs. The only things this assertion could be testing are: whether some aspect of the implementation is broken (e.g., the compiler implemented one of the conversions incorrectly, and differently from the other), whether some external mechanism altered d or f between the assignment and the assertion, or whether the source code is broken so that d is neither double nor float nor any type that can hold the value of f exactly, or the assignment d = f was never performed.

Generally, we expect no floating-point error, because, in every normal implementation of floating-point, converting from a narrower precision to a wider precision of the same radix has no error, since the wider precision can represent every value the narrower precision can. In uncommon situations, a wider floating-point format might have a smaller exponent range. Only in this case, or in perversely defined floating-point formats, could converting to a wider format cause a change in value. In these cases, performing the same conversion would not detect the change.

Instead, we convert from the wider format back to the narrower format. If d differs from f, this conversion has a chance of detecting the error. E.g., suppose f contained 0x1p-1000, but, for some reason, that is not representable in the format of d, so it was rounded to zero. Then (float) d == f evaluates to (float) 0 == 0x1p-1000, then to 0 == 0x1p-1000, then to false. Additionally, this test may detect the same errors as the other suggestion: a broken implementation, alteration of d or f, an incorrect type of d, and a missing assignment of d = f.
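A minimal sketch of this round-trip check (the corruption below is artificial, just to show the kind of error the test would catch):

```csharp
using System;

class RoundTrip
{
    static void Main()
    {
        float f = 7.1f;
        double d = f;

        // Widening float -> double is exact, so the round trip recovers f.
        Console.WriteLine((float)d == f);           // True

        // If d had been altered between assignment and assertion,
        // the round trip catches it:
        double corrupted = d + 0.001;
        Console.WriteLine((float)corrupted == f);   // False
    }
}
```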

Other than that, what errors would you be trying to detect with an assertion here?

Eric Postpischil
  • It's worthwhile to note that conversion from `float` to `Decimal`, or `double` to `Decimal` may be lossy *even for values which are precisely representable in both formats*. For example, converting `16777215f` to `Decimal` yields a value of 16777220, even though the `float` precisely represents the value 16,777,215, and a `Decimal` can also hold that value. – supercat Dec 21 '12 at 19:06
  • @supercat: “… in every normal implementation, converting… to a wider precision **of the same radix** has no error…”. – Eric Postpischil Dec 21 '12 at 19:11
  • It's true that one cannot not expect every `float` value to be *representable* in `Decimal`, given that they use different radix, and one certainly cannot not expect a conversion to be precise in cases where the destination format has no representation for the value in question. My intention was not to contradict you, but rather to emphasize that if radixes differ, even values which are precisely representable in old and new formats may convert strangely (perhaps the rounding of 16777215f to 16777220m is documented, but it seems odd to say the least). – supercat Dec 21 '12 at 22:17
  • @supercat: If a conversion from `float` to `Decimal` produces a different value than the original `float` even though it is exactly representable in `Decimal`, then the software that performs the conversion is defective. – Eric Postpischil Aug 27 '13 at 16:08
  • My particular example above was defective, since I'd been using a wrong conversion method, and the real problems don't appear until numbers get larger, but what happens is that the conversion from `Double` to `Decimal` assumes that a `double` value like 12345678.900000000372529 (which would be the best representation for values between 12345678.89999999944120646 and 12345678.9000000013038516) is "more likely" to be intended to represent 12345678.9 exactly than 12345678.900000000372529. The routine is pretty sloppy, though, since it rounds off what should clearly be relevant bits of precision. – supercat Aug 27 '13 at 16:41