
Consider for example the following double-precision numbers:

x = 1232.2454545e-89;
y = -1232.2454545e-89;

Can I be sure that y is always exactly equal to -x (or Matlab's uminus(x))? Or should I expect small numerical differences of the order of eps, as often happens with numerical computations? Try for example sqrt(3)^2-3: the result is not exactly zero. Can that happen with unary minus as well? Is it lossy, like square root is?

Another way to put the question would be: is a negative numerical literal always equal to negating its positive counterpart?

My question refers to Matlab, but probably has more to do with the IEEE 754 standard than with Matlab specifically.

I have done some tests in Matlab with a few randomly selected numbers. I have found that, in those cases, the negative literal and the negated positive value have identical bit patterns: only the sign bit differs from the positive value.

This suggests that the answer may be affirmative. If applying unary minus only changes the sign bit, and not the significand, no precision is lost.

But of course I have only tested a few cases. I'd like to be sure this happens in all cases.

Luis Mendo
  • It seems like this might depend on the [rounding mode](https://en.wikipedia.org/wiki/Floating_point#Rounding_modes). Are you interested in integers in particular or arbitrary floating point values. – horchler Dec 02 '15 at 19:43
  • @horchler Arbitrary floating point values. Yes, I guess rounding toward zero would be needed for the answer to be affirmative. Is it known which rounding mode Matlab uses? – Luis Mendo Dec 02 '15 at 19:45
  • Don't know. The default rounding mode for IEEE-754 is ["symmetric"](https://en.wikipedia.org/wiki/Rounding#Round_half_to_even). I'd guess that it uses that, though it can also be system dependent. – horchler Dec 02 '15 at 20:02
  • Apparently there used to be a means for changing the default rounding and precision via the [undocumented `feature`/`system_dependent` function](http://undocumentedmatlab.com/blog/undocumented-feature-function/). – horchler Dec 02 '15 at 20:21
  • @horchler why would the rounding mode matter? Consider the exact value of x. If -x is exactly representable (and it is, since we just have to change the sign bit, assuming IEEE 754), then no rounding operation occurs. Negation is an exact operation. – aka.nice Dec 02 '15 at 20:41
  • @aka.nice That makes sense, and is the kind of answer that I hope for. Is there any proof that that's the case? Any reference? – Luis Mendo Dec 02 '15 at 20:57
  • I'm pretty sure `-` should just flip the sign bit. Even when parsing strings, I imagine the string `-1.34` gets parsed as `1.34` then the minus operator gets applied to flip the sign bit. So even with parsing strings, it works. For every positive and negative pair I tested, the 52 bit mantissa and 11 bit exponent were exactly the same. Only the bit 64 (i.e. the sign bit) changed. Disclaimer: I'm not any kind of IEEE754 expert. – Matthew Gunn Dec 02 '15 at 22:06
  • @MatthewGunn Same here. I tested a few million random numbers, and in all of them only bit 64 changed. But obviously it's not possible to test all numbers – Luis Mendo Dec 02 '15 at 22:37
  • 1
    For anyone wanting to play around, you can see the underlying binary representation with `reshape(flipud(dec2bin(typecast(x, 'uint8'),8))',1,64)`. Sign first, then 11bit exponent, then 52bit mantissa on my machine. – Matthew Gunn Dec 02 '15 at 22:43
  • 1
    @aka.nice The only time I see that rounding could be a factor is when assigning a double value from a literal with higher than 53-bit precision. Otherwise I agree that since each representable value has a unique representation, and each representation has a unique negative, once you have a value represented, there is no rounding involved in negation. – beaker Dec 03 '15 at 16:40
  • This is not an answer. But as a matter of practice, I would NEVER rely on a direct equality comparison/result when using any type of floating point number. Doing so would I think plant a potential time-bomb in code that might be very, very hard to find later on. I.e. even if the answer turns out to be "yes" you would I think be skating on thin ice to rely on that fact. – Ken Clement Dec 06 '15 at 04:27
  • @KenClement Yes, you are totally right on that. My question was more "philosophical" than anything: does one reduce the space of existing numbers if negative numbers are restricted to being obtained only by negating positive numbers? Or does negating give the same result as if the positive number literal had originally had a minus sign attached? – Luis Mendo Dec 06 '15 at 18:09

1 Answer


This question is computer architecture dependent. However, the sign of floating point numbers on modern architectures (including x64 and ARM cores) is represented by a single sign bit, and they have instructions to flip this bit (e.g. FCHS). That being the case, we can draw two conclusions:

  1. A change of sign can be achieved (and indeed is by modern compilers and architectures) by a single bit flip/instruction. This means that the process is completely invertible, and there is no loss of numerical accuracy.
  2. It would make no sense for MATLAB to do anything other than the fastest, most accurate thing, which is just to flip that bit.

That said, the only way to be sure would be to inspect the assembly code for `uminus` in your MATLAB installation. I don't know how to do this.

user664303