
I'm trying to understand why Python shows -0.0 instead of 0.0 in some cases.

E.g. you can try in a notebook:

x = -1.0 * 0 
x # Output: -0.0 

At first I thought it might be a way to represent a value arbitrarily close to 0, but negative.

However, the value is actually non-negative:

x >= 0 # Output: True 
x < 0 # Output: False

Can anybody explain this fact?

  • It is not a "python" thing. It is as such according to the [IEEE 754 double-precision binary floating-point format standard](https://en.wikipedia.org/wiki/Double-precision_floating-point_format). For the purpose of signed zero, see [here](https://en.wikipedia.org/wiki/Signed_zero#:~:text=The%20concept%20of%20negative%20zero,computing%20with%20complex%20elementary%20functions.) – MMZK1526 Aug 10 '22 at 09:30
  • See also [Meaning of "-0.0" in Python](https://scicomp.stackexchange.com/questions/38845/meaning-of-0-0-in-python) – Ocaso Protal Aug 10 '22 at 09:30
  • It is 0. x > 0 is false, x < 0 is false, x == 0 is true, which is what you'd expect from -1.0 * 0. – Christian Sloper Aug 10 '22 at 09:31
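As the comments note, this is IEEE 754 signed zero rather than Python-specific behavior. A minimal sketch illustrating the point: the two zeros compare equal in every ordering comparison, but `math.copysign` (and `repr`) expose the sign bit.

```python
import math

# Multiplying zero by a negative float yields a zero with the sign bit set.
x = -1.0 * 0
print(x)                        # -0.0

# IEEE 754 comparisons treat -0.0 and 0.0 as equal, so -0.0 is not "negative":
print(x == 0.0)                 # True
print(x >= 0)                   # True
print(x < 0)                    # False

# The sign bit is still there; copysign copies it onto another value:
print(math.copysign(1.0, x))    # -1.0  (sign bit of x is set)
print(math.copysign(1.0, 0.0))  #  1.0  (sign bit of +0.0 is clear)
```

So -0.0 does not mean "slightly below zero"; it is exactly zero with a negative sign bit, which comparison operators ignore by design.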

0 Answers