I'm trying to understand why Python shows -0.0 instead of 0.0 in some cases.
For example, you can try this in a notebook:
x = -1.0 * 0
x # Output: -0.0
At first I thought it might be a way to represent a value arbitrarily close to 0 but negative.
However, comparisons show that the value is not actually negative:
x >= 0 # Output: True
x < 0 # Output: False
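If it helps, here is what I see when poking at the value a little more; I'm using math.copysign only as a way to inspect the sign, assuming that's the right tool:

import math

x = -1.0 * 0
x == 0.0                 # Output: True  (compares equal to positive zero)
math.copysign(1.0, x)    # Output: -1.0  (yet the sign is somehow still there)
str(x)                   # Output: '-0.0'

So the two zeros compare equal, but they are clearly not the same value when printed.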
Can anybody explain this behavior?