63

I encountered negative zero in output from Python; it's created, for example, as follows:

k = 0.0
print(-k)

The output will be -0.0.

However, when I compare -k to 0.0 for equality, it yields True. Is there any difference between 0.0 and -0.0? (I don't care that they presumably have different internal representations; I only care about their behavior in a program.) Are there any hidden traps I should be aware of?
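A minimal snippet showing both behaviours at once — the two compare equal (and even hash identically), yet the sign survives in the printed form:

```python
k = 0.0
neg = -k

print(neg == 0.0)              # True -- equality ignores the sign of zero
print(neg)                     # -0.0 -- but repr/str preserve it
print(hash(neg) == hash(0.0))  # True -- interchangeable as dict keys
```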

wjandrea
max
  • It does not give a negative value with Python 2.5.4 – Ankit Jaiswal Nov 03 '10 at 10:51
  • 1
    The real hidden trap is when you start testing for equality with floating point values. They're inexact and prone to weird round-off discrepancies. – Sean McSomething Nov 03 '10 at 21:52
  • But it does print negative value on Python 2.7.1. – syntagma Mar 04 '13 at 21:30
  • 3
    This problem came up in a real life gps application; longitude just slightly west of the meridian was being reported as zero degrees and x minutes, when it should have been minus zero degrees and x minutes. But python can't represent integer negative zero. – secret squirrel Sep 01 '16 at 09:20

7 Answers

48

Check out −0 (number) on Wikipedia.

Basically, IEEE 754 does define a negative zero.

And by this definition, for all practical purposes:

-0.0 == +0.0 == 0

I agree with aaronasterling that -0.0 and +0.0 are different objects. Making them compare equal (with the equality operator) ensures that subtle bugs are not introduced into the code.
Think of a * b == c * d

>>> a = 3.4
>>> b = 4.4
>>> c = -0.0
>>> d = +0.0
>>> a*c
-0.0
>>> b*d
0.0
>>> a*c == b*d
True

[Edit: More info based on comments]

When I said "for all practical purposes", I had chosen the words rather hastily. I meant standard equality comparison.

As the reference says, the IEEE standard defines comparison so that +0 = -0, rather than -0 < +0. Although it would be possible always to ignore the sign of zero, the IEEE standard does not do so. When a multiplication or division involves a signed zero, the usual sign rules apply in computing the sign of the answer.

Operations like divmod() and atan2() exhibit this behavior. In fact, atan2() complies with the IEEE definition, as does the underlying C library.

>>> divmod(-0.0,100)
(-0.0, 0.0)
>>> divmod(+0.0,100)
(0.0, 0.0)

>>> math.atan2(0.0, 0.0) == math.atan2(-0.0, 0.0)
True 
>>> math.atan2(0.0, -0.0) == math.atan2(-0.0, -0.0)
False

One way to find out is through the documentation: check whether the implementation complies with IEEE behavior. It also seems from the discussion that there are subtle platform variations too.

However, this aspect (IEEE definition compliance) has not been respected everywhere. See the rejection of PEP 754 due to disinterest! I am not sure if this was picked up later.
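(As a side note: PEP 754 itself was rejected, but Python 3.5 later added the related special-value constants and predicates to the math module:)

```python
import math

# Python 3.5+ exposes the IEEE 754 special values directly:
print(math.inf, -math.inf, math.nan)

# ...along with predicates for testing them:
print(math.isinf(-math.inf))  # True
print(math.isnan(math.nan))   # True
```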

See also What Every Computer Scientist Should Know About Floating-Point Arithmetic.

john-hen
pyfunc
  • @aaronasterling: Why did you remove your answer? Thats was a valuable addition to information here. I just upvoted it. – pyfunc Nov 03 '10 at 02:49
  • because I was wrong about the last part of it and the rest of it wasn't really unique to my post. – aaronasterling Nov 03 '10 at 04:05
  • If it's "equal for all purposes", how does that explain the difference in `atan2` in Craig McQueen's answer? I agree that it returns True when compared for equality, but if the two numbers' behavior may vary, I would like to know when. – max Nov 03 '10 at 18:41
  • @max Note that the arctangent function is basically looking for the slope (and direction) of the provided arguments, so internally it's dividing by zero leading to discontinuities that should not be surprising. Furthermore, the function output is cyclic with a period of 2π, +π and -π are the "same". – Nick T Mar 10 '14 at 21:19
22

math.copysign() treats -0.0 and +0.0 differently, unless you are running Python on a weird platform:

math.copysign(x, y)
     Return x with the sign of y. On a platform that supports signed zeros, copysign(1.0, -0.0) returns -1.0.

>>> import math
>>> math.copysign(1, -0.0)
-1.0
>>> math.copysign(1, 0.0)
1.0
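This makes copysign() the usual building block for detecting a negative zero explicitly. A small sketch (the helper name here is made up for illustration):

```python
import math

def is_negative_zero(x):
    # Equality alone can't tell the zeros apart, so recover the
    # sign bit with copysign() and combine the two checks.
    return x == 0.0 and math.copysign(1.0, x) < 0.0

print(is_negative_zero(-0.0))  # True
print(is_negative_zero(0.0))   # False
print(is_negative_zero(-1.0))  # False (negative, but not zero)
```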
Azat Ibrakov
Alex Trebek
17

It makes a difference in the atan2() function (at least, in some implementations). In my Python 3.1 and 3.2 on Windows (which are based on the underlying C implementation, according to the CPython implementation detail note near the bottom of the Python math module documentation):

>>> import math
>>> math.atan2(0.0, 0.0)
0.0
>>> math.atan2(-0.0, 0.0)
-0.0
>>> math.atan2(0.0, -0.0)
3.141592653589793
>>> math.atan2(-0.0, -0.0)
-3.141592653589793
Craig McQueen
15

Yes, there is a difference between 0.0 and -0.0 (though Python won't let me reproduce it :-P). If you divide a positive number by 0.0, you get positive infinity; if you divide that same number by -0.0 you get negative infinity.

Beyond that, though, there is no practical difference between the two values.
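Since Python raises ZeroDivisionError rather than returning infinities, one way to see signed zeros interact with division at the Python prompt is to go in the opposite direction — dividing by signed infinities yields signed zeros:

```python
import math

# Dividing by signed infinity produces a correspondingly signed zero:
print(1.0 / float('inf'))    # 0.0
print(1.0 / float('-inf'))   # -0.0
print(math.copysign(1.0, 1.0 / float('-inf')))  # -1.0
```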

C. K. Young
  • 4
    You can't divide by 0. If you're talking about talking limits, -0 makes even less sense. – Falmarri Nov 03 '10 at 01:04
  • 4
    -1 You can't divide a number by 0 since you get a ZeroDivisionError. That means that there is no difference. – Dominic K Nov 03 '10 at 01:04
  • 11
    @Falmarri: In Python, you can't; in other languages, you very well can. I was addressing the distinction between 0.0 and -0.0 in a general floating-point processing sense. – C. K. Young Nov 03 '10 at 01:04
  • 1
    @Chris- However, this question is about Python. Almost all languages throw an error anyways. – Dominic K Nov 03 '10 at 01:07
  • @DMan: "Almost" all languages? You surely jest. Try the following expressions in JavaScript and Ruby: `1/0.0`, `1/-0.0`. – C. K. Young Nov 03 '10 at 01:09
  • 7
    +1 to cancel out the downvotes. Chris is correct that, e.g., in C, floating point division by 0.0 is defined to produce infinity with the sign of (numerator and denominator have same sign) ? positive : negative. – AlcubierreDrive Nov 03 '10 at 01:10
  • @Chris- Those are only two languages. Let me stand by C# and Python. – Dominic K Nov 03 '10 at 01:13
  • 1
    @DMan: The point of my post is this: support for infinity is built in to most floating-point processors. Disallowing division by zero doesn't come "for free", but must be designed into the language. So, for better or worse, many languages just don't bother. Also, thank goodness, C# and Python are not "almost all languages", nor even close to it. – C. K. Young Nov 03 '10 at 01:15
  • @Chris- I'm not saying that infinity isn't built into most floating-point processor, but rather, many languages tend to throw an exception because there really isn't any point of 1)Dividing by zero and 2)Using positive/negative infinity anyways. And thank goodness, Javascript especially isn't a representation of all languages. I definitely wouldn't be programming if it was. – Dominic K Nov 03 '10 at 01:17
  • 3
    You're forgetting that `**` has higher precedence than `-`. `(-0.0) ** 0` gives `1.0`. – dan04 Nov 03 '10 at 01:30
  • 1
    @dan04: Point. (+1) I'll revert my post, then, and let the infinities stand on their own infinite feet. ;-) – C. K. Young Nov 03 '10 at 01:31
  • @Chris - I guess I was talking mathematically, not about what languages do with floating point numbers. I would undownvote, but you have +4 so it evens out – Falmarri Nov 03 '10 at 06:02
  • 1
    @DMan: The standard for floating point allows these things. "there really isn't any point" doesn't mean anything, since they are defined and standardized. http://en.wikipedia.org/wiki/−0_(number) The standard exists, is implemented, and defines these things clearly and completely. – S.Lott Nov 03 '10 at 10:15
  • @S.Lott- Please tell me a practical use of dividing by zero – Dominic K Nov 03 '10 at 23:45
  • @S.Lott- Like I said in my previous, previous comment, I understand that it's built in. Many languages decide to throw the exception because there is no practical use of dividing by zero and is most likely a fault. Please read my comment again. – Dominic K Nov 04 '10 at 01:15
  • @S.Lott- Like I said in my previous comment, I said I understood it's built in (and thus exists). So I have never denied that it exists. I actually think you're arguing about an entirely different matter. My argument was that many languages throw an exception on dividing by zero even if it is possible because it is not what you intended. It seems like you're getting off topic. – Dominic K Nov 04 '10 at 01:29
  • @S.Lott- "You can't divide a number 0 since you get a ZeroDivisonError. That means that there is no difference". True. Since you can't divide by zero, there is no visible difference. "there really isn't any point of 1)Dividing by zero and 2)Using positive/negative infinity". True. Please write a useful piece of code for me that divides by zero and uses positive infinity. You won't. Standard defined or not, they are simply not useful. That's all. – Dominic K Nov 04 '10 at 01:42
  • @S.Lott- Understanding after the third time is useful as well. – Dominic K Nov 04 '10 at 02:11
  • @S.Lott- I also understand that the standard defines the positive/negative infinity. I just don't understand why that's important. – Dominic K Nov 04 '10 at 19:30
  • 4
    @DMan: It's important that (a) they exist and (b) there's an implementation. (Even if it's partial.) Because you (and I) don't see the complex mathematical subtleties doesn't mean anything. They still exist. I don't understand partial differential equations, and see no practical value. Some people do. I see limited practical value in the standard. That's not the point. My humble opinion on "practical" has no merit. It still exists, and it still has meaning, and it's still partially implemented. – S.Lott Nov 04 '10 at 21:10
  • @S.Lott- I don't understand what you are getting at. – Dominic K Nov 04 '10 at 21:17
  • @S.Lott- Thank you for elaborating. Since this is getting to be a pretty long line of comments, I'll just say that I'll keep my perspective, and you can keep yours. – Dominic K Nov 04 '10 at 21:27
  • 2
    @DMan You want division by ±0.0 because the float +0.0 doesn't represent mathematical 0. It represents a range of numbers close to 0. Arithmetic underflow is one way you can get the value +0.0 without mathematically reaching 0. – leewz Apr 23 '18 at 01:03
  • 2
    I happened to have an unread tab open about the purpose of signed zeroes: https://softwareengineering.stackexchange.com/questions/280648/why-is-negative-zero-important – leewz Apr 23 '18 at 01:09
2

If you are ever concerned about running into a -0.0 condition, just add + 0. to the expression. It does not influence nonzero results, but it forces any zero result to a positive float.

>>> import math
>>> math.atan2(-0.0, 0.0)
-0.0
>>> math.atan2(-0.0, 0.0) + 0.
0.0
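The trick relies on a detail of IEEE 754 addition: under the default round-to-nearest mode, (-0.0) + (+0.0) is defined to be +0.0, while adding 0.0 to any nonzero value is exact and changes nothing. A quick check:

```python
import math

x = -0.0
print(math.copysign(1.0, x))        # -1.0 -- sign bit is set
print(math.copysign(1.0, x + 0.0))  # 1.0 -- adding +0.0 clears it
print(-1.5 + 0.0)                   # -1.5 -- nonzero values unaffected
```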
bfree67
1

Same values, yet different numbers

>>> Decimal('0').compare(Decimal('-0'))        # Compare value
Decimal('0')                                   # Represents equality

>>> Decimal('0').compare_total(Decimal('-0'))  # Compare using abstract representation
Decimal('1')                                   # Represents a > b

References:
http://docs.python.org/2/library/decimal.html#decimal.Decimal.compare
http://docs.python.org/2/library/decimal.html#decimal.Decimal.compare_total
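Decimal also exposes the sign directly via is_signed(), which reports True for a negative zero even though == treats the two values as equal:

```python
from decimal import Decimal

neg = Decimal('-0')
print(neg == Decimal('0'))       # True -- numerically equal
print(neg.is_signed())           # True -- but the sign is preserved
print(Decimal('0').is_signed())  # False
```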

user
0

To generalise or summarise the other answers: the difference in practice shows up in functions that are discontinuous at 0, where the discontinuity comes from a division by 0. Python defines division by 0 as an error, so as long as everything is calculated with Python operators, you can simply treat -0.0 as +0.0 and there is nothing to worry about. If, on the contrary, a function is implemented as a built-in or in a library written in another language, such as C, division by 0 may be defined differently in that language and may give different answers for -0.0 and 0.0.
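A sketch contrasting the two cases — the Python operator raises, while the C-backed math functions follow IEEE 754 and honor the sign of zero:

```python
import math

# Python-level float division by zero is an error:
try:
    1.0 / 0.0
except ZeroDivisionError as exc:
    print("Python operator:", exc)

# C-backed math functions follow IEEE 754 and honor the zero's sign:
print(math.atan2(0.0, -0.0))   # pi (3.141592653589793)
print(math.atan2(-0.0, -0.0))  # -pi
```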