I happened to come across a question here that mentioned trying to assign -0.0 to a float, or something along those lines. However, from what I have read so far, negative zero is the same as positive zero, so why not just have zero? Why do the two exist?

Andy
- http://en.m.wikipedia.org/wiki/Signed_zero – Oliver Charlesworth Sep 08 '13 at 10:18
- Google "negative zero", and you'll find lots of info, including plenty on this site. – NPE Sep 08 '13 at 10:18
- @OliCharlesworth Thank you for the link. It was very nice and clear to read. Don't know how I didn't come across that. – Andy Sep 08 '13 at 10:57
1 Answer
Each possible floating point value actually represents a small range of possible real-world numbers (because there are only a finite number of possible floating point values but an infinite number of real values). So 0.0 represents a value anywhere between 0.0 and a very small positive number, whereas -0.0 represents a value anywhere between 0.0 and a very small negative number.
Note however that when we compare 0.0 and -0.0 they are considered to be equal, even though their actual bit representations are different.
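To make this concrete, here is a minimal C sketch (assuming IEEE 754 single-precision floats, which is what essentially all current platforms use) that prints the two bit patterns and shows that the comparison still treats them as equal:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float pos = 0.0f;
        float neg = -0.0f;
        uint32_t pos_bits, neg_bits;

        /* Copy the raw bit patterns out of the floats. */
        memcpy(&pos_bits, &pos, sizeof pos_bits);
        memcpy(&neg_bits, &neg, sizeof neg_bits);

        printf(" 0.0f bits: 0x%08X\n", (unsigned)pos_bits);            /* 0x00000000 */
        printf("-0.0f bits: 0x%08X\n", (unsigned)neg_bits);            /* 0x80000000 */
        printf("0.0f == -0.0f: %s\n", pos == neg ? "true" : "false");  /* true */
        return 0;
    }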

Paul R
- Thank you for your answer. I understand now, but what is the use of this? Also, just out of interest, how are the two represented in bits? – Andy Sep 08 '13 at 10:59
- The "use" is mathematical - for some algorithms you need to preserve the sign of a value even when it becomes vanishingly small. 0.0 is 0x00000000, -0.0 is 0x80000000 - the only difference is the sign bit (bit 31). – Paul R Sep 08 '13 at 12:39