why is there a negative 0.000?
and why is it not equal to 0.000?
The concept of positive and negative zero is a feature of IEEE 754 floating point formats. There is no requirement that a C implementation use IEEE floating point formats, although in practice it is reasonably common.
Even with C implementations that use IEEE floating point formats, positive and negative zero always compare equal under numerical comparisons (such as the == operator). The only way to detect or work with positive and negative zeros on such implementations is to use functions like copysign() and signbit() (declared in <math.h>).
why is there a negative 0.000?
This may help a bit
What operations and functions on +0.0 and -0.0 give different arithmetic results?
why is it (negative 0.000) not equal to 0.000?
This is a common misconception. Review the following:
#include <stdio.h>

int main(void) {
    /* Tiny, distinctly non-zero values */
    double a = 1e-10;
    double b = -2e-20;
    printf("a:%f b:%f\n", a, b);    /* %f rounds both toward 0.000000 */
    printf("a==b? %d\n", a == b);
    printf("a:%e b:%e\n", a, b);    /* %e reveals the true magnitudes */

    double z = 0.0;
    double nz = -0.0;
    printf("z:%f nz:%f\n", z, nz);
    printf("z==nz? %d\n", z == nz);
    return 0;
}
Output
a:0.000000 b:-0.000000 // a and b appear to be 0.0 and -0.0
a==b? 0 // not the same value
a:1.000000e-10 b:-2.000000e-20 // the actual values
z:0.000000 nz:-0.000000 // z and nz appear to be 0.0 and -0.0
z==nz? 1 // same value