Your loop condition tests the conjunction of two conditions.
for (x=-10; x<10 && m*x+n==0; x+=0.01)
For many inputs, the second condition is false at x = -10, so the loop terminates before the first iteration, causing a return value of -10.
What you want is probably closer to the following. We need to test whether the absolute value is smaller than some EPSILON for two reasons. One, double is not precise. Two, you are computing an approximate solution anyway, so you would not expect an exact answer unless you happened to get lucky.
#include <math.h>

#define EPSILON 1E-2

double calc (double m, double n)
{
    double x;
    for (x = -10; x < 10; x += 0.001)
    {
        /* abs() truncates to int; use fabs() for doubles. */
        if (fabs(m*x + n) < EPSILON) return x;
    }
    // return a value outside the range to indicate that we failed to find a
    // solution within range.
    return -20;
}
Update: At the request of the OP, I will be more specific about what problem EPSILON solves.

double is not precise. In a computer, floating point numbers are usually represented by a fixed number of bits, with the bit representation specified by a standard such as IEEE 754. Because the number of bits is fixed and finite, you cannot represent arbitrary-precision numbers. Let us consider an example in base 10 for ease of understanding, although you should understand that computers experience a similar problem in base 2.
If m = 1/3, x = 3, and n = -1, we would expect that m*x + n == 0. However, because 1/3 is the repeating decimal 0.33333... and we can only store a fixed number of its digits, the result of 3*0.333333 is actually 0.999999, which is not equal to 1. Therefore, m*x + n != 0, and our check will fail. Thus, instead of checking for equality with zero, we must check whether the result is sufficiently close to zero, by comparing its absolute value with a small number we call EPSILON. As one of the comments pointed out, the correct value of EPSILON for this particular purpose is std::numeric_limits<double>::epsilon(), but the second issue requires a larger EPSILON.
You are only computing an approximate solution anyway. Since you are checking the values of x at discrete increments, there is a strong possibility that you will simply step over the root without ever landing on it exactly. Consider the equation 10000x + 1 = 0. The correct solution is -0.0001, but if you are taking steps of 0.001, you will never actually try the value x = -0.0001, so you could not possibly find the correct solution. For linear functions, we would expect that values of x close to -0.0001, such as x = 0, will get us reasonably close to the correct solution, so we use EPSILON as a fudge factor to work around the lack of precision in our method.