Normally I don't just do someone's homework for them, but I wanted to challenge the idea that convergence should always be detected by comparing the error against a fixed epsilon value.
Keep in mind that the `double` data type can only represent values drawn from a finite set, so comparing two results for equality can be used to detect convergence; doing the same with arbitrary precision could lead to infinite iteration.
Also, the representable floating-point values are distributed so that the gap between one value and the next greater one is smaller for small magnitudes than for large ones, so if you do use an epsilon, it usually isn't appropriate to keep it fixed at some arbitrarily "small" constant.
The code below performs the same dumb binary search, but the exit condition detects when the midpoint lands back on one of the bounds. At that point further iteration would just revisit the same search states forever, and the error is close to the minimum possible.
In this way, epsilon is determined automatically by the `double` data type itself, together with the input. If you replaced `double` with some kind of arbitrary-precision class, then you would indeed need to supply an epsilon; otherwise irrational roots might loop until some sort of failure, like an out-of-memory condition.
#include <iostream>
#include <iomanip>

double mysqrt(double x) {
    double low, high;
    if (x < 1) {
        if (x <= 0) return 0;
        low = x;   // for 0 < x < 1, sqrt(x) lies in [x, 1]
        high = 1;
    } else {
        low = 1;   // for x >= 1, sqrt(x) lies in [1, x]
        high = x;
    }
    for (;;) {
        const double mid = (low + high) / 2;
        // When mid rounds back onto a bound, low and high are adjacent
        // doubles: no representable value lies between them, so stop.
        if (high == mid || low == mid) return mid;
        if (mid * mid > x) {
            high = mid;
        } else {
            low = mid;
        }
    }
}

int main() {
    std::cout << std::setprecision(14) << mysqrt(3) << '\n';
}