I wrote a short C program that performs linear interpolation, iterating to find a root of a function to a given number of decimal places. So far, for a function f(x):
long double f(long double x) {
    return (pow(x, 3) + 2 * x - 8);
}
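For reference, f(1) = -5 and f(2) = 4, so there is a sign change (and hence a root) between 1 and 2; that is the interval the program starts from.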
The program fails to converge to a value at 1 dp. It updates the variables a and b, between which the root of f(x) lies, until a and b both round to the same number at the given precision. Using long doubles and the function above, a debugger shows, for the first two iterations:
a = 1.5555555555555556
a = 1.6444444444444444
though it should have been:
a = 1.5555555555555556
a = 1.653104925053533
The program fails to update the values after this. The equation I'm using for the linear interpolation is a rearranged version of the mathematical one given here, and the code is a C version of a Python program I wrote. Why does the C implementation produce different values despite using the same algorithm, and how can I fix it?
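For reference, the rearranged form I'm using (valid when f(a) and f(b) have opposite signs) is:

c = (b * |f(a)| + a * |f(b)|) / (|f(a)| + |f(b)|)

Working the first two steps by hand: f(1) = -5 and f(2) = 4 give c = (2*5 + 1*4)/(5 + 4) = 14/9 ≈ 1.5555555555555556, and then f(14/9) = -820/729 ≈ -1.1248 gives c = 772/467 ≈ 1.6531049, which is where the "should have been" values above come from.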
OK, I'm still getting the hang of this, but hopefully the following is a Minimal, Complete, and Verifiable example:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
long double a; long double b; long double c; // The values for interpolation
long double fofa; long double fofb; long double fofc; // The values f(a), f(b) and f(c)
const int dp = 1; // The number of decimal places to be accurate to
long double f(long double x) {
    return (pow(x, 3) + 2 * x - 8);
}
int main(void) {
    int i = 0; // Iteration counter
    a = 1; b = 2;
    while (roundf(a * pow(10, dp)) / pow(10, dp) != roundf(b * pow(10, dp)) / pow(10, dp)) { // While a and b don't round to the same number...
        fofa = f(a); fofb = f(b); // Resolve the functions
        printf("So f(a) = %g, f(b) = %g\n", (double)fofa, (double)fofb); // Print the results
        c = (b * abs(fofa) + a * abs(fofb)) / (abs(fofb) + abs(fofa)); // Linear interpolation
        fofc = f(c);
        if (fofc < 0) { // Root lies between c and b
            a = c;
        }
        else if (fofc == 0) { // c is exactly the root
            a = c;
            break;
        }
        else { // Root lies between a and c
            b = c;
        }
        i++;
    }
    printf("The root is %g, to %d decimal places, after %d iterations.\n", (double)a, dp, i);
}
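To double-check that it isn't something in the loop logic, I also isolated a single interpolation step with the second-iteration values hard-coded (a throwaway sketch; the literals are approximately the values of a, f(a) and f(b) at that point, worked out from the hand calculation above):

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    long double a = 1.5555555555555556L, b = 2.0L;
    long double fofa = -1.1248285322359396L, fofb = 4.0L; // f(a) and f(b) at this step
    long double c = (b * abs(fofa) + a * abs(fofb)) / (abs(fofb) + abs(fofa)); // Same line as in the full program
    printf("c = %.16Lg\n", c); // Prints 1.644444..., not the 1.653104... I expected
    return 0;
}

This reproduces the wrong second value on its own, so the problem seems to be in that one line rather than in the surrounding loop.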