I've written the following code*:
function val = newton_backward_difference_interpolation(x,y,z)
% sample input : newton_backward_difference_interpolation([0.70 0.72 0.74 0.76 0.78 0.80 0.82 0.84 0.86 0.88],[0.8423 0.8771 0.9131 0.9505 0.9893 1.0296 1.0717 1.1156 1.1616 1.2097],0.71)
% sample output:
% X-Input must be equidistant
% -1.1102e-16
n=numel(x);% number of elements in x (which is also equal to number of elements in y)
h=x(2)-x(1);% the constant difference
%error=1e-5;% the computer error in calculating the differences [it was a workaround where I took abs(h-h1)>=error]
if n>=3% if there are at least 3 elements, check that x is equidistant
    for i=3:n
        h1=x(i)-x(i-1);
        if (h1~=h)% exact floating-point comparison of the spacings
            disp('X-Input must be equidistant');
            disp(h1-h);
            return;
        end
    end
end
...
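As a side note, the workaround mentioned in the comment above boiled down to comparing the spacings up to a tolerance instead of exactly. A rough sketch of that check as a stand-alone helper (the name is_equidistant and the tolerance are just illustrative, not my exact code):

function ok = is_equidistant(x, tol)
% True if every consecutive spacing of x agrees with the first spacing
% up to the tolerance tol (this replaces the exact h1~=h test above).
h = x(2) - x(1);
ok = all(abs(diff(x) - h) < tol);
end

With is_equidistant(x, 1e-5), the sample input above is accepted instead of being rejected over a -1.1102e-16 mismatch.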
I also wrote a function for calculating the inverse of a matrix**, where this:
a=[3 2 -5;1 -3 2;5 -1 4];disp(a^(-1)-inverse(a));
displays this:
1.0e-16 *
0 -0.2082 0.2776
0 0.5551 0
0 0.2776 0
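For scale, the largest entry of that difference is well below the spacing of double-precision numbers near 1 (a quick check; inverse here is my Gauss-Jordan function from **):

d = a^(-1) - inverse(a);  % inverse() is my Gauss-Jordan function (see **)
disp(max(abs(d(:))));     % largest absolute entry of the difference, about 5.6e-17 here
disp(eps(1));             % spacing of double-precision numbers near 1, 2.2204e-16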
- Why is there a small difference of 1e-16 occurring? Am I right in saying that this is because computer calculations (especially divisions) are not exact? (A small demo of what I mean follows these questions.) Even if they are inexact, why does
a^(-1)
provide a more accurate/exact result? (Sorry if this question is not programming related in your view.)
- Can this be avoided/removed? How?
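To show what I mean by inexact calculations, even this tiny, self-contained snippet leaves a residue of the same order (purely an illustration, unrelated to my functions):

disp(0.1 + 0.2 - 0.3);   % prints 5.5511e-17 rather than 0 in double precision
disp(0.1 + 0.2 == 0.3);  % prints 0 (false): the exact comparison fails, just like h1~=h above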
*For Newton backward difference interpolation: (whole code at this place); the expected output is 0.85953 or something close to that.
**Using Gauss-Jordan elimination: (whole code at this place)