
I've written the following code*:

function val = newton_backward_difference_interpolation(x,y,z)
% sample input : newton_backward_difference_interpolation([0.70 0.72 0.74 0.76 0.78 0.80 0.82 0.84 0.86 0.88],[0.8423 0.8771 0.9131 0.9505 0.9893 1.0296 1.0717 1.1156 1.1616 1.2097],0.71)
% sample output:
% X-Input must be equidistant
%   -1.1102e-16
n=numel(x);% number of elements in x (which is also equal to number of elements in y)
h=x(2)-x(1);% the constant difference
%error=1e-5;% the computer error in calculating the differences [my workaround was to use abs(h-h1)>=error instead; see the tolerance-based sketch after the code]
if n>=3% if there are at least 3 elements, check that they are equidistant
    for i=3:n
        h1=x(i)-x(i-1);
        if (h1~=h)
            disp('X-Input must be equidistant');
            disp(h1-h);
            return;
        end
    end
end
...
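
For reference, the tolerance-based workaround mentioned in the comment above looks roughly like this (the 1e-10 tolerance is an arbitrary value I picked, not something derived from the data):

tol=1e-10;% spacings are treated as equal if they differ by less than this tolerance
if n>=3
    for i=3:n
        h1=x(i)-x(i-1);
        if abs(h1-h)>tol% tolerance-based comparison instead of exact equality
            disp('X-Input must be equidistant');
            disp(h1-h);
            return;
        end
    end
end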

I also wrote a function for calculating the inverse of a matrix**, where this:

a=[3 2 -5;1 -3 2;5 -1 4];disp(a^(-1)-inverse(a));

displays this:

  1.0e-16 *

         0   -0.2082    0.2776
         0    0.5551         0
         0    0.2776         0
  • Why is there a small difference of 1e-16 occurring? Am I right in saying that this is because computer calculations (especially divisions) are not exact? Even if they are inexact, why does a^(-1) provide a more accurate/exact solution? (Sorry if this question is not programming related in your view.)
  • Can this be avoided/removed? How?

*for Newton backward difference interpolation: (whole code at this place); the expected output is 0.85953 or something near that.
**using Gauss-Jordan Elimination: (whole code at this place)

RE60K
  • "`Am I right in saying that this is because computer calculations (especially divisions are not exact)?`" --> yes. The two results use different algorithms, so floating-point errors accumulate in a different fashion. – Andras Deak -- Слава Україні Feb 02 '16 at 19:05
  • @AndrasDeak do you think my question is different, or should I close it? – RE60K Feb 02 '16 at 19:06
  • I don't think it's any different. Anyway, you have an answerer now, which changes everything. I guess you should let it stay, it will probably be closed as a duplicate, but that will allow Tobias to gain some rep. – Andras Deak -- Слава Україні Feb 02 '16 at 19:09
  • You can't *close* it, only *delete* it, and yes, that's why I said that you shouldn't delete it now that you have an answer. Anyway, the reason why deleting answered questions is penalized is that it's plain rudeness to ignore the efforts made by others on your question :) Being *closed* by other users as a duplicate doesn't have any negative implications on anyone. – Andras Deak -- Слава Україні Feb 02 '16 at 19:14

1 Answer


That's in the nature of floating-point calculations. There have been many questions about this topic here already.
E.g.
Floating point inaccuracy examples
What range of numbers can be represented in a 16-, 32- and 64-bit IEEE-754 systems?
https://www.quora.com/Why-is-0-1+0-2-not-equal-to-0-3-in-most-programming-languages
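
A minimal illustration in MATLAB (the classic example from the last link; the exact digits shown depend on the display format):

disp(0.1 + 0.2 == 0.3)   % displays 0: the sum is not exactly equal to 0.3
disp(0.1 + 0.2 - 0.3)    % displays roughly 5.5511e-17, the leftover rounding error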

The problem is that non-integer numbers in most cases have no exact representation as a 64-bit double or a 32-bit float, and would not have one even in a 10000-bit floating-point value if such a thing existed.
0.5, 0.25 and all powers of 2 (also those with negative exponents; 0.5 is 2^-1) can be stored exactly, and many other numbers as well. Floating-point numbers are stored as 3 separate parts: sign (1 bit), mantissa (the 'number') and an exponent for base 2. Every possible combination of mantissa and exponent results in a precise value; e.g. 0.5 has a mantissa 'value' of 1 and an exponent of -1. The exact formula is
number = (sign ? -1 : 1) * 2^exponent * 1.(mantissa bits)
from
http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point_representation.html
Try this formula and you'll see, for example, that you can't create a precise 0.1.
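
In MATLAB you can inspect what is actually stored for 0.1, for example:

fprintf('%.20f\n', 0.1)   % prints 0.10000000000000000555, the double closest to 0.1
disp(num2hex(0.1))        % 3fb999999999999a, the underlying IEEE-754 bit pattern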

Tobias Knauss