
As seen in this question, the results MKL produces differ between serial and distributed execution. For that reason, I would like to study that error. From my book I have:

|ε_(x_c)| = |x - x_c| <= 1/2 * 10^(-d) is the absolute error, where d specifies the number of decimal digits that are accurate between the actual number x and the number the computer holds, x_c.

|ρ_(x_c)| = |x - x_c|/|x| <= 5 * 10^(-s) is the absolute relative error, where s specifies the number of significant digits.

So, we can write code like this:

#include <cmath> // for std::abs

// relative error of the computed value x with respect to the reference value a
double calc_error(double a, double x)
{
  return std::abs(x - a) / std::abs(a);
}

in order to compute the absolute relative error, for example, as seen here.

Are there more types of errors to study, except from the absolute error and the absolute relative error?

Here are some of my data to play with:

serial gives:
-250207683.634793 -1353198687.861288 2816966067.598196 -144344843844.616425 323890119928.788757
distributed gives:
-250207683.634692 -1353198687.861386 2816966067.598891 -144344843844.617096 323890119928.788757

and then I can expand the idea(s) to the actual data and results.

gsamaras
  • What are you asking exactly? I'm not sure I understand what you mean by "more in that direction". (edit: whoever downvoted, I think it might be better to discuss beforehand?) – Jonathan H Aug 18 '15 at 17:06
  • @Sh3ljohn I updated my question, you were right, it was not clear! That's why you downvoted? – gsamaras Aug 18 '15 at 17:09
  • I actually upvoted because I thought you didn't deserve a -1 without explanation... – Jonathan H Aug 18 '15 at 17:09
  • Oh thank you @Sh3ljohn, hope I did explain my cause well. I just hoped that the downvoter would actually say why the downvote, but that was just a dream! – gsamaras Aug 18 '15 at 17:10
  • Have you looked at the numerical analysis of this algorithm? One can place bounds on the accuracy - see for example Golub and Van Loan. – sfjac Aug 18 '15 at 17:11
  • This might be on topic for: http://scicomp.stackexchange.com/ – NathanOliver Aug 18 '15 at 17:13
  • The manual does not specify which algorithm is used @sfjac, so I thought I could do that by taking a look at the actual data. – gsamaras Aug 18 '15 at 17:13
  • @NathanOliver I will take that into account next time, thanks! – gsamaras Aug 19 '15 at 07:04

2 Answers


It doesn't get much more complicated than absolute and absolute relative errors. There is another method that compares the integer representations of floating-point values; the idea is that your "tolerance" should adapt to the magnitude of the numbers you are comparing, specifically because representable numbers become sparser as the magnitude grows.

All in all, I think your question is very similar to floating-point comparison, for which there is this excellent guide, and this more exhaustive but much longer paper.

It might also be worth throwing in these for comparing floating point values:

#include <limits>
#include <cmath>
#include <algorithm> // std::min, std::max

// Equality within a tolerance that scales (by machine epsilon) with the
// magnitude of the smaller operand; numeric_limits<T>::min() serves as an
// absolute floor so values very close to zero still compare equal.
template <class T>
struct fp_equal_strict
{
    inline bool operator() ( const T& a, const T& b ) const
    {
        return std::abs(a - b) 
            <= std::max(
                std::numeric_limits<T>::epsilon() * std::min( std::abs(a), std::abs(b) ),
                std::numeric_limits<T>::min()
            );
    }
};

// Same idea, but the tolerance scales with the larger operand,
// so it accepts a slightly wider range of differences.
template <class T>
struct fp_equal_loose
{
    inline bool operator() ( const T& a, const T& b ) const
    {
        return std::abs(a - b) 
            <= std::max(
                std::numeric_limits<T>::epsilon() * std::max( std::abs(a), std::abs(b) ),
                std::numeric_limits<T>::min()
            );
    }
};

// a > b by more than the magnitude-scaled tolerance
template <class T>
struct fp_greater
{
    inline bool operator() ( const T& a, const T& b ) const
    {
        return (a - b) >= std::numeric_limits<T>::epsilon() * std::max( std::abs(a), std::abs(b) );
    }
};

// a < b by more than the magnitude-scaled tolerance
template <class T>
struct fp_lesser
{
    inline bool operator() ( const T& a, const T& b ) const
    {
        return (b - a) >= std::numeric_limits<T>::epsilon() * std::max( std::abs(a), std::abs(b) );
    }
};
Jonathan H
  • Your edit will be useful for comparing floating point numbers, not for computing the error, right? – gsamaras Aug 19 '15 at 08:54
    Yes :) I'm basically saying not to use the usual relational operators with floating point numbers if you're doing numerical processing (the most sensitive one is probably equality comparison). That being said, you can also take the loose and strict equality "epsilons" above as a reference for what difference is "acceptable". – Jonathan H Aug 20 '15 at 09:34

I would mention that it is also possible to perform an ULPs (Units in the Last Place) comparison, which shows how far apart two floating point numbers are in the binary representation. This is a nice indication of "closeness": if two numbers are, for example, one ULP apart, there is no floating point number between them, so they are as close as possible in the binary representation without actually being equal.
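A minimal sketch of such an ULP distance for doubles, using the common trick of reinterpreting the bit pattern as a signed integer (this assumes IEEE 754 binary64 and does not handle NaNs):

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Map a double's bits to a signed integer that is monotonically ordered
// like the doubles themselves (negative values are folded so that
// -0.0 and +0.0 both map to 0).
int64_t to_ordered(double x)
{
    int64_t bits;
    std::memcpy(&bits, &x, sizeof bits);  // avoids strict-aliasing issues
    return bits < 0 ? INT64_MIN - bits : bits;
}

// Number of representable doubles between a and b (0 means equal,
// 1 means adjacent). Assumes neither argument is NaN.
int64_t ulp_distance(double a, double b)
{
    return std::llabs(to_ordered(a) - to_ordered(b));
}
```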

This method is described here, in a more recent version of the article linked from the accepted answer, by the same author. Sample code is also provided.

As an aside, but related to the context of your work (comparing sequential vs parallel floating point computations), it is important to note that floating point operations are not associative, which means parallel implementations may not in general give the same result as sequential implementations. Even changing the compiler and optimisation options can lead to different results (e.g. GCC vs ICC, -O0 vs -O3).

An example algorithm that reduces the accumulated error when summing floating point numbers can be found here, and a comprehensive document by the author of that algorithm can be found here.

paul-g
  • Paul, thanks, nice answer, +1! Can you please post the relevant code (I saw that the article has it as a function here), so that your answer will not be link-based? About the summation, I remember proving that in the uni. – gsamaras Aug 19 '15 at 11:55