
Hi, I am using the g++ compiler and am experiencing what I think is underflow of doubles. Is this possible, and if so, how is the behaviour defined?

I have uploaded the csv format of the covariance matrix (51x51) here: http://pastebin.com/r0fx1qsx

This is the code (in C++, requires Boost) I am using to calculate the determinant (I have since switched to long doubles, which has not had an effect):

int determinant_sign(const boost::numeric::ublas::permutation_matrix<std::size_t>& pm)
{
    int pm_sign=1;
    std::size_t size = pm.size();
    for (std::size_t i = 0; i < size; ++i)
        if (i != pm(i))
            pm_sign *= -1; // swap_rows would swap a pair of rows here, so we change sign
    return pm_sign;
}

long double determinant( boost::numeric::ublas::matrix<long double>& m ) {
    boost::numeric::ublas::permutation_matrix<std::size_t> pm(m.size1());
    long double det = 1.0;

    if( boost::numeric::ublas::lu_factorize(m,pm) ) {
        det = 0.0;
    } else {
        for (std::size_t i = 0; i < m.size1(); i++)
            det *= m(i,i); // multiply by elements on diagonal
        det = det * determinant_sign( pm );
    }
    return det;
}

The result I am given for the data is -3.59916e-183.

When I run the following MATLAB code:

M = csvread('path/to/csv');
det(M)
the result I get is:

4.2014e-173

As you can see, one result is (very slightly) positive whereas the other is (very slightly) negative.

Aly
    Can you show us the code? – David G Feb 19 '13 at 15:46
  • well, I am calculating the determinant of a large matrix which has very small numbers. When done in MATLAB I am getting a large result (~e128) but in C++ using doubles I am getting negative results – Aly Feb 19 '13 at 15:47
    Please show code, adding to a double will never make it smaller, so you must be doing something else. – PlasmaHH Feb 19 '13 at 15:47
  • Also, 1E+128 won't cause overflow; doubles go up to 1E+308. – Mr Lister Feb 19 '13 at 15:49
  • @MrLister great, I will upload the covariance matrix in csv in a moment and the code used to calculate the determinant – Aly Feb 19 '13 at 15:50
  • Updated question to include link to covariance matrix, plus code I am using – Aly Feb 19 '13 at 15:58
    That number is so small, it could be simply just zero, with some calculation error! – Shahbaz Feb 19 '13 at 16:09
    Did you try computing it with a number type provided by mpfr and a large precision? Computing it with Eigen, trying several of the available algorithms? Sounds like a simple numerical stability issue to me. – Marc Glisse Feb 19 '13 at 16:50
  • You could also use an interval number type (there is one in boost) to see how big the error is. – Marc Glisse Feb 19 '13 at 16:52
  • @MarcGlisse what is mpfr? – Aly Feb 19 '13 at 16:56
  • @Aly google? (arbitrary precision floating point type) – Marc Glisse Feb 19 '13 at 17:05
  • @MarcGlisse my apologies, the end of the day has brought me to the height of laziness :( – Aly Feb 19 '13 at 17:27

1 Answer


Assuming the floating point unit is operating correctly, it will not overflow to a negative value - the result will be "+INF" (positive infinity) if the value is out of the valid range. Wrapping to a negative value on overflow can only happen with signed integers.

It is of course entirely possible to have various errors in a calculation that gives a negative answer when a positive one is expected.

Mats Petersson
  • Shouldn't this technically be +HUGE_VAL, which may be +INFINITY or +DBL_MAX, depending upon the floating-point standard supported by the target build-environment? – Charles L Wilcox Feb 19 '13 at 16:19
  • I was using the IEEE value of +INF, which is the most common format for floating point. But yes, I'm sure there is a constant in some C/C++ header file. I'm not entirely sure that makes a big difference to the meaning of the answer, though. – Mats Petersson Feb 19 '13 at 16:20
  • That's not to say that you can't ever get unexpected negative values from floating-point computations (http://www.exploringbinary.com/the-answer-is-one-unless-you-use-floating-point/ ) – Rick Regan Feb 20 '13 at 16:23
  • Yes, that's what I was trying to say with "it is of course entirely possible to have various errors in a calculation ..." [In the example given in the link it's "intentional", but it's largely caused by the fact that 0.1 cannot be expressed exactly as a binary number, so the result of the multiplication by 0.1 is ever so slightly off, and when you then start using very large numbers, the rounding effect can cause the result to be negative.] – Mats Petersson Feb 20 '13 at 16:37
  • @Mats Petersson I don't know how I misread that line the first time, but yes, I basically just repeated what you said :) – Rick Regan Feb 21 '13 at 17:04