I'm trying to understand why computing the determinant of a 3x3 matrix this way gives broken results:
#include <iostream>
#include <array>
#include <vector>

int main(int argc, char const *argv[])
{
    const int dim = 3;
    std::vector<std::vector<float>> data;
    data.resize(dim, std::vector<float>(dim, 0));
    std::array<float, dim*dim> myArray = {
        -1000.0, -1000.0, -1000.0,
        -1000.0, -1000.0, -1000.0,
        -1000.0, -1000.0, -999.0
    };
    for(int i = 0; i < dim; i++){
        for(int j = 0; j < dim; j++){
            data[i][j] = myArray[dim*i + j];
        }
    }
    float det =
        data[0][0] * data[1][1] * data[2][2] +
        data[0][1] * data[1][2] * data[2][0] +
        data[0][2] * data[2][1] * data[1][0] -
        data[2][0] * data[1][1] * data[0][2] -
        data[0][0] * data[1][2] * data[2][1] -
        data[1][0] * data[0][1] * data[2][2];
    float anotherDet = 0;
    for(int i = 0; i < 2; i++){
        anotherDet = anotherDet + 1000 * 1000 * 1000;
    }
    for(int i = 0; i < 2; i++){
        anotherDet = anotherDet - 1000 * 1000 * 1000;
    }
    anotherDet = anotherDet - 1000 * 1000 * 999;
    anotherDet = anotherDet + 1000 * 1000 * 999;
    std::cout << det << " " << anotherDet << std::endl;
    return 0;
}
The output of this code is -64 0, whereas the real determinant of such a matrix should be zero. My suspicion is that the error is related to floating-point precision, but I still don't get the logic behind why such a discrepancy happens.
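The smallest case I could reduce it to is adding just two of the six products as plain floats (this is my own reduced test, not part of the program above); assuming IEEE-754 single precision, the sum is already off by 64 at this point:

#include <iostream>

int main()
{
    float a = -999000000.0f;   // data[0][0] * data[1][1] * data[2][2]
    float b = -1000000000.0f;  // data[0][1] * data[1][2] * data[2][0]
    float sum = a + b;         // mathematically -1999000000
    std::cout << (long long)sum << std::endl; // -1999000064 with IEEE-754 floats
    return 0;
}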
I tried debugging, and indeed I found that there is a rounding error that gets carried along: -2999000060 is the value of det partway through that large sum.
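Probing the spacing of floats around that magnitude with std::nextafter (again just a side experiment of mine) suggests that neighbouring representable values are 256 apart there, so the exact intermediate sum simply cannot be stored:

#include <iostream>
#include <cmath>

int main()
{
    float x = -2999000000.0f;               // the exact mid-sum value does not fit in a float
    float next = std::nextafter(x, 0.0f);   // closest representable float towards zero
    std::cout << (long long)x << std::endl;             // what actually gets stored: -2999000064
    std::cout << (double)next - (double)x << std::endl; // gap between neighbours: 256
    return 0;
}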
But if I try the following:
float det2 =
    1000 * 1000 * 1000 +
    1000 * 1000 * 1000 +
    1000 * 1000 * 999  -
    1000 * 1000 * 1000 -
    1000 * 1000 * 1000 -
    1000 * 1000 * 999;
It gives the correct value of 0, but I suppose that is only because the whole expression is evaluated in integer arithmetic (all the literals are ints) and converted to float once at the end, so there are no floating-point rounding errors.
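If I instead force the whole expression into float arithmetic by writing the literals as 1000.0f and 999.0f (my own variation), the rounding comes back, at least on a platform that really evaluates float expressions in single precision:

#include <iostream>

int main()
{
    // same expression as det2, but with float literals, so every intermediate sum is a float
    float det3 =
        1000.0f * 1000.0f * 1000.0f +
        1000.0f * 1000.0f * 1000.0f +
        1000.0f * 1000.0f * 999.0f  -
        1000.0f * 1000.0f * 1000.0f -
        1000.0f * 1000.0f * 1000.0f -
        1000.0f * 1000.0f * 999.0f;
    std::cout << det3 << std::endl; // prints 64 instead of 0
    return 0;
}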
Edit: I understand now how operations with floats that big could carry some rounding error.
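For completeness, accumulating the same six products once in float and once in double (my own follow-up test) shows the difference directly; with double every intermediate sum stays far below 2^53 and is therefore exact:

#include <iostream>

int main()
{
    // the six products of det, with the subtracted terms negated, in the original left-to-right order
    const double terms[6] = { -999000000.0, -1000000000.0, -1000000000.0,
                               1000000000.0,  1000000000.0,  999000000.0 };
    float  f = 0.0f;
    double d = 0.0;
    for (double t : terms) {
        f += (float)t;  // float: intermediate sums get rounded, ends at -64
        d += t;         // double: every intermediate is exact, ends at 0
    }
    std::cout << f << " " << d << std::endl; // prints -64 0
    return 0;
}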