I'm using Math.NET to calculate the determinant of basic matrices for a larger project. In my unit tests I noticed that converting the float determinant to an int gives a different result while debugging the test. It seems attaching the debugger produces different results than running without it.
using System;
using MathNet.Numerics.LinearAlgebra;

public class Test
{
    public static void Main(string[] args)
    {
        // Dense(rows, columns, data) reads the array in column-major order,
        // so this is [[10, 25], [19, 10]], whose exact determinant is -375.
        Matrix<float> a = Matrix<float>.Build.Dense(2, 2, new float[] { 10, 19, 25, 10 });

        Console.WriteLine("Det {0}", a.Determinant());
        Console.WriteLine("Det Int {0}", (int)a.Determinant());
        Console.WriteLine("Det Trunc {0}", Math.Truncate(a.Determinant()));
        Console.WriteLine("Det Floor {0}", Math.Floor(a.Determinant()));
    }
}
When running the test normally I get the following results:
Det -375
Det Int -374
Det Trunc -374
Det Floor -375
However, when debugging the code I get this:
Det -375
Det Int **-375**
Det Trunc **-375**
Det Floor **-376**
I've read through this question but it doesn't seem to address my problem. Running in release mode rather than debug mode has no effect on the results; it seems to be related only to having the debugger attached.
What is going on and how can I reduce the likelihood of this error?
Edit:
Using Matrix<double> rather than Matrix<float> produces the correct answers. I'm guessing there is some sort of precision error but I don't understand why.
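For what it's worth, here is a minimal sketch of the kind of precision effect I suspect. The value -374.99997f is purely an assumption on my part (I haven't inspected the raw float the determinant routine actually returns), but a float that close to -375 reproduces the non-debugger output exactly:

    using System;

    public class PrecisionGuess
    {
        public static void Main()
        {
            // Hypothetical value: a float just shy of -375 in magnitude.
            float det = -374.99997f;

            // On .NET Framework the default float formatting rounds to 7 significant
            // digits, so this displays as -375; newer runtimes print -374.99997.
            Console.WriteLine("Det {0}", det);
            Console.WriteLine("Det Int {0}", (int)det);             // -374: the cast truncates toward zero
            Console.WriteLine("Det Trunc {0}", Math.Truncate(det)); // -374
            Console.WriteLine("Det Floor {0}", Math.Floor(det));    // -375
        }
    }

If that guess is right it would also fit the Matrix<double> observation above: with double precision the computed determinant evidently lands on -375 (or at least not on the truncating side of it), so the cast, Truncate and Floor all agree.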