In a numerical solver I am working on in C, I need to invert a 2x2 matrix; the result then multiplies another matrix from the right:
C = B . inv(A)
I have been using the following definition of an inverted 2x2 matrix:
a = A[0][0];
b = A[0][1];
c = A[1][0];
d = A[1][1];
invA[0][0] = d/(a*d-b*c);
invA[0][1] = -b/(a*d-b*c);
invA[1][0] = -c/(a*d-b*c);
invA[1][1] = a/(a*d-b*c);
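For context, the full step currently looks roughly like the sketch below. This is just a minimal, self-contained version with placeholder names (invert2x2, matmul2x2), not the actual solver code, and it assumes row-major double[2][2] matrices; it is the same formula as above with the determinant factored out and the multiplication by B written out:

/* Same inverse formula as above, with the determinant computed once. */
static void invert2x2(const double A[2][2], double invA[2][2])
{
    double a = A[0][0], b = A[0][1];
    double c = A[1][0], d = A[1][1];
    double det = a*d - b*c;          /* note: no guard against det close to 0 */

    invA[0][0] =  d/det;
    invA[0][1] = -b/det;
    invA[1][0] = -c/det;
    invA[1][1] =  a/det;
}

/* Plain 2x2 matrix product Z = X.Y. */
static void matmul2x2(const double X[2][2], const double Y[2][2], double Z[2][2])
{
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            Z[i][j] = X[i][0]*Y[0][j] + X[i][1]*Y[1][j];
}

/* C = B . inv(A):  invert2x2(A, invA);  matmul2x2(B, invA, C); */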
In the first few iterations of my solver this seems to give the correct answers; however, after a few steps the values start to grow and eventually explode.
Now, comparing against an implementation using SciPy, I found that the same math does not explode. The only difference I can find is that the SciPy code uses scipy.linalg.inv(), which uses LAPACK internally to perform the inversion.
When I replace the call to inv() with the above calculations, the Python version does explode too, so I'm pretty sure this is the problem. Small differences in the calculations are creeping in, which leads me to believe it is a numerical problem, not entirely surprising for an inversion operation.
I am using double-precision floats (64-bit), hoping that numerical issues would not be a problem, but apparently that is not the case.
But: I would like to solve this in my C code without needing to call out to a library like LAPACK, because the whole reason for porting it to pure C is to get it running on a target system. Moreover, I'd like to understand the problem, not just call out to a black box. Eventually I'd like it to run with single precision too, if possible.
So my question is: for such a small matrix, is there a numerically more stable way to calculate the inverse of A?
Thanks.
Edit: Currently trying to figure out if I can just avoid the inversion by solving for C.
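If it helps, here is the kind of thing I am experimenting with. Since C = B . inv(A) is equivalent to C . A = B, i.e. A^T x = b for each row of B, the inverse never has to be formed explicitly. The sketch below is only a rough attempt (solve2x2 and right_divide2x2 are made-up names), using Gaussian elimination with partial pivoting on the 2x2 system, and it assumes A is non-singular and row-major double[2][2] storage:

#include <math.h>

/* Solve the 2x2 system M x = r using Gaussian elimination with partial
   pivoting. Returns 0 on success, -1 if M is (numerically) singular. */
static int solve2x2(const double M[2][2], const double r[2], double x[2])
{
    double m00 = M[0][0], m01 = M[0][1], r0 = r[0];
    double m10 = M[1][0], m11 = M[1][1], r1 = r[1];

    /* Partial pivoting: put the larger first-column entry in the first row. */
    if (fabs(m10) > fabs(m00)) {
        double t;
        t = m00; m00 = m10; m10 = t;
        t = m01; m01 = m11; m11 = t;
        t = r0;  r0  = r1;  r1  = t;
    }
    if (m00 == 0.0)
        return -1;

    double l = m10 / m00;            /* eliminate the (1,0) entry */
    double u11 = m11 - l * m01;
    if (u11 == 0.0)
        return -1;
    x[1] = (r1 - l * r0) / u11;      /* back-substitution */
    x[0] = (r0 - m01 * x[1]) / m00;
    return 0;
}

/* C = B . inv(A) without forming inv(A):
   C.A = B  <=>  A^T . C^T = B^T, so each row of C solves A^T x = (row of B). */
static int right_divide2x2(const double B[2][2], const double A[2][2], double C[2][2])
{
    const double At[2][2] = { { A[0][0], A[1][0] },
                              { A[0][1], A[1][1] } };
    for (int i = 0; i < 2; i++)
        if (solve2x2(At, B[i], C[i]) != 0)
            return -1;
    return 0;
}

Not sure yet whether this actually behaves any better numerically than the explicit inverse, but it at least avoids forming and storing inv(A).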