
Coming from a simulation problem, I want to calculate determinants of complex square matrices on the order of 1000x1000 in MATLAB. Since the values are those of Bessel functions, the matrices are not at all sparse.

Since I am interested in the change of the determinant with respect to some parameter (the energy of a sought eigenfunction, in my case), I currently work around the problem by first finding a rescaling factor for the studied range and then calculating the determinants:

result(k) = det(pre_factor*Matrix{k});

Now this is a very awkward solution and only works for matrix dimensions up to, say, 500x500.

Does anybody know a nicer solution to the problem? Interfacing with Mathematica might work in principle, but I have my doubts about its feasibility. Thank you in advance.

Robert

Edit: I did not find a convenient solution to the calculation problem itself, since that would require switching to higher precision. Instead, I used the identity

ln det M = trace ln M

which, differentiated with respect to k, gives

A = trace(inv(M(k))*dM/dk)

So I at least had the change of the logarithm of the determinant with respect to k. From the physical background of the problem I could derive constraints on A which in the end gave me a workaround valid for my problem. Unfortunately I do not know if such a workaround could be generalized.
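A minimal MATLAB sketch of the derivative formula above (M_plus, M_minus, and dk are assumed names, standing for the matrix evaluated at k+dk and k-dk and the step size):

```matlab
% d/dk ln det M(k) = trace( inv(M(k)) * dM/dk )
% dM/dk approximated here by a central finite difference
dMdk = (M_plus - M_minus) / (2*dk);
A = trace(M \ dMdk);   % backslash solves M*X = dMdk without forming inv(M) explicitly
```

Using the backslash operator instead of inv(M) is both faster and numerically safer, though near det(M) = 0 the solve is still ill-conditioned.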

Robert Filter
  • Is there anything special about the structure of your matrix that might help? You mention M = {A1, B1; A2, B2} where A1, A2 are symmetric. Anything else? Do A1 and A2 commute, or are any of the submatrices easily invertible? – Jonathan Dursi Dec 03 '10 at 18:24
  • @Jonathan Dursi: thank you for your thoughtful questions. But I am afraid that in general I see no reason why A1 and A2 should commute, nor why any of the submatrices should be easily invertible. In addition, since the interesting cases are near det(M) = 0, inversion does not work out very well. – Robert Filter Dec 03 '10 at 20:33
  • The matrix being singular doesn't necessarily say anything interesting about the submatrices; if B1 and B2 were zero, M wouldn't be invertible even though A1 and A2 could be very well behaved. – Jonathan Dursi Dec 03 '10 at 22:29
  • @Jonathan Dursi: I guess I see the point. Since all involved matrices are fully populated, there is no reason why inversion should be computationally easy. – Robert Filter Dec 03 '10 at 23:16

4 Answers


You should realize that when you multiply a matrix by a constant k, you scale the determinant of the matrix by k^n, where n is the dimension of the matrix. So for n = 1000 and k = 2, you scale the determinant by

>> 2^1000
ans =
     1.07150860718627e+301

This is of course a huge scaling factor, so you might expect the computation to fail: in double precision, MATLAB can only represent floating-point numbers as large as realmax.

>> realmax
ans =
     1.79769313486232e+308

There is no need to do all the work of recomputing that determinant, not that computing the determinant of a huge matrix like that is a terribly well-posed problem anyway.
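One overflow-free route, sketched here as an illustration rather than part of the original answer, is to work with the log-determinant obtained from an LU factorization, as one of the comments below also suggests:

```matlab
% Track log|det(M)| instead of det(M); this never over- or underflows.
[L, U, P] = lu(M);              % P*M = L*U, with L unit lower triangular
d = diag(U);                    % det(M) = det(P) * prod(d)
logabsdet = sum(log(abs(d)));   % log|det(M)|, finite even when det(M) is ~1e-300
% for a complex matrix, the phase can be accumulated separately if needed:
phase = angle(det(P)) + sum(angle(d));   % det(P) is +1 or -1 for a permutation matrix
```

Comparing logabsdet across parameter values gives the same minima as comparing |det(M)|, without any rescaling.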

  • By definition, determinant is the only alternating *multilinear* map from Mn(K) to K such that F(Id)=1. – Wok Dec 03 '10 at 10:53
  • Thank you very much for your answer. The determinants of the studied matrices are indeed very low, reaching 1e-300 at N ~ 200, which made the rescaling necessary. Do you have any suggestion to overcome the limitations of MATLAB's double precision, like using Mathematica? – Robert Filter Dec 03 '10 at 11:00
  • Why do any rescaling? In fact, why use the determinant itself at all? Work with the log of the determinant. Then you never need to bother with an arbitrary rescaling just to keep the determinant inside the dynamic range of floating point arithmetic. The log of the determinant is easily obtained from the logs of the diagonal elements of U, as returned from the LU factorization of your matrix. Beware any negative numbers on that diagonal, but they are easily handled. –  Dec 04 '10 at 00:39
  • Thank you for the suggestion. I just tried this ansatz, but it turned out that tr(logm M) and det(M) yield the same result, as they should: NaN for huge matrices. I will further try to reach long-double range with a MATLAB toolbox or a C/C++ workaround and report here as soon as I find a solution. – Robert Filter Dec 05 '10 at 21:58

If speed is not a concern, you may want to use det(e^A) = e^(tr A) with A = log M. First rescale your matrix by a constant so that the rescaled matrix minus I has spectral radius less than one; then the series for the matrix logarithm converges.

EDIT: In MATLAB, the log of a matrix (logm) is calculated via triangularization (a Schur decomposition). So it is better for you to compute the eigenvalues of your matrix and multiply them (or better, add their logarithms). You did not specify whether your matrix is symmetric: if it is, finding the eigenvalues is easier than if it is not.
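A sketch of the eigenvalue route in MATLAB (M is the matrix in question):

```matlab
lambda = eig(M);             % eigenvalues of a general (complex) square matrix
logdetM = sum(log(lambda));  % ln det M = sum of ln(lambda_i), using the complex log
% exp(logdetM) would over/underflow for n ~ 1000, so work with logdetM directly
```

The real part of logdetM is log|det M|, which can be compared across parameter values without any rescaling.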

Alexandre C.
  • Thank you very much for your ansatz. I already tried this one: since I am searching for local minima, I used ln(det M) = tr(ln M) and the corresponding expressions for its derivative, but as you point out this is computationally extremely slow. – Robert Filter Dec 03 '10 at 11:51
  • I was using the identity you gave to work around the problem, so I decided to accept your answer :) – Robert Filter Dec 10 '10 at 22:11

You said the current value of the determinant is about 10^-300.

Are you trying to get the determinant at a certain value, say 1? If so, rescaling is awkward: the matrix you are considering is ill-conditioned, and, given the precision of the machine, you should consider the computed determinant to be zero. In other words, it is impossible to get a reliable inverse.

I would suggest modifying the columns or rows of the matrix rather than rescaling it.


I used R to run a small test with a random matrix (standard normal entries); it seems the determinant should be clearly non-zero.

> n=100
> M=matrix(rnorm(n**2),n,n)
> det(M)
[1] -1.977380e+77
> kappa(M)
[1] 2318.188
Wok
  • @wok: Thank you for your answer. Indeed, the matrix might be ill-conditioned, since for the problem studied I am searching for parameters where this matrix has a vanishing determinant, hence at least one eigenvalue should vanish. Do you think that looking at the determinants is therefore not justified at all? – Robert Filter Dec 03 '10 at 11:58
  • By "vanishing", do you mean being equal to zero? Has the matrix any specific property? – Wok Dec 03 '10 at 12:56
  • @wok: Yes, I mean det(M) = 0, which is in my case a resonance condition of the system. The matrix has the form M = {A1 B1; A2 B2}. The A's are symmetric, the B's are not. Sincerely. – Robert Filter Dec 03 '10 at 15:39
  • Not sure I could find a better way to do it. I wonder whether you can happen to have a clearly non-zero determinant with the matrices you consider. – Wok Dec 03 '10 at 16:42
  • @wok: For lower orders of the matrices, I get the correct values (the absolute values of the determinants corresponding to resonances are some orders of magnitude smaller than for non-resonant cases) so I am assuming that by increasing "resolution" I will not get a fundamentally different behaviour. – Robert Filter Dec 03 '10 at 20:04
  • I understand "For lower orders of the matrices" means small matrix dimension (n<1000), and "resolution" means the scaling factor. I believe you have numerical errors here, and you cannot draw conclusions from numbers around 10^-300. – Wok Dec 03 '10 at 20:29
  • You may do SVD and infer whether there is resonance based on the singular values. – Wok Dec 04 '10 at 08:57
  • @wok: Thank you for your further suggestion. I will have to think about it since I do not have any experience in it. – Robert Filter Dec 05 '10 at 21:59
  • [SVD](http://en.wikipedia.org/wiki/Singular_value_decomposition) is a well-known matrix factorization. With Matlab, try `s = svd(X)` to just get the list of [singular values](http://www.mathworks.com/help/techdoc/ref/svd.html). The smaller the singular values, the "more singular" the matrix. – Wok Dec 05 '10 at 22:11
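The SVD check suggested in the comments above could look like the following sketch, where M is the matrix in question and the tolerance is one common (assumed) choice:

```matlab
s = svd(M);                       % singular values, sorted in decreasing order
tol = max(size(M)) * eps(s(1));   % a typical numerical-rank tolerance
nearly_singular = s(end) < tol;   % smallest singular value ~ 0 => resonance candidate
```

Tracking s(end) (or the ratio s(end)/s(1)) as a function of the parameter gives a well-scaled resonance indicator, unlike the raw determinant.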

This is not strictly a MATLAB solution, but you might want to consider using Mahout. It's specifically designed for large-scale linear algebra. (1000x1000 is no problem for the scales it's used to.)

You would call into Java to pass data to/from Mahout.

Xodarap
  • Thank you for your suggestion. The Mahout project indeed looks promising, and it would certainly be worth a try. But to be honest, my knowledge of distributed computing architectures (Apache Hadoop, as I just encountered) is very limited, so I think I would need too much time to set up an environment. – Robert Filter Dec 03 '10 at 20:25
  • @Robert: Hadoop can run across many nodes, but it doesn't need to. It took me about 5 min to set it up to run on my machine. Matlab also has a [parallel computing toolbox](http://www.mathworks.com/products/parallel-computing/), if you wish to speed up matlab (though I don't think it is any more precise). I have never tried it though, so I cannot testify as to its effectiveness. – Xodarap Dec 03 '10 at 20:47
  • Thanks for your encouragement. I will check with our administrator if it would be possible to install it on some nodes. Nevertheless, do you think Mahout can do more than double precision which I guess would be required by the given problem? I was searching a little but could not find information on it (e.g. Doc-Page down atm). Sincerely – Robert Filter Dec 03 '10 at 21:44
  • @Robert: good point, it doesn't look like Mahout does. Maybe try [this](http://commons.apache.org/math//api-1.2/org/apache/commons/math/linear/BigMatrix.html)? [BigDecimal](http://download.oracle.com/javase/1.5.0/docs/api/java/math/BigDecimal.html) is arbitrary precision. – Xodarap Dec 03 '10 at 22:12
  • Thank you for your links. I also googled a little and found that there are a lot of libraries which can handle long double (which might be sufficient here) or even arbitrary precision. I will post my results if I can find a solution. – Robert Filter Dec 05 '10 at 22:01