
I am trying to compute the eigenvalues, λ (lambda), of a damped structure with the following equations of motion:

(λ²M + λC + K) x = 0,

where M, C, and K are sparse matrices. Using MATLAB's polyeig function works, but I would like to go to larger systems and take advantage of the sparsity of my matrices. I have used a state-space linearization to obtain a generalized eigenvalue problem as follows:

(A - λB) z = 0,

with

A = [K , 0 ; 0, -M],

B = [-C , -M ; -M, 0],

z = [x ; λx]
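In code, the linearization can be assembled while keeping everything sparse (a sketch; it assumes M, C, and K are already sparse n-by-n matrices in the workspace):

```matlab
% Assemble the linearized pencil (A - lambda*B) z = 0 from sparse M, C, K.
% Using sparse zero blocks keeps A and B sparse.
n = size(M, 1);
Z = sparse(n, n);       % n-by-n all-zero sparse block
A = [ K,  Z;  Z, -M];   % A = [K, 0; 0, -M]
B = [-C, -M; -M,  Z];   % B = [-C, -M; -M, 0]
```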

Solving this with MATLAB's eigs function:

lambda = eigs(A,B,10,'sm')

produces the following output:

lambda =
   1.0e+03 *
  -0.2518 - 1.3138i
  -0.2518 + 1.3138i
  -0.4690 - 1.7360i
  -0.4690 + 1.7360i
  -0.4690 - 1.7360i
  -0.4690 + 1.7360i
  -0.5387 - 1.8352i
  -0.5387 + 1.8352i
      NaN +    NaNi
      NaN +    NaNi

The first eight eigenvalues are correct, but the last two apparently failed to converge. Increasing the number of Lanczos basis vectors does not seem to help.

Strangely however, increasing the number of eigenvalues computed (k) allows more and more eigenvalues to converge:

  • k = 10: Number of lambdas converged = 8
  • k = 20: Number of lambdas converged = 8
  • k = 50: Number of lambdas converged = 8
  • k = 100: Number of lambdas converged = 20
  • k = 120: Number of lambdas converged = 80
  • k = 150: Number of lambdas converged = 150
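The counts above come from a sweep like the following (a sketch; eigs returns NaN for eigenvalues that did not converge, so counting the non-NaN entries gives the number converged):

```matlab
% Count converged eigenvalues for increasing k; eigs marks
% non-converged eigenvalues with NaN.
for k = [10 20 50 100 120 150]
    lambda = eigs(A, B, k, 'sm');
    fprintf('k = %3d: Number of lambdas converged = %d\n', ...
            k, nnz(~isnan(lambda)));
end
```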

It may also be worth mentioning that many of the eigenvalues that do not converge with lower values of k appear to be degenerate or at least very closely spaced.

Can anybody think of an explanation for this behavior? If so, is there any way to make all of the eigenvalues converge without making k very large? Thank you!

Dimitri K
    Interesting problem and highly coherent question; I wish every single one was this well-formed! Welcome to Stack Overflow, and kudos:) – Andras Deak -- Слава Україні May 18 '17 at 20:00
    How close to symmetric are `M`, `C`, and `K`? – gnovice May 18 '17 at 20:10
  • The matrices are all symmetric (theoretically as well as numerically). – Dimitri K May 18 '17 at 20:15
    @DimitriK is this question still unsolved and relevant? I'm thinking of posting a small bounty on this question, but only in case that's useful to you. And in case you'll be able to respond to queries in the week while the bounty lasts. – Andras Deak -- Слава Україні Jun 02 '17 at 21:43
  • Sorry for the incomplete answer, but I can't comment yet. Your matrices A and B seem to be symmetric, so according to [this](http://www.netlib.org/lapack/lug/node54.html), your eigenvalues should be real, but the ones you computed are complex. Could you upload your code and matrices? – multigrid101 Dec 23 '17 at 12:16
  • I agree, it's difficult to say what is happening without looking at the matrices. Any way, this question is very interesting! – Sergio Nov 28 '18 at 13:27
  • A condition number of infinity usually happens when you have zero entries. You can directly solve that part to obtain a result. It's messy, however, if you do not handle this automatically. – Jay-Pi Mar 01 '20 at 15:33
  • Sorry for the bad question, but does this [tool](https://www.mathworks.com/matlabcentral/fileexchange/48-lobpcg-m) satisfy your needs? – Jay-Pi Mar 01 '20 at 15:35
  • @multigrid101 I've seen this kind of problem before, and in that case the matrices were complex and symmetric. A property that is hard to exploit. – Thijs Steel Mar 18 '20 at 16:43
  • @DimitriK I know the performance of the QZ algorithm in LAPACK (and consequently the polyeig method) is quite poor. If B is invertible, would computing the dense eigendecomposition of the linearized equation satisfy your needs? – Thijs Steel Mar 18 '20 at 16:49

1 Answer


This is old, but still unanswered. Without the actual matrices it is difficult to be certain. This is my best guess:

eigs calls ARPACK routines. ARPACK uses iterative (Arnoldi) methods to converge to, e.g., the eigenvalues with smallest magnitude (option 'sm'). As with any iterative method, the user can specify options such as the convergence Tolerance and the MaxIterations allowed before the iterative process stops. The NaNs indicate eigenvalues that had not converged when MaxIterations was reached.

An important option for Arnoldi methods is the dimension of the Krylov subspace used to approximate the solutions. This can be specified by the option SubspaceDimension in eigs. The default value is max(2*k,20), so increasing k effectively increases the dimension of the Krylov subspace. If your problem requires a relatively large Krylov subspace for converging some eigenvalues to the desired Tolerance, this could explain why increasing k yields convergence of a larger number of eigenvalues.

To verify whether my guess is correct, you could either relax the Tolerance (1e-6 may be sufficient?) or increase the value of SubspaceDimension while keeping k constant.
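Both checks can be sketched as follows (the name-value option names follow the current eigs interface; older releases instead take an opts struct with fields p and tol, and 'smallestabs' is the newer spelling of 'sm'):

```matlab
% Keep k = 10, but enlarge the Krylov subspace and relax the tolerance.
k = 10;
lambda = eigs(A, B, k, 'smallestabs', ...
              'SubspaceDimension', 100, ...  % default is max(2*k, 20)
              'Tolerance', 1e-6);            % default is machine eps
```

If the eigenvalues that previously came back as NaN now converge, the Krylov subspace dimension (or the tolerance) was indeed the limiting factor.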

Luk