eigs on a Function Handle Does Not Converge

My problem looks like this:
F is a linear function handle whose largest real eigenvalues I need to find. It essentially computes A*x, where constructing A explicitly would require an unpleasant inversion of a huge matrix (tens of thousands by tens of thousands).
So I called eigs on the handle: it did not converge when asked for the 10 'largestreal' eigenvalues, but it did converge for the 10 'smallestreal' ones. I need both, so is there any reason this might happen?
Also, what is the threshold behind the curtains that determines whether eigs declares convergence?
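For reference, a minimal sketch of the kind of call involved (Afun and n are placeholders for my actual operator handle and problem size):

```matlab
n = 30000;   % problem dimension (placeholder)
k = 10;      % number of eigenvalues wanted

% Afun(x) returns A*x without ever forming A explicitly.
d_large = eigs(@Afun, n, k, 'largestreal');   % does not converge for me
d_small = eigs(@Afun, n, k, 'smallestreal');  % converges fine
```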

8 Comments

Some say that eigs is not robust; most of the time whether I get a robust result depends on whether I specify 'largestreal' or 'smallestreal'. When does eigs decide that it isn't going to give me a result?
Any number of things could be wrong with your Afun() operator. Have you verified that it is linear, for example?
Thanks for the comment! I saw it 4 minutes ago, and I used those 4 minutes to verify that the function is indeed linear. Any other thoughts?
eigs converges rather poorly when the eigenvalues are very close together or repeated.
The other factor is the angle(s) between the eigen-subspaces: if they are close to 90 degrees, the problem is easier for eigs to solve numerically, and the opposite is naturally true (small angles -> more challenging).
Maybe your largest eigenvalues happen to be in the difficult case while the other extreme of the spectrum is not.
@bruno
Thanks for the comment! Is it possible that the 'largestreal'/'smallestreal' eigenvalues are very close to zero, and that this is what caused the failure to converge?
I ask because, from first principles, analysis of the structure of the Afun handle suggests this could be the case.
It could be. It depends on the numerical method used. Some methods use deflation combined with inverse power iteration, and the rate of convergence depends on the factors I mentioned earlier. Some methods are also specific to the form of your matrix (symmetric, Hermitian, etc.) and may exploit a particular factorization property to ease the task. So if your operator is self-adjoint, don't forget to specify the corresponding flag so that eigs can select the best method.
Those are general comments; it is difficult to answer without knowing more about your linear operator and the exact methods used by eigs.
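For example (a sketch, assuming the Afun handle and size n from the question; 'IsFunctionSymmetric' is the eigs name-value flag for self-adjoint operators given as function handles):

```matlab
% Telling eigs the operator is symmetric lets it use Lanczos-type
% machinery instead of the general (nonsymmetric) Arnoldi iteration.
d = eigs(@Afun, n, 10, 'largestreal', 'IsFunctionSymmetric', true);
```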
Thanks for taking the time! I'd specify more about my linear operator, but its structure isn't quite so simple (just an excuse for my inability to wield the more powerful mathematics that many wizards in the field do)... but nevertheless, thank you!
It doesn't matter what you do behind the scenes, as long as it represents a linear operator.
Sometimes you have to do some extra specific work to understand your operator, and you might even write your own eigs() with a more suitable method.
In my youth I solved a big linearized second-order Navier-Stokes system with more than 200k unknowns and fed it through a Lanczos method to pull out the eigenmodes of the system, in order to study the sensitivity of the global climate. It worked just fine, and that was 30 years ago, when no one cared about this problem.


 Accepted Answer

Hi Sam,
The most likely reason for the convergence problems is that the eigenvalues are close together (or even repeated). The convergence speed of the internal method (for the 'largestabs' case) depends on the ratio between the smallest eigenvalue you asked for and the largest eigenvalue you did not ask for.
Because of this, increasing the number of eigenvalues you are asking for (or just the 'SubspaceDimension') can help with convergence.
Another factor is that the 'largestreal' and 'smallestreal' options have some issues compared to 'largestabs' and 'smallestabs'. The inner iteration tends to converge toward the eigenvalues of largest absolute value, and on every outer iteration it has to be redirected toward the largest or smallest real part instead. So if the spectrum of your matrix extends further along the imaginary axis than along the real axis, that could also explain the convergence problems. Unfortunately, there's not much I can think of to improve that case.
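A sketch of both suggestions, assuming the Afun handle and size n from the question (the shift value s is a made-up, problem-dependent choice):

```matlab
% Options that often help convergence:
d = eigs(@Afun, n, 10, 'largestreal', ...
         'SubspaceDimension', 60, ...   % default is max(2*k, 20)
         'MaxIterations', 1000, ...     % default is 300
         'Tolerance', 1e-10);

% If the spectrum is known to lie (mostly) on the real axis, a shift can
% turn the problem into a better-behaved 'largestabs' one: the eigenvalues
% of A + s*I are those of A shifted by s, so a large enough real s makes
% the largest-real eigenvalues also the largest in magnitude.
s = 100;                                % placeholder shift (assumption)
Ashift = @(x) Afun(x) + s*x;
d = eigs(Ashift, n, 10, 'largestabs') - s;
```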

3 Comments

I expect some, if not all, of these eigenvalues to be zero. Is it possible that this is the cause? I recall seeing eigenvalues that were zero up to machine precision in other problems, but here eigs just isn't giving me any result at all. Thoughts? Thanks!
If you know 0 is an eigenvalue, why not remove the kernel, i.e. compute the eigenvalues of the operator after projecting orthogonally away from the kernel? Use the function null to get a basis of the kernel.
It is impossible for all the eigenvalues to be 0 unless your matrix is 0 (or at least nilpotent).
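A sketch of this deflation, assuming an orthonormal kernel basis K is available (e.g. from K = null(A) when A can be formed explicitly; Afun and n as in the question):

```matlab
% Project onto the orthogonal complement of the kernel, then apply A.
P    = @(x) x - K*(K'*x);      % orthogonal projector away from the kernel
Adef = @(x) P(Afun(P(x)));     % deflated operator: kernel removed
d = eigs(Adef, n, 10, 'largestreal');
```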
Yes, good idea, but I was in fact trying to prove that 0 is indeed an eigenvalue. I would lose that information once I project it out.




Asked: 12 Oct 2018
Commented: 23 Oct 2018
