rcond warning differs from computed value of rcond
Suppose I have a matrix M.
M = [1, 2, 3; 3, 4, 5; 5, 6, 7]
M =
1 2 3
3 4 5
5 6 7
This is obviously a singular matrix (each row differs from the previous one by [2 2 2]), so the following warning is no surprise when I try to invert it.
>> inv(M)
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 7.031412e-18.
I can compute the value of RCOND displayed in the warning with the MATLAB command rcond:
>> rcond(M)
ans =
7.0314e-18
So now I want to solve a large system Ax=b, where A is sparse (5185x5185, and unfortunately not something I can easily share), and I get the singular-matrix warning.
>> A\b
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 6.163871e-22.
My question is, why does the following computation not agree with what MATLAB outputs in the warning?
>> rcond(full(A))
ans =
6.7160e-10
Any ideas on how I can isolate this issue? I have a hard time believing this large discrepancy is due to using sparse matrices, but have no other leads on what could be happening.
Unfortunately this forum is flooded with people asking why their specific problem is ill-conditioned so searching did me no good.
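For context, the numbers come from different estimators, and they can be compared directly. The sketch below is illustrative only: A stands in for the actual 5185x5185 system, and the diag(U)-based ratio at the end is an assumption about the kind of cheap estimate the sparse solver reports from its LU factorization (UMFPACK documents a similar estimate), not a guaranteed reproduction of the warning's value.

```matlab
% A few ways to probe the conditioning of a sparse matrix A
% (A here stands for the 5185x5185 system from the question).
r_dense  = rcond(full(A));        % dense LAPACK estimate, as in the question
r_sparse = 1/condest(A);          % 1-norm estimate computed directly on sparse A
% A\b factorizes A with a sparse LU; the RCOND in its warning is a cheap
% estimate derived from that factorization, not the dense LAPACK value,
% which is one reason the two numbers need not agree:
[Lf, Uf, Pf, Qf] = lu(A);
r_lu = min(abs(diag(Uf))) / max(abs(diag(Uf)));   % crude diag(U)-based estimate
fprintf('rcond(full(A))=%g  1/condest(A)=%g  diag(U) ratio=%g\n', ...
        r_dense, r_sparse, r_lu)
```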
6 Comments
Bjorn Gustavsson
on 4 Sep 2020
Ok, that gives you another estimate of the condition number (~1.5e9, the reciprocal of that rcond) - just for laughs and giggles?
When it comes to ill-conditioned linear inverse problems I'm an enthusiastic proponent of explicitly calculating the SVD - just to get the entire singular-value spectrum and both the data-space and model-space eigenvectors. Since I work in physics, the problems I solve always have noise, so some filtering/damping of the solution is typically required, and a standard Tikhonov solution (0th or 2nd order) is my first stab at these types of problems. A useful MATLAB toolbox for this is regtools - it has good documentation too. (In short: it is not the big singular values that are the problem - it is all the very small ones, which lead to noise amplification.)
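The SVD-plus-damping approach Bjorn describes can be sketched as follows (0th-order Tikhonov filtering; A, b, and the damping parameter lambda are placeholders for your problem, and lambda must be chosen to match your noise level):

```matlab
% Sketch: inspect the singular-value spectrum and build a damped solution.
[U, S, V] = svd(full(A), 'econ');   % full spectrum (full() in case A is sparse)
s = diag(S);
semilogy(s), title('singular-value spectrum')  % see where the spectrum collapses
lambda = 1e-6;                       % damping parameter (problem-dependent!)
% Tikhonov filter factors: small singular values are damped instead of
% amplifying noise as they would in the naive x = V*diag(1./s)*U'*b.
f = s.^2 ./ (s.^2 + lambda^2);
x = V * (f .* (U'*b) ./ s);
```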
Accepted Answer
Bruno Luong
on 4 Sep 2020
Edited: Bruno Luong
on 4 Sep 2020
When your matrix is ill-conditioned, every computation related to the subspaces of the smallest eigenvalues is affected by numerical noise and truncation, including the estimation of the smallest eigenvalue itself.
(The largest eigenspace, however, is still numerically stable.)
So when the condition number is large, the estimate of the condition number itself becomes unstable: the condition number is estimated as the ratio of the largest eigenvalue (stable) to the smallest one, which is unstable.
As a rule of thumb, if your true condition number is >= 1e10, the condition-number estimates produced by the various methods become meaningless.
Below that, they are OK.
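Bruno's rule of thumb is easy to check empirically; a sketch using gallery('randsvd', ...) to build test matrices with a prescribed condition number (the specific sizes and targets here are arbitrary choices, not from the thread):

```matlab
% Compare condition-number estimates as the true condition number grows.
% Around and beyond ~1e10 the estimators start to disagree noticeably.
for c = [1e6 1e10 1e16]
    A = gallery('randsvd', 200, c);   % random matrix with cond(A) ~ c
    fprintf('target %8.1e:  cond=%8.1e  1/rcond=%8.1e  condest=%8.1e\n', ...
            c, cond(A), 1/rcond(A), condest(A));
end
```

Note that cond uses the 2-norm while rcond and condest work in the 1-norm, so modest differences are expected even for well-conditioned matrices; it is the wild divergence at large condition numbers that illustrates the point.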
7 Comments
Bjorn Gustavsson
on 4 Sep 2020
Edited: Bjorn Gustavsson
on 4 Sep 2020
To add my recommendation to the argument (fwiw): when you have an ill-conditioned/ill-posed/mixed-determined problem (and that is not so hard-connected to a limiting value of the condition number of the matrix), you should preferably do your own regularization, since you know what regularizations are appropriate for your problem. In my problems smooth solutions are typically appropriate, which I can regularize towards using 2nd-order Tikhonov regularization, adapting the regularization to the estimated noise level of the data. MATLAB's \ or pinv cannot know about these considerations, so I have to do that work myself.
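The 2nd-order Tikhonov regularization mentioned above can be sketched like this (A, b, and lambda are placeholders; the discrete second-derivative operator L penalizes roughness, steering the solution towards smoothness):

```matlab
% Sketch: 2nd-order Tikhonov via a stacked least-squares problem,
%   min ||A*x - b||^2 + lambda^2 * ||L*x||^2
n = size(A, 2);
e = ones(n, 1);
L = spdiags([e -2*e e], 0:2, n-2, n);   % discrete 2nd-derivative operator
lambda = 1e-3;                           % chosen from the estimated noise level
x = [A; lambda*L] \ [b; zeros(n-2, 1)];  % backslash solves the stacked system
```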
Bruno Luong
on 4 Sep 2020
Unashamed self-promotion: I have created a simple pseudoinverse class based on Tikhonov regularization https://www.mathworks.com/matlabcentral/fileexchange/25453-pseudo-inverse
Users still have to decide how strong a regularization they want. Selecting the right regularization is quite hard to make automatic and robust for all kinds of problems.