lsqlin unexpected results !!!

80 views (last 30 days)
SH
SH on 8 Apr 2024 at 12:11
Edited: Bruno Luong on 9 Apr 2024 at 12:01
I want to know why there is such a significant difference between the results of lsqlin and mldivide. Because ATA and ATB are in the range of 1e12 to 1e13, I have also tried various ways of scaling ATA and ATB, but lsqlin consistently produces unexpected results. With lsqlin the result is [15.5316, -29.8755, 15.3439; 14.4971, -29.1466, 15.6495; 13.0070, -27.5122, 15.5052], and it keeps changing depending on the scaling. With mldivide the result is [1.3522, 0.1975, -0.0758; -0.0179, 1.6382, -0.1352; 0.1290, -0.1992, 1.5006]. Although only the lsqlin result satisfies the condition that the row sums equal 1, the mldivide result looks much more like what I expect. Please help!
function ccm = colorCorrectionMatrix(inputImages, refImages)
% Build the normal equations ATA = A'*A and ATB = A'*B from the pixel data
ATA = inputPixelMatrix(inputImages);
ATB = refPixelMatrix(inputImages, refImages);
% Scale both sides by the same factor to keep the numbers manageable
ATA = ATA / 1e10;
ATB = ATB / 1e10;
% Equality constraints: each row of the 3x3 CCM must sum to 1
Aeq = kron(eye(3), [1, 1, 1]);
beq = ones(3, 1);
options = optimoptions('lsqlin', 'Display', 'iter', 'MaxIterations', 1e5, 'ConstraintTolerance', 0.1);
ccm_vector = lsqlin(ATA, ATB, [], [], Aeq, beq, [], [], [], options);
% ccm_vector = ATA \ ATB;   % unconstrained alternative (mldivide)
ccm = reshape(ccm_vector, [3, 3]).';
end
  4 Comments
SH
SH on 9 Apr 2024 at 6:15
Yesss that's correct.
The reason why I'm solving the least squares problem in the form of ATAx = ATB rather than directly using Ax = B is because the size of matrix A is too large.
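For what it's worth, the normal equations can be accumulated chunk by chunk so the full A never has to exist in memory. A minimal sketch, assuming pixelChunks and refChunks are hypothetical cell arrays of Npix-by-3 RGB blocks (made-up names, not the actual inputPixelMatrix/refPixelMatrix functions):
ATA3 = zeros(3);                      % accumulates A'*A for one channel
ATB3 = zeros(3);                      % accumulates A'*B, one column per output channel
for k = 1:numel(pixelChunks)
    P = double(pixelChunks{k});       % Npix-by-3 input pixels
    R = double(refChunks{k});         % Npix-by-3 reference pixels
    ATA3 = ATA3 + P.'*P;
    ATB3 = ATB3 + P.'*R;
end
ATA = kron(eye(3), ATA3);             % 9x9 block-diagonal form used in the function
ATB = ATB3(:);                        % stacking must match how ccm_vector is reshaped later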
Bruno Luong
Bruno Luong on 9 Apr 2024 at 8:03
Edited: Bruno Luong on 9 Apr 2024 at 8:13
To me it sounds like your constraint of row sum = 1 is an unrealistic assumption. It assumes the luminance is 100% recovered by RGB in both images, which is usually not true, since the CMOS sensors have different sensitivities and spectral responses, let alone whatever other processing is applied to the images.
I'm not surprised that the constraints introduce some odd artifacts into your transformation matrix.


Answers (1)

John D'Errico
John D'Errico on 9 Apr 2024 at 11:25
Unfortunately, you would have been better off using a direct least squares formulation. That would let us see how well the calibration works on your original data, and then we could see the true cost of applying the row-sum constraints. Could you have done that? YES. The matrix is not in fact too large, just too large to handle the way you may have tried to solve it. We could probably look into that at some point, but we are stuck here for the moment.
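(For reference, the direct formulation would look roughly like the sketch below, assuming P is an Npix-by-3 matrix of input pixels and R the matching Npix-by-3 reference pixels; those names are hypothetical, since we never see the raw data. Even millions of pixels give a design matrix that is only 3*Npix-by-9, which lsqlin and backslash can handle directly; if memory is tight, subsample the pixels.)
C = kron(eye(3), P);                        % (3*Npix)-by-9 block-diagonal design matrix
d = R(:);                                   % stacked reference channels
Aeq = kron(eye(3), [1 1 1]);                % row-sum-equals-one constraints
beq = ones(3, 1);
xDirect = lsqlin(C, d, [], [], Aeq, beq);   % constrained fit on the raw pixels
x0direct = C \ d;                           % unconstrained fit, for comparison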
You have constructed a matrix problem with 9 unknowns. As you have formulated it, you have a linear system of 9 equations, with 9 unknowns. I'll just give it here, scaling the matrix by the (1,1) element. This is completely arbitrary, as long as we scale both arrays by the same number.
ATA = [9011775676143.00 8793032645861.00 8854401712760.00 0 0 0 0 0 0
8793032645861.00 9012790632686.00 9347659231395.00 0 0 0 0 0 0
8854401712760.00 9347659231395.00 10149334263142.0 0 0 0 0 0 0
0 0 0 9011775676143.00 8793032645861.00 8854401712760.00 0 0 0
0 0 0 8793032645861.00 9012790632686.00 9347659231395.00 0 0 0
0 0 0 8854401712760.00 9347659231395.00 10149334263142.00 0 0 0
0 0 0 0 0 0 9011775676143.00 8793032645861.00 8854401712760.00
0 0 0 0 0 0 8793032645861.00 9012790632686.00 9347659231395.00
0 0 0 0 0 0 8854401712760.00 9347659231395.00 10149334263142.0];
ATB = [13251013080473.0
12961245252790.0
13049599009420.0
13046306245612.0
13343523674494.0
13782602026559.0
12697774297101.0
13365960132375.0
14510162005994.0]/ATA(1,1)
ATB = 9x1
    1.4704
    1.4383
    1.4481
    1.4477
    1.4807
    1.5294
    1.4090
    1.4832
    1.6101
ATA = ATA/ATA(1,1)
ATA = 9x9
    1.0000    0.9757    0.9825         0         0         0         0         0         0
    0.9757    1.0001    1.0373         0         0         0         0         0         0
    0.9825    1.0373    1.1262         0         0         0         0         0         0
         0         0         0    1.0000    0.9757    0.9825         0         0         0
         0         0         0    0.9757    1.0001    1.0373         0         0         0
         0         0         0    0.9825    1.0373    1.1262         0         0         0
         0         0         0         0         0         0    1.0000    0.9757    0.9825
         0         0         0         0         0         0    0.9757    1.0001    1.0373
         0         0         0         0         0         0    0.9825    1.0373    1.1262
So we have now normalized both arrays to be reasonable things. This allows lsqlin to work in comfort. It also points out that you actually have created three separate least squares problems. You just solved them together, and did a lot of work to do so. There are actually three separate and separable problems in this. Regardless...
The row sum constraints lie in the matrices Aeq and Beq.
Aeq = kron(eye(3), [1, 1, 1])
Aeq = 3x9
     1     1     1     0     0     0     0     0     0
     0     0     0     1     1     1     0     0     0
     0     0     0     0     0     0     1     1     1
Beq = ones(3, 1) ;
As we see, Aeq is also of the same form. It acts separably on the three channels.
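To make that separability concrete, here is a small sketch: the 9x9 system is block diagonal with three identical 3x3 blocks, so each channel can be solved on its own and the answers stacked. This reproduces the ATA\ATB solve shown next.
ATA3 = ATA(1:3, 1:3);                 % the repeated 3x3 block
X0sep = zeros(9, 1);
for ch = 1:3
    rows = (ch-1)*3 + (1:3);
    X0sep(rows) = ATA3 \ ATB(rows);   % solve one channel at a time
end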
On to the real meat of the problem now, which is purely linear algebra. If we try to solve the matrix problem ATA*X = ATB, there is an EXACT solution to this linear system, since ATA is non-singular.
rank(ATA)
ans = 9
cond(ATA)
ans = 466.9997
So ATA has rank 9. And that means there is a unique solution, generated by mldivide. (commonly known as backslash.)
X0 = ATA\ATB
X0 = 9x1
    1.3522
    0.1975
   -0.0758
   -0.0179
    1.6382
   -0.1352
    0.1290
   -0.1992
    1.5006
How well did it do? Compare the two vectors. The third column is the difference, the residual vector. As you can see, it is essentially perfect.
table(ATB,ATA*X0,ATA*X0 - ATB,VariableNames = {'Goal','Solution','Error'})
ans = 9x3 table
     Goal     Solution       Error
    ______    ________    __________
    1.4704     1.4704              0
    1.4383     1.4383              0
    1.4481     1.4481              0
    1.4477     1.4477              0
    1.4807     1.4807     2.2204e-16
    1.5294     1.5294     2.2204e-16
    1.409      1.409               0
    1.4832     1.4832              0
    1.6101     1.6101              0
So the result is exact, at least exact to within floating point trash in double precision.
But when you try to enforce the inconsistent row sum constraints, this forces the transformation to now be rather nasty. And it is no longer even a good approximation.
Now solve it using lsqlin. This solution is also unique. There is EXACTLY one vector that solves the constrained problem while also trying to solve the matrix problem. But it will not be a good solution, and it is that crazy looking solution you saw. This is not an LSQLIN problem. I could have used other tools and gotten exactly the same result. And there is no reason to fuss with the options you put into LSQLIN; they are a waste of space here.
Xcon = lsqlin(ATA,ATB,[],[],Aeq,Beq)
Minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Xcon = 9x1
   15.5315
  -29.8755
   15.3439
   14.4971
  -29.1466
   15.6495
   13.0070
  -27.5122
   15.5052
The solution does indeed perfectly satisfy the constraints.
Aeq*Xcon
ans = 3x1
    1.0000
    1.0000
    1.0000
But it does not solve the linear system exactly. You can think of this as the cost of requiring those row sums to be unity.
table(ATB,ATA*Xcon,ATA*Xcon - ATB,VariableNames = {'Goal','Solution','Error'})
ans = 9x3 table
     Goal     Solution      Error
    ______    ________    _________
    1.4704     1.4572     -0.013196
    1.4383     1.1915      -0.24673
    1.4481     1.5521       0.10407
    1.4477     1.4342     -0.013509
    1.4807     1.2281      -0.25257
    1.5294     1.6359       0.10653
    1.409      1.397      -0.011985
    1.4832     1.2591      -0.22408
    1.6101     1.7047      0.094519
And nothing you can do will force lsqlin to do what you want, that is, solve the constrained problem and still return a transformation matrix that looks like anything you are used to seeing. The reason is that the row-sum constraint is inconsistent with the regression problem you formulated.
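(Incidentally, to back up the claim that other tools give exactly the same answer: the equality-constrained least squares problem can be solved with plain backslash through the KKT (Lagrange multiplier) system, using the same normalized ATA, ATB, Aeq and Beq from above. A sketch; the recovered x should match Xcon up to floating point.)
H   = ATA.'*ATA;                      % Hessian of ||ATA*x - ATB||^2
f   = ATA.'*ATB;
KKT = [H, Aeq.'; Aeq, zeros(3)];      % stationarity conditions plus the equality constraints
rhs = [f; Beq];
sol = KKT \ rhs;
Xkkt = sol(1:9)                       % same constrained solution as lsqlin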
  1 Comment
Bruno Luong
Bruno Luong on 9 Apr 2024 at 11:59
Edited: Bruno Luong on 9 Apr 2024 at 12:01
How well does the unconstrained solution satisfy the constraints?
>> X03x3 = reshape(X0,3,3)
X03x3 =
1.3522 -0.0179 0.1290
0.1975 1.6382 -0.1992
-0.0758 -0.1352 1.5006
>> sum(X03x3,1)
ans =
1.4739 1.4851 1.4304
>> Aeq*X0
ans =
1.4739
1.4851
1.4304
Not well, IMO. Oddly or not, the sums are almost equal and about 45% larger than the constraint value (that's a lot).
The "luminances" are clearly not the same.
Negative coefficients in X0 mean that colors need to be subtracted to match colors between the images.

