
[Image Alignment] Align two images taken under different zooming and lighting conditions
I have two images of the same object taken under different zooming and lighting conditions (img1 and img2). I would like to apply a transformation matrix to img2 so that the result is aligned with img1.
img1 (left) / img2 (right)

- If I place the two images on top of each other, I get the image on the left, which shows they are not aligned.
- If I apply the transformation matrix A = [0.52 0 233; 0 1 0; 0 0 1] to img2 and then combine them, I get the image on the right, which shows rough alignment.
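For reference, a matrix like the hand-tuned A above can be applied with imwarp; here is a minimal sketch, assuming img1 and img2 are already loaded (note that affine2d expects the transposed convention, [x y 1] = [u v 1]*T):

```matlab
% Apply the hand-tuned matrix to img2 (sketch; assumes img1/img2 are loaded).
A = [0.52 0 233; 0 1 0; 0 0 1];
tform = affine2d(A');                    % transpose for affine2d's convention
outView = imref2d(size(img1,1:2));       % align the output grid with img1
img2w = imwarp(img2, tform, 'OutputView', outView);
imshowpair(img1, img2w)                  % visual check of the overlay
```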

I am trying to figure out how to find that transformation matrix A. It does not need to be exact; close is good enough. I found A by manual trial and error.
Intensity-based image alignment methods, I think, cannot be applied here because of the obvious difference in intensity.
The feature-based alignment methods I tried include FAST, MinEigen, Harris, SIFT, SURF, KAZE, BRISK, and ORB (link here), but so far nothing has been fruitful.
Because the y-axis is fixed, only the x-axis needs to be adapted (scaleX and translateX).
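Given that constraint, once you have any set of corresponding points, scaleX and translateX can be recovered directly by least squares on the x-coordinates alone. A sketch, where fixedPts and movingPts are placeholder names for N-by-2 [x y] arrays of matched points (e.g. from matchFeatures):

```matlab
% Sketch: fit xf ≈ s*xm + t in the least-squares sense using only the
% x-coordinates of matched point pairs (fixedPts/movingPts are placeholders).
xm = movingPts(:,1);
xf = fixedPts(:,1);
p = [xm ones(size(xm))] \ xf;      % p = [s; t]
A = [p(1) 0 p(2); 0 1 0; 0 0 1];   % same form as the hand-tuned matrix
```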
Any ideas would be appreciated.
Thanks.
Accepted Answer
Cris LaPierre
on 9 May 2025
img1 = imread('img1.jpg');
img2 = imread('img2.jpg');
[MOVINGREG] = registerImages(img2,img1)
imshowpair(img1,MOVINGREG.RegisteredImage)
function [MOVINGREG] = registerImages(MOVING,FIXED)
%registerImages Register grayscale images using auto-generated code from Registration Estimator app.
% [MOVINGREG] = registerImages(MOVING,FIXED) Register grayscale images
% MOVING and FIXED using auto-generated code from the Registration
% Estimator app. The values for all registration parameters were set
% interactively in the app and result in the registered image stored in the
% structure array MOVINGREG.
% Auto-generated by registrationEstimator app on 09-May-2025
%-----------------------------------------------------------
% Feature-based techniques require license to Computer Vision Toolbox
checkLicense()
% Convert RGB images to grayscale
FIXED = im2gray(FIXED);
MOVINGRGB = MOVING;
MOVING = im2gray(MOVING);
% Default spatial referencing objects
fixedRefObj = imref2d(size(FIXED));
movingRefObj = imref2d(size(MOVING));
% Detect SURF features
fixedPoints = detectSURFFeatures(FIXED,'MetricThreshold',649.525070,'NumOctaves',3,'NumScaleLevels',5);
movingPoints = detectSURFFeatures(MOVING,'MetricThreshold',649.525070,'NumOctaves',3,'NumScaleLevels',5);
% Extract features
[fixedFeatures,fixedValidPoints] = extractFeatures(FIXED,fixedPoints,'Upright',true);
[movingFeatures,movingValidPoints] = extractFeatures(MOVING,movingPoints,'Upright',true);
% Match features
indexPairs = matchFeatures(fixedFeatures,movingFeatures,'MatchThreshold',68.476562,'MaxRatio',0.684766);
fixedMatchedPoints = fixedValidPoints(indexPairs(:,1));
movingMatchedPoints = movingValidPoints(indexPairs(:,2));
MOVINGREG.FixedMatchedFeatures = fixedMatchedPoints;
MOVINGREG.MovingMatchedFeatures = movingMatchedPoints;
% Apply transformation - Results may not be identical between runs because of the randomized nature of the algorithm
tform = estimateGeometricTransform2D(movingMatchedPoints,fixedMatchedPoints,'affine');
MOVINGREG.Transformation = tform;
MOVINGREG.RegisteredImage = imwarp(MOVINGRGB, movingRefObj, tform, 'OutputView', fixedRefObj, 'SmoothEdges', true);
% Store spatial referencing object
MOVINGREG.SpatialRefObj = fixedRefObj;
end
function checkLicense()
% Check for license to Computer Vision Toolbox
CVTStatus = license('test','Video_and_Image_Blockset');
if ~CVTStatus
error(message('images:imageRegistration:CVTRequired'));
end
end
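If you want to enforce the x-only structure described in the question, one option is to project the estimated affine transform onto that family after the fact. A sketch, assuming MOVINGREG was returned by the registerImages call above (estimateGeometricTransform2D returns an affine2d, whose T uses the postmultiply convention [x y 1] = [u v 1]*T):

```matlab
% Sketch: keep only scaleX and translateX from the estimated transform.
T = MOVINGREG.Transformation.T;        % affine2d postmultiply matrix
Tx = [T(1,1) 0 0; 0 1 0; T(3,1) 0 1]; % zero out everything except sx and tx
tformX = affine2d(Tx);
img2x = imwarp(img2, tformX, 'OutputView', imref2d(size(im2gray(img1))));
imshowpair(img1, img2x)
```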
Here's a screenshot of the app interface so you can see the approximate settings I used.
