Combine multiple objects to create Super Sampled representation

Hi, I have an image consisting of holes and want to create a "composite" representation by combining all of them.
I believe the idea is that because the centroid of each one jitters (i.e. it is not in exactly the same location, as seen by the red dots), it's possible to use this to create a super-resolved reconstruction. As far as I understand, I can for example consider 1/2-pixel steps and hence create an 18x18 (sub-pixel) image from this 9x9 pixel image. So I need to start at the centroid, step 1/2 pixel away, record the actual real pixel value, and then populate an 18x18 array with these values. I do this for all 3 and then I can, for example, take the median on a per-pixel basis.
The problem is, considering the 1st image, I'm not sure how to get the 1/2-pixel values from the centroid to then fill in the 18x18 array.
Any pointers would be appreciated.
Thanks
Jason

12 Comments

I thought something like this would be useful:
R2=imresize(ROI,[2*d 2*d],'nearest');
where d is my original number of rows and columns,
...but it doesn't reference the actual sub-pixel centroid.
  1. Please attach the individual images without the red dots, preferably as MATLAB variables in a .mat file
  2. Is the idea that the 3 thumbnails are all sub-pixel translations of the same object?
  3. Does the intensity distribution follow some known model? Are they Gaussian blobs, for example?
Hi Matt, here are some .mat files. (I also include the main 800x800 pixel one showing many objects)
  1. Is the idea that the 3 thumbnails are all sub-pixel translations of the same object? Yes
  2. Does the intensity distribution follow some known model? Are they Gaussian blobs, for example? They can be approximated as Gaussian. Obviously there is shot noise there too.
Thanks
Jason
Thanks, but could you put all the variables in one .mat file? It is less tedious to download.
Sorry Matt, I don't follow - what variables? Do you mean the individual bead images? I'm not sure how to combine them into one .mat file. Could you download say 3 of them and use that?
So basically you want to find the σ parameter of the Gaussian based on all the patches simultaneously? Once you know σ, you can generate a Gaussian blob of any size and resolution that you wish.
To save multiple variables to a .mat file:
a=1; b=2; c=3;
save matfilename a b c
Hi Matt, so eventually I will need to include distorted spots, so definitely not Gaussian, and I want to move away from that approach. What I was hoping to do is:
1: Obtain the sub-pixel centroid (which I can do); for a given bead, its centroid is (xc, yc)
2: For simplicity, assume 2x resolution (later I will want to go to 4x). So if my original isolated bead image (IM1) is say 9x9 pixels, create a blank 18x18 matrix, call it IM2
3: Starting on the original image at the centroid location, obtain the pixel value at each 1/2-pixel distance and populate the 18x18 matrix with these values.
4: Eventually I will have 100 such 18x18 matrices; I will take the median pixel value (pixel-wise) to create my composite image, realising that its effective pixel size is now the original pixel size/2.
It's mainly step 3 that I'm struggling with.
Hopefully this graphic illustrates what I want to do (albeit in 1D)
Thanks for any help
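Since the graphic is 1-D, here is a minimal 1-D sketch of step 3 (all values are placeholders, not the actual data): sample a 9-pixel row at half-pixel steps measured from its sub-pixel centroid, letting interp1 do the sub-pixel lookup.

```matlab
% 1-D sketch: super-sample a row at half-pixel steps from its centroid.
row = [0 1 4 9 12 9 4 1 0];       % example 9-pixel intensity profile
xc  = 5.2;                        % example sub-pixel centroid of the row
xq  = xc + (-4.25:0.5:4.25);      % 18 half-pixel positions about the centroid
rowSR = interp1(1:9, row, xq, 'linear', 0);  % 0 outside the original row
```

The same idea extends to 2-D with interp2 and a meshgrid of half-pixel offsets.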
But the centroids are not exactly 1/2 pixel apart. If I understand correctly, there is nothing known or guaranteed about the relative separation of the blobs. Therefore, if you interweave all the pixel values from all of the images, you will have a scattered (i.e., non-gridded) data set. You can interpolate the scattered data using griddata or scatteredInterpolant at the half-pixel locations if you wish. However, you could also have done that interpolation with just one of the images. It's not clear what advantage you hope to get from the combined data. In both cases, interpolation is required, and the resulting noise will be the same.
On the other hand, if there is some parametric surface you are trying to fit, then the surface-fitting process would benefit from having more data - it will reduce the noise in the fit.
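The scattered-data route Matt describes could be sketched like this (the patches, centroids, and grid limits below are all illustrative placeholders): shift every patch's pixel coordinates so its centroid sits at the origin, pool all samples, and interpolate once onto a common half-pixel grid.

```matlab
% Pool centroid-referenced samples from several jittered patches, then
% interpolate the scattered set onto one half-pixel grid.
IMs   = {rand(9,9), rand(9,9), rand(9,9)};   % placeholder bead patches
cents = [5.1 4.9; 4.8 5.2; 5.0 5.0];         % placeholder [xc yc] centroids
X = []; Y = []; V = [];
for k = 1:numel(IMs)
    [xg,yg] = meshgrid(1:size(IMs{k},2), 1:size(IMs{k},1));
    X = [X; xg(:)-cents(k,1)];               % coordinates relative to centroid
    Y = [Y; yg(:)-cents(k,2)];
    V = [V; IMs{k}(:)];
end
F  = scatteredInterpolant(X, Y, V, 'natural', 'none');
[xq,yq] = meshgrid(-4:0.5:4);                % common half-pixel grid
SR = F(xq, yq);                              % combined super-sampled estimate
```

With 'none' as the extrapolation method, query points outside the data hull come back as NaN, which keeps edge artefacts out of a later per-pixel median.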
Hi Matt, thanks for your thoughts. As I understand this is very similar to the slanted edge MTF where the slant gives you the ability to super resolve.
Coming back to the images, I have about 100 representations of the same object, but the centroid jitters around, so each capture samples different sub-pixel parts of the pixel it lands on.
So this is as far as I have got:
ax=app.UIAxes2;
I=getimage(ax);
[sy,sx]=size(I);
[xc,yc] = getCentroid2(app,I);
%Create new "super resolved" array with twice the pixel density
%in each dimension
SR=zeros(2*sy+1,2*sx+1);
% First, fill the centre value (the centre of the (2*sy+1)x(2*sx+1) array)
SR(sy+1,sx+1)=I(round(yc),round(xc));
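One way the fill could be finished (a standalone sketch with placeholder data; interp2 does the sub-pixel reads and returns 0 outside the patch):

```matlab
% Fill the whole (2*sy+1)x(2*sx+1) array by stepping out from the
% centroid in half-pixel increments and interpolating the original image.
I = rand(9,9);               % placeholder for the bead ROI from the app
[sy,sx] = size(I);
xc = 5.2; yc = 4.7;          % example sub-pixel centroid
[jj,ii] = meshgrid(1:2*sx+1, 1:2*sy+1);
xq = xc + (jj - (sx+1))/2;   % half-pixel steps from the centroid (x)
yq = yc + (ii - (sy+1))/2;   % half-pixel steps from the centroid (y)
SR = interp2(I, xq, yq, 'linear', 0);  % centre element lands exactly on (xc,yc)
```

Because every patch's SR array is referenced to its own centroid, the 100 arrays line up and a per-pixel median can be taken directly on the stack.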
Just to add
"If I understand, there is nothing known or guaranteed about the relative separation of thhe blobs. Therefore, if you interweave all the pixel values from all of the images, you will have some scattered (i.e., non-gridded) data set. "
So I intend to create an ROI around each blob first, not use the whole image with many blobs on it, i.e. treat each blob on its own. I would then save each super-resolved patch in a stack and perform stats on them at the end.
I know I.A has mentioned something similar before, but I can't find it. I'm sure I've seen him say that if you take multiple images of the same object with sub-pixel stage movement, you can get the same effect.
Trying out Matt's suggestion of griddata
% Use griddata to interpolate sub-pixel values, but use
%"nearest"
[X,Y]=meshgrid(1:sx,1:sy);  % griddata needs a coordinate for every sample
v=I;
%Create query points at half-pixel spacing
[xq,yq] = meshgrid(1:0.5:sx,1:0.5:sy);
z1 = griddata(X(:),Y(:),v(:),xq,yq,'nearest');
figure
plot3(X(:),Y(:),v(:),'mo');
hold on
s=mesh(xq,yq,z1);
s.FaceColor = 'flat';
s.EdgeColor='k';
view(0,90)
Seems to be a step forward, but still not there.
  1. Not sure why the original image locations (magenta) are not showing correctly
  2. Nor is it "referenced" to the real centroid (red spot)
"Hi Matt, thanks for your thoughts. As I understand this is very similar to the slanted edge MTF where the slant gives you the ability to super resolve."
But in that scenario, people are normally curve fitting. They assume that the LSF is a Gaussian lobe or a spline or something like that. That's why I asked you to begin with whether there was a parametric surface model that the samples are supposed to follow.
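As an illustration of the parametric route Matt is describing, here is a minimal sketch that fits a circular Gaussian blob to one noisy patch with fminsearch (the data, starting point, and noise level are all invented for the example):

```matlab
% Fit a circular Gaussian model [amplitude, xc, yc, sigma] to a 9x9 patch.
[xg,yg] = meshgrid(1:9);
truth = 10*exp(-((xg-5.2).^2 + (yg-4.8).^2)/(2*1.5^2));
I = truth + 0.1*randn(9);                     % synthetic noisy measurement
model = @(p) p(1)*exp(-((xg-p(2)).^2 + (yg-p(3)).^2)/(2*p(4)^2));
cost  = @(p) sum((model(p) - I).^2, 'all');   % least-squares objective
p = fminsearch(cost, [max(I(:)) 5 5 2]);      % fitted [A xc yc sigma]
```

With multiple jittered patches, the cost function would simply sum the residuals over all of them (each with its own centroid offset), and the fitted profile can then be rendered at any resolution.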


 Accepted Answer

Here's an algebraic solution in which we model the blobs as circularly symmetric with a radial profile parametrized by cubic splines. The code assumes NxN image data with N odd and requires that you download func2mat from,
IM=load('CoarseImages').IM;
K=numel(IM);
N=length(IM{1});
Rad=(N-1)/2; %Lobe radius
obj=lobeFitter(Rad,Rad+1);
%Build equation matrix, A
tic;
A=cell(K,1);
c0=ones(1,obj.ncoeff);
for k=1:K
t=obj.displacement(IM{k});
A{k}=func2mat(@(c) obj.radialInterp(c,N,t),c0,'doSparse',0);
end
A=cell2mat(A); %final equation matrix
c=A\reshape([IM{:}],[],1); %compute spline coefficients, c, algebraically
toc
Elapsed time is 0.110118 seconds.
IMsuper = obj.getLobe(c,N,2); %Super-res image upsampled by 2
tiledlayout(1,2);
nexttile, imshow(IM{1},[]);
nexttile, imshow(IMsuper,[]);
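The key step is the backslash solve: stacking all K patches makes the linear system over-determined, so c = A\b returns the least-squares estimate of the spline coefficients. A toy version of that idea, with invented dimensions and a known coefficient vector:

```matlab
% Over-determined linear least squares: more equations than unknowns,
% solved in one step with the backslash operator.
A = randn(50, 5);              % 50 stacked equations, 5 unknown coefficients
cTrue = (1:5)';
b = A*cTrue + 0.01*randn(50,1);  % noisy "measurements"
c = A\b;                         % least-squares estimate, close to cTrue
```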

5 Comments

Here's a variant that uses lsqlin to constrain the estimated blob profile to be monotonically decreasing. This regularizes the estimation, allowing you to use more spline knots (and hence more flexible splines).
IM=load('CoarseImages').IM;
K=numel(IM);
N=length(IM{1});
Rad=(N-1)/2; %Lobe radius
obj=lobeFitter(Rad,2*Rad); %twice as many spline knots as before
%Build equation matrix, A
tic;
A=cell(K,1);
c0=ones(1,obj.ncoeff);
for k=1:K
t=obj.displacement(IM{k});
A{k}=func2mat(@(c) obj.radialInterp(c,N,t),c0,'doSparse',0);
end
A=cell2mat(A); %final equation matrix
Q=diff(func2mat(@(c) obj.getProfile(c,N,20),c0,'doSparse',0)); %for monotonicity constraints
c=lsqlin(A, reshape([IM{:}],[],1) , Q, zeros(height(Q),1) ); %compute spline coefficients, c
Minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
toc
Elapsed time is 0.370543 seconds.
IMsuper = obj.getLobe(c,N,20); %Super-res image upsampled by 20
tiledlayout(1,3);
nexttile, imshow(IM{1},[]); title 'Measured'
nexttile, imshow(IMsuper,[]); title 'Super-resolved (20x)'
nexttile, plot(IMsuper(ceil(end/2),:)); axis square; title 'Radial Profile'
Thank you Matt, this is awesome and I will accept your answer.
As an aside, I did find another way using Zero Padding in the frequency domain:
function [Isuper,sy,sx] = SuperSample(app,I,upf)
%uses zero padding in the FFT image
% I = Image, upf = upsample factor
f = fftshift(fft2(I)); [sy,sx]=size(I);
fabs=abs(f);
%upf=5;
D = padarray(fabs,[sy*upf-floor(sy/2) sx*upf-floor(sx/2)],0,'both');
Isuper=abs(fftshift(ifft2(D)));
[sy,sx]=size(Isuper);
end
But there doesn't appear to be any super-resolution here. You are not combining the multiple shifted images in any way that I can see.
Yes, you're right. I was going to upsample first, then centroid, re-centre each one with this finer centroid, and then combine them.
If so, you don't really need FFTs. You could just use imresize.
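Matt's point as code (sizes and values illustrative): for a single image, imresize gives the same kind of interpolated upsampling as spectral zero-padding, without any FFTs.

```matlab
% Upsample one patch by a factor of 5 with plain interpolation.
I = rand(9,9);                           % placeholder bead ROI
upf = 5;
Isuper = imresize(I, upf, 'bicubic');    % 45x45 interpolated image
```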


More Answers (0)

Release: R2022b

Asked: 16 Oct 2023
Commented: 23 Oct 2023
