How to fit fisheye image distortion coefficients using deep learning?

Given a one-to-one correspondence between the coordinates of many distorted and undistorted pixel points in a fisheye image, how can I fit the 4 distortion coefficients of the fisheye parameters (MappingCoefficients) by deep learning? My program runs fine but does not converge, and I don't know what is wrong. If you spot the problem or have a better suggestion, thank you very much!
In the process I have also tried hard with the Optimization and Global Optimization Toolbox solvers lsqnonlin, fminsearch, and particleswarm, among other algorithms, but none of them finds a valid solution.
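For reference, here is a minimal sketch (not the poster's attached code) of how such a fit could be set up with lsqnonlin, assuming MATLAB's Scaramuzza-style fisheye model where MappingCoefficients are the polynomial coefficients [a0 a2 a3 a4]. The variable names xd, yd, and theta are placeholders for data the correspondences would provide:

```matlab
% Hedged sketch: fit Scaramuzza mapping coefficients [a0 a2 a3 a4]
% by nonlinear least squares instead of a network.
% Assumed inputs (placeholders):
%   xd, yd : distorted pixel coordinates, shifted to the distortion center
%   theta  : polar angle of each ray, derived from the matching
%            undistorted (pinhole) points
rho  = hypot(xd, yd);                      % radial distance in the image
fPol = @(a) a(1) + a(2)*rho.^2 + a(3)*rho.^3 + a(4)*rho.^4;
% In this model the back-projected ray is [xd; yd; f(rho)], so its
% polar angle from the optical axis is atan2(rho, f(rho)).
resid = @(a) atan2(rho, fPol(a)) - theta;
a0    = [max(rho); 0; 0; 0];               % crude but sensibly scaled guess
opts  = optimoptions('lsqnonlin', 'Display', 'iter');
aHat  = lsqnonlin(resid, a0, [], [], opts);
```

One common reason such fits fail to converge is conditioning: with rho in raw pixels, the rho^3 and rho^4 terms are many orders of magnitude larger than the constant term, so normalizing rho (e.g. by the image radius) before fitting often matters more than the choice of solver.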
The programs and images are all attached. The entry script is test3.m
MATLAB version: R2022a
  2 Comments
Bjorn Gustavsson on 28 Sep 2022
I am terribly sorry for having to phrase this so bluntly, but I don't know how to get the message across in a different way. Looking at fish-eye optics and images taken with such lenses from a "distortion-undistortion" perspective constrains the thinking in a way that is simultaneously oversimplified and overcomplicated. The wall in your image might not be flat and vertical; for all I know it might very well have been built by Gaudí and bend this way or that way. The supposedly flat checkerboard might be a funnily patterned beach ball. The point is that this task of camera calibration should explicitly be done to find the pair of functions mapping pixels (u, v) to spherical angles (azimuth and polar angles relative to the camera and its optical axis):
(az, pol) = f(u, v)
and its functional inverse:
(u, v) = f⁻¹(az, pol)
and the camera position and rotation parameters.
If one cuts the problem there, the camera-calibration task becomes far cleaner, and the "undistortion" operations also become far simpler in terms of projection-mapping operations on the images.
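As an illustration of that mapping pair (my notation, not from the comment): for an ideal equidistant fisheye the forward and inverse functions reduce to a linear radial law, rho = k*theta; a real lens replaces k*theta with a fitted polynomial. The scale k below is a hypothetical value:

```matlab
% Illustrative pixel <-> angle mapping pair for an ideal equidistant
% fisheye. (u, v) are pixel coordinates relative to the optical axis.
k = 200;                                             % hypothetical pixels/radian
pix2ang = @(u, v) deal(atan2(v, u), hypot(u, v)/k);  % -> azimuth, polar angle
ang2pix = @(az, pol) deal(k*pol.*cos(az), k*pol.*sin(az));
[az, pol] = pix2ang(100, 0);   % example point on the u-axis
[u2, v2]  = ang2pix(az, pol);  % round-trips back to (100, 0)
```

Once the calibration is expressed this way, "undistorting" an image is just resampling through ang2pix for whatever target projection (pinhole, equirectangular, ...) one wants.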

Answers (0)