cameraParameters

Object for storing camera parameters

Description

The cameraParameters object stores the intrinsic, extrinsic, and lens distortion parameters of a camera.

Creation

You can create a cameraParameters object using the cameraParameters function described here. You can also create a cameraParameters object by using the estimateCameraParameters function with an M-by-2-by-numImages array of input image points, where M is the number of keypoint coordinates in each pattern.
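A minimal sketch of that workflow, assuming a set of checkerboard calibration images (the image folder and square size below are illustrative):

% Detect checkerboard key points in a set of calibration images.
images = imageDatastore(fullfile(matlabroot,'toolbox','vision','visiondata','calibration','mono'));
[imagePoints,boardSize] = detectCheckerboardPoints(images.Files); % M-by-2-by-numImages
squareSize = 29; % checkerboard square size in millimeters (example value)
worldPoints = generateCheckerboardPoints(boardSize,squareSize);
cameraParams = estimateCameraParameters(imagePoints,worldPoints);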

Syntax

cameraParams = cameraParameters
cameraParams = cameraParameters(Name,Value)
cameraParams = cameraParameters(paramStruct)

Description

cameraParams = cameraParameters creates a cameraParameters object that contains the intrinsic, extrinsic, and lens distortion parameters of a camera.


cameraParams = cameraParameters(Name,Value) sets properties of the cameraParameters object by using one or more Name,Value pair arguments. Unspecified properties use default values.

cameraParams = cameraParameters(paramStruct) creates an identical cameraParameters object from the parameters stored in paramStruct.

Input Arguments


Camera parameters, specified as a camera parameters struct. To get a paramStruct from an existing cameraParameters object, use the toStruct function.
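For example, a minimal round trip between an object and a struct (assuming an existing cameraParameters object named origParams):

paramStruct = toStruct(origParams);           % serialize an existing object
cameraParams = cameraParameters(paramStruct); % recreate an identical object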

Properties


Intrinsic camera parameters:

Projection matrix, specified as a 3-by-3 matrix. The default is the identity matrix. The object uses the following format for the matrix:

[fx 0 0; s fy 0; cx cy 1]

The coordinates [cx cy] represent the optical center (the principal point), in pixels. When the x and y axes are exactly perpendicular, the skew parameter, s, equals 0.

fx = F*sx
fy = F*sy
F is the focal length in world units, typically expressed in millimeters.
[sx, sy] are the number of pixels per world unit in the x and y directions, respectively.
fx and fy are expressed in pixels.
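For illustration, a sketch that builds the matrix from invented values (not from a real calibration):

F  = 4;             % focal length in world units (mm); illustrative value
sx = 250; sy = 250; % pixels per world unit in x and y; illustrative values
cx = 320; cy = 240; % optical center in pixels
s  = 0;             % zero skew: x and y axes exactly perpendicular
fx = F*sx; fy = F*sy;
IntrinsicMatrix = [fx 0 0; s fy 0; cx cy 1];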

Optical center, specified as a 2-element vector [cx,cy] in pixels. The vector contains the coordinates of the optical center of the camera.

Focal length in x and y, specified as a 2-element vector [fx, fy].

fx = F * sx
fy = F * sy
F is the focal length in world units, typically in millimeters, and [sx, sy] are the number of pixels per world unit in the x and y directions, respectively. Thus, fx and fy are in pixels.

The focal length F influences the angle of view and thus affects the area of the scene that appears focused in an image. For a fixed subject distance:

  • A short focal length offers a wide angle of view, letting the camera capture a large area of the scene in focus. It emphasizes both the subject and the scene background.

  • A long focal length offers a narrow angle of view, reducing the area of the scene in focus. It emphasizes the subject and restricts the amount of background captured.

Camera axes skew, specified as a scalar. If the x and the y axes are exactly perpendicular, then set the skew to 0.

Camera lens distortion:

Radial distortion coefficients, specified as either a 2- or 3-element vector. When you specify a 2-element vector, the object sets the third element to 0. Radial distortion occurs when light rays bend more near the edges of a lens than they do at its optical center. The smaller the lens, the greater the distortion. The camera parameters object calculates the radially distorted location of a point. You can denote the distorted points as (xdistorted, ydistorted), as follows:

xdistorted = x(1 + k1*r^2 + k2*r^4 + k3*r^6)

ydistorted = y(1 + k1*r^2 + k2*r^4 + k3*r^6)

x, y = undistorted pixel locations
k1, k2, and k3 = radial distortion coefficients of the lens
r^2 = x^2 + y^2
Typically, two coefficients are sufficient. For severe distortion, you can include k3. The undistorted pixel locations are in normalized image coordinates, with the origin at the optical center, so x and y are dimensionless.
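As a numeric sketch, the radial model applied to one normalized point (all values invented for illustration):

k1 = -0.33; k2 = 0.09; k3 = 0; % example radial distortion coefficients
x = 0.2; y = -0.1;             % undistorted normalized coordinates
r2 = x^2 + y^2;                % r^2; note r^4 = (r^2)^2 and r^6 = (r^2)^3
scale = 1 + k1*r2 + k2*r2^2 + k3*r2^3;
xDistorted = x*scale;
yDistorted = y*scale;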

Tangential distortion coefficients, specified as a 2-element vector. Tangential distortion occurs when the lens and the image plane are not parallel. The tangential distortion coefficients model this type of distortion. The camera parameters object calculates the tangentially distorted location of a point. The distorted points are denoted as (xdistorted, ydistorted); a numeric sketch follows the definitions below:

xdistorted = x + [2 * p1 * x * y + p2 * (r^2 + 2 * x^2)]

ydistorted = y + [p1 * (r^2 + 2 * y^2) + 2 * p2 * x * y]

  • x, y — Undistorted pixel locations. x and y are in normalized image coordinates. Normalized image coordinates are calculated from pixel coordinates by translating to the optical center and dividing by the focal length in pixels. Thus, x and y are dimensionless.

  • p1 and p2 — Tangential distortion coefficients of the lens.

  • r^2 = x^2 + y^2
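As a numeric sketch, the tangential model applied to one normalized point (all values invented for illustration):

p1 = 0.001; p2 = -0.0005; % example tangential distortion coefficients
x = 0.2; y = -0.1;        % undistorted normalized coordinates
r2 = x^2 + y^2;
xDistorted = x + (2*p1*x*y + p2*(r2 + 2*x^2));
yDistorted = y + (p1*(r2 + 2*y^2) + 2*p2*x*y);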

Extrinsic camera parameters:

3-D rotation matrices, specified as a 3-by-3-by-P array, where P is the number of pattern images. Each 3-by-3 matrix represents the same 3-D rotation as the corresponding rotation vector.

The following equation provides the transformation that relates a world coordinate in the checkerboard’s frame [X Y Z] and the corresponding image point [x y]:

s * [x y 1] = [X Y Z 1] * [R; t] * K

R is the 3-D rotation matrix.
t is the translation vector.
K is the IntrinsicMatrix.
s is a scalar.
This equation does not take distortion into consideration. Distortion is removed by the undistortImage function.
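A sketch of this projection for one pattern, assuming a cameraParameters object with populated extrinsics (the pattern index and world point are illustrative):

i = 1;                                    % pattern index
R = cameraParams.RotationMatrices(:,:,i); % 3-by-3 rotation
t = cameraParams.TranslationVectors(i,:); % 1-by-3 translation
K = cameraParams.IntrinsicMatrix;         % 3-by-3 intrinsics
p = [10 20 0 1] * [R; t] * K;             % [X Y Z 1]*[R;t]*K, distortion ignored
xy = p(1:2) / p(3);                       % divide by the scalar s to get [x y]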

3-D rotation vectors, specified as an M-by-3 matrix containing M rotation vectors. Each vector describes the 3-D rotation of the camera's image plane relative to the corresponding calibration pattern. The vector specifies the 3-D axis about which the camera is rotated, and its magnitude is the rotation angle in radians. The corresponding 3-D rotation matrices are given by the RotationMatrices property.

Camera translations, specified as an M-by-3 matrix. This matrix contains the translation vectors for the M images that contain the calibration pattern used to estimate the calibration parameters. Each row of the matrix contains a vector that describes the translation of the camera relative to the corresponding pattern, expressed in world units.

The following equation provides the transformation that relates a world coordinate in the checkerboard’s frame [X Y Z] and the corresponding image point [x y]:

s * [x y 1] = [X Y Z 1] * [R; t] * K

R is the 3-D rotation matrix.
t is the translation vector.
K is the IntrinsicMatrix.
s is a scalar.
This equation does not take distortion into consideration. Distortion is removed by the undistortImage function.

You must set the RotationVectors and TranslationVectors properties together in the constructor so that the number of rotation vectors equals the number of translation vectors. Setting one property but not the other results in an error.
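For example, a sketch that sets both properties together (the vector values are invented for illustration):

rvecs = [0 0 0; 0.01 -0.02 0.03]; % M-by-3 rotation vectors, in radians
tvecs = [0 0 500; 10 -5 480];     % M-by-3 translation vectors, in world units
cameraParams = cameraParameters('RotationVectors',rvecs, ...
    'TranslationVectors',tvecs);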

Estimated camera parameter accuracy:

Average Euclidean distance between reprojected and detected points, specified as a numeric value in pixels.

Estimated camera parameter accuracy, specified as an M-by-2-by-P array of [x y] coordinates. The [x y] coordinates represent the translation in x and y between the reprojected pattern key points and the detected pattern key points. These values indicate the accuracy of the estimated camera parameters. P is the number of pattern images used to estimate the camera parameters, and M is the number of keypoints in each image.
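The average error reported above can be recovered from this array; a sketch, assuming ReprojectionErrors is populated:

e = cameraParams.ReprojectionErrors; % M-by-2-by-P error vectors
d = hypot(e(:,1,:), e(:,2,:));       % Euclidean distance for each point
meanError = mean(d(:));              % average over all points and images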

World points reprojected onto calibration images, specified as an M-by-2-by-P array of [x y] coordinates. P is the number of pattern images and M is the number of keypoints in each image.

Settings for camera parameter estimation:

Number of calibration patterns used to estimate camera extrinsics, specified as an integer. The number of calibration patterns must equal the number of translation and rotation vectors.

World coordinates of key points on calibration pattern, specified as an M-by-2 array. M represents the number of key points in the pattern.

World points units, specified as a character vector. The character vector describes the units of measure.

Estimate skew flag, specified as a logical scalar. When set to true, the object estimates the image axes skew. When set to false, the image axes are exactly perpendicular.

Number of radial distortion coefficients, specified as 2 or 3.

Estimate tangential distortion flag, specified as the logical scalar true or false. When set to true, the object estimates the tangential distortion. When set to false, the tangential distortion is negligible.
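These settings can be supplied as name-value pairs when you construct the object; a sketch with illustrative choices:

cameraParams = cameraParameters('EstimateSkew',false, ...
    'NumRadialDistortionCoefficients',3, ...
    'EstimateTangentialDistortion',true);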

Object Functions

pointsToWorld    Determine world coordinates of image points
toStruct         Convert a camera parameters object into a struct
worldToImage     Project world points into image
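For example, a sketch that maps points between image and world coordinates for one calibration pattern (assumes a calibrated cameraParams object; the points are illustrative):

R = cameraParams.RotationMatrices(:,:,1);
t = cameraParams.TranslationVectors(1,:);
imagePts = worldToImage(cameraParams,R,t,[0 0 0; 25 0 0]); % world to image
worldPts = pointsToWorld(cameraParams,R,t,imagePts);       % image to world (Z = 0 plane)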

Examples


Use the camera calibration functions to remove distortion from an image. This example creates a cameraParameters object manually, but in practice, you would use the estimateCameraParameters function or the Camera Calibrator app to derive the object.

Create a cameraParameters object manually.

% Intrinsic matrix in the [fx 0 0; s fy 0; cx cy 1] format
IntrinsicMatrix = [715.2699 0 0; 0 711.5281 0; 565.6995 355.3466 1];
% Two radial distortion coefficients; the third defaults to 0
radialDistortion = [-0.3361 0.0921];
cameraParams = cameraParameters('IntrinsicMatrix',IntrinsicMatrix,'RadialDistortion',radialDistortion);

Remove distortion from the images.

I = imread(fullfile(matlabroot,'toolbox','vision','visiondata','calibration','mono','image01.jpg'));
J = undistortImage(I,cameraParams);

Display the original and the undistorted images.

figure; imshowpair(imresize(I, 0.5),imresize(J, 0.5),'montage');
title('Original Image (left) vs. Corrected Image (right)');

References

[1] Zhang, Z. “A Flexible New Technique for Camera Calibration.” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, 2000, pp. 1330–1334.

[2] Heikkila, J., and O. Silven. “A Four-step Camera Calibration Procedure with Implicit Image Correction.” IEEE International Conference on Computer Vision and Pattern Recognition, 1997.

Introduced in R2014a