Birds-Eye View

Transform front-facing camera image into top-down view

  • Birds-Eye View block

Libraries:
Vision HDL Toolbox / Geometric Transforms

Description

The Birds-Eye View block warps a front-facing camera image into a top-down view. It uses a hardware-efficient architecture that supports HDL code generation.

You must provide the homography matrix that describes the transform. This matrix can be calculated from physical camera properties or derived empirically by analyzing an image of a grid pattern taken by the camera. The block uses the matrix to compute the transformed coordinates of each pixel. The transform does not interpolate between pixel locations. Instead, it rounds each result to the nearest integer coordinate.
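
For example, this sketch shows the per-pixel mapping in MATLAB code. The homography values are placeholders for illustration, not a real calibration:

    h = [1 0 0; 0 2 -100; 0 0.005 1];  % placeholder homography, for illustration only
    x = 320;  y = 300;                 % an input pixel coordinate

    p  = h * [x; y; 1];                % project the input coordinate
    xo = round(p(1) / p(3));           % normalize by the third element, then round;
    yo = round(p(2) / p(3));           % the block rounds rather than interpolating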

The block operates on a trapezoidal region of the input image below the vanishing point.

(Figure: the input region selected for transformation, and the resulting top-down view.)

You can specify the number of lines in the transformed region and the size of the output frame. If the specified homography matrix cannot map from the requested number of lines to the requested output size, the block returns a warning.

Because the block replicates lines from the input region to create the larger output frame, it cannot complete the transform of one frame before the next frame arrives. The block ignores any new input frames while it is still transforming the previous frame. Therefore, depending on the stored lines and output size, the block can drop input frames. This timing also enables the block to maintain the blanking intervals of the input pixel stream.

Ports

This block uses a streaming pixel interface with a pixelcontrol bus for frame control signals. This interface enables the block to operate independently of image size and format. All Vision HDL Toolbox™ blocks use the same streaming interface. The block accepts and returns a scalar pixel value and a bus that contains five control signals. The control signals indicate the validity of each pixel and its location in the frame. To convert a frame (pixel matrix) into a serial pixel stream and control signals, use the Frame To Pixels block. For a full description of the interface, see Streaming Pixel Interface.
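
For MATLAB simulation, the visionhdl.FrameToPixels System object performs the equivalent conversion. A short sketch, with an arbitrary video format and test image:

    frm2pix = visionhdl.FrameToPixels( ...
        'NumComponents', 1, ...                           % grayscale
        'VideoFormat', '480p');                           % 640-by-480 active pixels
    frame = imresize(imread('cameraman.tif'), [480 640]); % any grayscale test frame
    [pixel, ctrl] = frm2pix(frame);                       % pixel vector and control-signal structure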

Input

pixel
Single image pixel in a pixel stream, specified as a scalar that represents grayscale intensity.

The software supports double and single data types for simulation, but not for HDL code generation.

Data Types: uint | int | fixed point | Boolean | double | single

ctrl
The pixelcontrol bus contains five signals: hStart, hEnd, vStart, vEnd, and valid. The signals describe the validity of the pixel and its location in the frame. For more information, see Pixel Control Bus.

Data Types: bus

Output

pixel
Single image pixel in the pixel stream, returned as a scalar that represents grayscale intensity. The output pixel data type is the same as the input pixel data type.

The software supports double and single data types for simulation, but not for HDL code generation.

Data Types: uint | int | fixed point | Boolean | double | single

ctrl
The pixelcontrol bus contains five signals: hStart, hEnd, vStart, vEnd, and valid. The signals describe the validity of the pixel and its location in the frame. For more information, see Pixel Control Bus.

Data Types: bus

Parameters

Homography matrix
Transfer function derived from camera parameters, specified as a 3-by-3 matrix.

The homography matrix, h, is derived from four parameters of the physical camera setup: the focal length, pitch, height, and principal point (from a pinhole camera model). The default value is the matrix for the camera setup used in the Lane Detection example.
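
As a hedged sketch, one common way to assemble such a matrix from these parameters uses the standard pinhole model for a ground plane at Z = 0. The numeric values and variable names here are illustrative assumptions, not the block defaults:

    f      = 800;             % focal length in pixels (assumed)
    cx     = 320;  cy = 240;  % principal point (assumed)
    pitch  = deg2rad(14);     % camera pitch toward the road (assumed)
    height = 1.4;             % camera height above the ground, in meters (assumed)

    K  = [f 0 cx; 0 f cy; 0 0 1];                                    % intrinsic matrix
    Rx = [1 0 0; 0 cos(pitch) -sin(pitch); 0 sin(pitch) cos(pitch)]; % pitch rotation
    t  = [0; height; 0];                                             % camera offset

    Hgi = K * [Rx(:,1) Rx(:,2) t];  % ground plane -> image homography
    h   = inv(Hgi);                 % image -> bird's-eye mapping
    h   = h / h(3,3);               % conventional normalization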

This matrix can be calculated from physical camera properties, or empirically derived by analyzing an image of a grid test pattern taken by the camera. See estimateGeometricTransform (Computer Vision Toolbox) or Using the Single Camera Calibrator App (Computer Vision Toolbox).
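
As a sketch of the empirical approach, you can mark four grid corners in a camera image and their desired top-down positions, then fit a projective transform. This example uses fitgeotrans (Image Processing Toolbox), which suits hand-picked control points; all coordinates are hypothetical:

    imagePts    = [220 300; 420 300; 120 440; 520 440];  % grid corners in the camera image (hypothetical)
    birdsEyePts = [160 100; 480 100; 160 600; 480 600];  % the same corners in the top-down view (hypothetical)

    tform = fitgeotrans(imagePts, birdsEyePts, 'projective');

    % MATLAB stores projective transforms in the row-vector convention
    % ([x y 1] * T), so transpose to get the column-vector h used on this page.
    h = tform.T.'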

Maximum buffer size, in pixels
Number of input pixels to buffer, specified as an integer. Compute this value as Number of input lines to buffer × ActivePixelsPerLine. The block uses a memory of this size to store the input pixels. If you specify a value that is not a power of two, the block rounds it up to the next power of two.
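
For example, with 50 buffered lines of a 640-pixel-wide frame (assumed values), the requested depth rounds up to the next power of two:

    linesToBuffer       = 50;    % Number of input lines to buffer (assumed)
    activePixelsPerLine = 640;   % active pixels per input line (assumed)

    requested  = linesToBuffer * activePixelsPerLine;  % 32000 pixels
    bufferSize = 2^nextpow2(requested)                 % 32768 memory locations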

Number of input lines to buffer
Number of lines to transform, specified as an integer. The block stores this many lines and transforms them into the output bird's-eye view image, starting at the vanishing point as determined by the Homography matrix.

Storing the full input frame uses too much memory to implement the algorithm without off-chip storage. Therefore, for a hardware implementation, choose a smaller region to store and transform, one that generates an acceptable output frame size.

For example, using the default Homography matrix with an input image of 640-by-480 pixels, the full-sized transform results in a 900-by-640 output image. Analysis of the input-to-output y-coordinate mapping shows that around 50 lines of the input image are required to generate the top 700 lines of the bird's-eye view output image. This number of input lines can be stored in on-chip memory. The vanishing point for the default camera setup is around line 200, and lines above that point do not contribute to the resulting bird's-eye view. Therefore, the block needs to store only input lines 200–250 for the transformation.
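
A hedged sketch of this kind of analysis: forward-map candidate input rows through the homography at the image center column, and count how many land within the first 700 output rows. The homography values here are placeholders; use your calibrated matrix:

    h  = [1 0 0; 0 2 -100; 0 0.005 1];  % placeholder homography, not the block default
    xc = 320;                           % center column of a 640-pixel-wide frame
    rows = 200:260;                     % candidate input rows below the vanishing point

    yOut = zeros(size(rows));
    for k = 1:numel(rows)
        p = h * [xc; rows(k); 1];
        yOut(k) = round(p(2) / p(3));   % output row that this input line maps to
    end
    nLinesNeeded = nnz(yOut >= 1 & yOut <= 700)  % lines feeding the top 700 output rows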

Horizontal size of output frame
The number of active pixels in each output line, specified as an integer.

Vertical size of output frame
The number of active lines in each output frame, specified as an integer.

Algorithms

The transform from the input pixel coordinate, (x, y), to the bird's-eye pixel coordinate, (x̂, ŷ), is derived from the homography matrix, h. The homography matrix is based on physical camera parameters and is therefore constant for a particular camera installation.

$$(\hat{x},\ \hat{y}) = \operatorname{round}\left( \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}},\ \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}} \right)$$

The implementation of the bird's-eye transform in hardware does not directly perform this calculation. Instead, the block precomputes lookup tables for the horizontal and vertical aspects of the transform.

(Figure: architecture of the bird's-eye algorithm. The pixel stream feeds a line memory; each stored line then passes through a horizontal stretch operation and a vertical mapping operation.)

First, the block stores the input lines, starting from the precomputed vanishing point. The stored pixels form a trapezoid, with short lines near the vanishing point and wider lines near the camera. This storage uses the number of memory locations specified by the Maximum buffer size, in pixels parameter.

The horizontal lookup table contains interpolation parameters that describe the stretch of each line of the trapezoidal input region to the requested width of the output frame. Lines that fall closer to the vanishing point are stretched more than lines nearer to the camera.

The vertical lookup table contains the y-coordinate mapping and the number of times each line repeats to fill the requested height of the output frame. Near the vanishing point, one input line maps to many output lines, while each line nearer to the camera maps to fewer output lines.

Together, the lookup tables use 3 × Number of input lines to buffer memory locations.
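
A hedged sketch of how such tables could be precomputed from the mapping equation. The layout, an x-offset, a horizontal scale, and a repeat count per line (three values, matching the 3 × N figure), is an illustrative assumption, not the block's documented internals:

    h = [1 0 0; 0 2 -100; 0 0.005 1];  % placeholder homography
    firstLine = 200;  nLines = 50;     % buffered input lines (assumed)
    inWidth = 640;  outWidth = 640;    % active widths (assumed)

    yOut = zeros(1, nLines + 1);
    for k = 1:nLines + 1                           % output row for each input line
        p = h * [inWidth/2; firstLine + k - 1; 1];
        yOut(k) = round(p(2) / p(3));
    end

    xOffset = zeros(1, nLines);  scale = zeros(1, nLines);  repeats = zeros(1, nLines);
    for k = 1:nLines
        y  = firstLine + k - 1;
        pL = h * [1;       y; 1];  xL = pL(1) / pL(3);  % mapped left edge
        pR = h * [inWidth; y; 1];  xR = pR(1) / pR(3);  % mapped right edge
        xOffset(k) = xL;                          % where the stretched line starts
        scale(k)   = outWidth / (xR - xL);        % horizontal stretch factor
        repeats(k) = max(1, yOut(k+1) - yOut(k)); % output lines filled by this line
    end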

Version History

Introduced in R2017b
