

yolov4ObjectDetectorMonoCamera

Detect objects in monocular camera using YOLO v4 deep learning detector

Since R2022a


Description

The yolov4ObjectDetectorMonoCamera object contains information about a you only look once version 4 (YOLO v4) object detector that is configured for use with a monocular camera sensor. To detect objects in an image captured by the camera, pass the detector to the detect object function.

When using the detect object function with a yolov4ObjectDetectorMonoCamera object, use of a CUDA®-enabled NVIDIA® GPU is highly recommended. The GPU reduces computation time significantly. Using a GPU requires Parallel Computing Toolbox™. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).
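Before running detection, you can check whether a supported GPU is available (a minimal sketch; the `canUseGPU` check requires Parallel Computing Toolbox):

```matlab
% Check whether a supported, CUDA-enabled GPU is available for computation.
% Requires Parallel Computing Toolbox.
if canUseGPU
    disp("Supported GPU found; detect uses the GPU by default.")
else
    disp("No supported GPU found; detect falls back to the CPU.")
end
```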


Creation

  1. Create a yolov4ObjectDetector object by calling the trainYOLOv4ObjectDetector function with training data (requires Deep Learning Toolbox™).

    detector = trainYOLOv4ObjectDetector(trainingData,____);
  2. Create a monoCamera object to model the monocular camera sensor.

    sensor = monoCamera(____);
  3. Create a yolov4ObjectDetectorMonoCamera object by passing the detector and sensor as inputs to the configureDetectorMonoCamera function. The configured detector inherits property values from the original detector.

    configuredDetector = configureDetectorMonoCamera(detector,sensor,____);


Properties

Camera — Camera configuration

This property is read-only.

Camera configuration, specified as a monoCamera object. The object contains the camera intrinsics; the camera location; the pitch, yaw, and roll placement; and the world units for these parameters. Use the intrinsics to transform object points in the image to world coordinates, which you can then compare to the values in the WorldObjectSize property.
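For example, you can use the stored camera configuration to map an image point at the base of a detection into vehicle coordinates (a minimal sketch; the detector variable and pixel coordinates are hypothetical):

```matlab
% Assumes configuredDetector is a yolov4ObjectDetectorMonoCamera object.
sensor = configuredDetector.Camera;    % monoCamera object used to configure the detector
imagePoint = [320 400];                % hypothetical [x y] pixel at the base of a detection
% Transform the image point to [x y] vehicle coordinates, in world units.
vehiclePoint = imageToVehicle(sensor,imagePoint);
```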

WorldObjectSize — Range of object widths and lengths

This property is read-only.

Range of object widths and lengths in world units, specified as a [minWidth maxWidth] vector or [minWidth maxWidth; minLength maxLength] matrix. Specifying the range of object lengths is optional.
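Both forms are set through the configureDetectorMonoCamera function; for instance (a sketch with hypothetical size limits):

```matlab
% Width-only constraint: detect objects 1.5 to 2.5 m wide.
widthRange = [1.5 2.5];
% Width and length constraint (the second row is optional).
sizeRange = [1.5 2.5; 3 6];  % [minWidth maxWidth; minLength maxLength], in world units
configuredDetector = configureDetectorMonoCamera(detector,sensor,sizeRange);
```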

ModelName — Name of classification model

Name of the classification model, specified as a character vector or string scalar. By default, the name is set to the ModelName property value of the yolov4ObjectDetector object specified at the input. You can modify this name after creating the yolov4ObjectDetectorMonoCamera object.
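For example (assuming a configured detector named configuredDetector; the new name is hypothetical):

```matlab
% Rename the configured detector after creation.
configuredDetector.ModelName = "vehicleDetectorMonoCam";
```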

Network — Trained YOLO v4 object detection network

This property is read-only.

Trained YOLO v4 object detection network, specified as a dlnetwork (Deep Learning Toolbox) object. This object stores the layers that are used within the YOLO v4 object detector.

ClassNames — Names of object classes

This property is read-only.

Names of the object classes that the YOLO v4 object detector was trained to find, specified as a cell array of character vectors. This property is set by the ClassNames property value of the yolov4ObjectDetector object specified at the input.
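You can use the class names to keep only detections of a particular class; for example (a sketch assuming the detector returns a class named "car", as the COCO-pretrained networks do, and that bboxes, scores, and labels were returned by detect):

```matlab
% labels is the categorical vector returned by the detect object function.
keep = labels == "car";      % compare categorical labels against a class name
bboxes = bboxes(keep,:);     % keep only bounding boxes for that class
scores = scores(keep);
```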

AnchorBoxes — Size of anchor boxes

This property is read-only.

Size of anchor boxes, specified as an N-by-1 cell array. N is the number of output layers in the YOLO v4 deep learning network. Each cell contains an M-by-2 matrix, where M is the number of anchor boxes in that layer. Each row in the M-by-2 matrix denotes the size of an anchor box in the form [height width]. This property is set by the AnchorBoxes property value of the yolov4ObjectDetector object specified at the input.

The anchor boxes are defined when creating the YOLO v4 network by using the yolov4ObjectDetector object.
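For example, a network with two output layers takes a 2-by-1 cell array of anchor boxes (a sketch with hypothetical anchor sizes and class name; the "tiny-yolov4-coco" pretrained network has two output layers):

```matlab
% Hypothetical [height width] anchor sizes, one cell per output layer.
anchorBoxes = {[122 177; 223 84; 80 94]; ...   % anchors for the first output layer
               [111 38; 33 47; 37 18]};        % anchors for the second output layer
classes = "vehicle";
detector = yolov4ObjectDetector("tiny-yolov4-coco",classes,anchorBoxes);
```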

Object Functions

detect — Detect objects using YOLO v4 object detector configured for monocular camera


Examples

Configure a YOLO v4 object detector for use with a monocular camera mounted on an ego vehicle. Use this detector to detect vehicles within a video captured by the camera.

Load a yolov4ObjectDetector object pretrained to detect vehicles.

detector = yolov4ObjectDetector("csp-darknet53-coco");

Model a monocular camera sensor by creating a monoCamera object. This object contains the camera intrinsics and the location of the camera on the ego vehicle.

focalLength = [309.4362 344.2161];    % [fx fy]
principalPoint = [318.9034 257.5352]; % [cx cy]
imageSize = [480 640];                % [mrows ncols]
height = 2.1798;                      % height of camera above ground, in meters
pitch = 14;                           % pitch of camera, in degrees
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);

sensor = monoCamera(intrinsics,height,Pitch=pitch);

Configure the detector for use with the camera. Limit the width of detected objects to 1.5-2.5 meters. The configured detector is a yolov4ObjectDetectorMonoCamera object.

vehicleWidth = [1.5 2.5];
detectorMonoCam = configureDetectorMonoCamera(detector,sensor,vehicleWidth);

Set up the video reader and read the input monocular video.

videoFile = '05_highway_lanechange_25s.mp4';
reader = VideoReader(videoFile);

Create a video player to display the detection results. Detect the vehicles in each frame by using the configured detector. Annotate the video frames with the bounding boxes for the detections and the detection confidence scores.

videoPlayer = vision.VideoPlayer();
while hasFrame(reader)
    frame = readFrame(reader);
    % Run the detector.
    [bboxes,scores,labels] = detect(detectorMonoCam,frame,Threshold=0.6);
    if ~isempty(bboxes)
        % Annotate each detection with its class label and confidence score.
        annotations = string(labels) + ": " + string(round(scores,2));
        frame = insertObjectAnnotation(frame,"rectangle",bboxes,annotations,Color="green");
    end
    % Display the annotated frame.
    step(videoPlayer,frame);
end

Close the video player.

release(videoPlayer);

Version History

Introduced in R2022a