Extract Lane Information from Recorded Camera Data for Scene Generation

This example shows how to extract the lane information required for generating high-definition scenes from raw camera data.

Lane boundaries are crucial for interpreting the position and motion of moving vehicles. They are also useful for localizing vehicles on a map.

In this example, you:

  • Detect lane boundaries from recorded forward-facing monocular camera images using a pretrained deep learning model.

  • Transform detected lane boundaries from image coordinates to real-world vehicle coordinates using monocular camera sensor parameters.

  • Smooth noisy lane boundaries by using a Singer acceleration extended Kalman filter to reduce lane detection errors.

You can use the filtered lane information to generate a high-definition road scene. For more information about creating a scene from lane information, see the Generate High Definition Scene from Lane Detections example.

Load Camera Sensor Data

Download a ZIP file containing the camera sensor data with camera parameters, and then unzip the file. This data set was collected using a forward-facing camera mounted on an ego vehicle.

dataFolder = tempdir;
dataFilename = "PolysyncSensorData.zip";
url = "https://ssd.mathworks.com/supportfiles/driving/data/"+dataFilename;
filePath = fullfile(dataFolder, dataFilename);
if ~isfile(filePath)
    websave(filePath,url);
end
unzip(filePath, dataFolder);
dataset = fullfile(dataFolder,"PolysyncSensorData");
data = load(fullfile(dataset,"sensorData.mat"));
monocamData = data.CameraData;

monocamData is a table with two columns:

  • timeStamp — Time, in microseconds, at which the image data was captured. To convert these raw values to readable dates, see the sketch after this list.

  • fileName — Filenames of the images in the data set.
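For a quick readability check, you can convert the raw timestamps to calendar dates. This is a minimal sketch that assumes the timestamps are POSIX times expressed in microseconds, which matches their magnitude; confirm this convention against your sensor documentation.

% Convert microsecond timestamps to datetime values (assumption: the
% sensor reports POSIX time in microseconds).
t = datetime(double(monocamData.timeStamp)/1e6,ConvertFrom="posixtime");
t(1:3)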

The images are located in the Camera folder in the dataset directory. Create a table that contains the file paths of these images with their relative timestamps by using the helperUpdateTable function.

imageFolder = "Camera";
monocamData = helperUpdateTable(monocamData,dataset,imageFolder);

Display the first five entries of monocamData.

monocamData(1:5,:)
ans=5×2 table
    timeStamp                                       filePath                                  
    __________    ____________________________________________________________________________

    1.4616e+15    {["C:\Users\adeshmuk\AppData\Local\Temp\PolysyncSensorData\Camera\001.png"]}
    1.4616e+15    {["C:\Users\adeshmuk\AppData\Local\Temp\PolysyncSensorData\Camera\002.png"]}
    1.4616e+15    {["C:\Users\adeshmuk\AppData\Local\Temp\PolysyncSensorData\Camera\003.png"]}
    1.4616e+15    {["C:\Users\adeshmuk\AppData\Local\Temp\PolysyncSensorData\Camera\004.png"]}
    1.4616e+15    {["C:\Users\adeshmuk\AppData\Local\Temp\PolysyncSensorData\Camera\005.png"]}

Display the first image from monocamData.

img = imread(monocamData.filePath{1});
imshow(img)

Detect Lane Boundaries

In this example, you use a pretrained deep neural network model to detect lane boundaries. Download the pretrained model, available as a ZIP file, and then unzip it. The downloaded model is 90 MB. This model requires the Deep Learning Toolbox™ Converter for ONNX™ Model Format support package, which you can install from the Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.

modelFilename = "CameraLaneDetectorRESA.zip";
modelUrl = "https://ssd.mathworks.com/supportfiles/driving/data/"+modelFilename;
filePath = fullfile(dataFolder,modelFilename);
if ~isfile(filePath)
    websave(filePath,modelUrl);
end
unzip(filePath,dataFolder);
modelFolder = fullfile(dataFolder,"CameraLaneDetectorRESA");
model = load(fullfile(modelFolder,"cameraLaneDetectorRESA.mat"));
laneBoundaryDetector = model.net;

Specify these parameters for the lane boundary detection model.

  • Threshold — Confidence score threshold for boundary detection. The model ignores detections with a confidence score less than the threshold value. If you observe false detections, try increasing this value.

  • CropHeight — Crop height of the detection image. The model crops each image at the specified row, discarding everything above that line and processing only the portion of the image from that row to the bottom. Set this value to exclude the parts of the image above the road.

  • ExecutionEnvironment — Execution environment, specified as "cpu", "gpu", or "auto".

params.threshold = 0.5;
params.cropHeight = 190;
params.executionEnvironment = "auto";
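To visualize what the crop removes, you can preview only the image rows below the crop line. This sketch assumes the crop height is a row index measured from the top of the image; the detection helper applies the actual crop internally.

% Preview the image region below the crop line (illustration only;
% assumes cropHeight is a row index from the top of the image).
Ipreview = imread(monocamData.filePath{1});
imshow(Ipreview(params.cropHeight:end,:,:))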

The helperDetectLaneBoundaries function detects lane boundaries for each image in monocamData. Note that, depending on your hardware configuration, this function can take a significant amount of time to run.

laneBoundaries = helperDetectLaneBoundaries(laneBoundaryDetector,monocamData,params);

laneBoundaries is an M-by-6 cell array containing lane boundary points in image coordinates. M represents the number of images. The six columns represent the lanes detected in the image, from left to right. An empty cell indicates that no lane was detected for that timestamp.

Note: The deep learning model detects only the lane boundaries. It does not classify them into types, such as solid or dashed.
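For example, you can count how many lane boundaries the model detected in each frame directly from the cell array:

% Count the nonempty cells in each row to get the number of detected
% lane boundaries per frame.
numDetections = sum(~cellfun(@isempty,laneBoundaries),2);
% Display the counts for the first five frames.
numDetections(1:5)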

Read an image from the camera data.

imgIdx = 10;
I = imread(monocamData.filePath{imgIdx});

Overlay the lane boundaries on the image, and display the overlaid image using the helperViewLaneOnImage function.

helperViewLaneOnImage(laneBoundaries(imgIdx,:),I)

Transform Lane Boundary Coordinates to Vehicle Coordinates

To generate a real-world scene, you must transform the detected lane boundaries from image coordinates to vehicle coordinates using the camera parameters. If you do not know the camera parameters, you can estimate them. For more information about estimating camera parameters, see Calibrate a Monocular Camera. You can also use the estimateMonoCameraFromScene function to estimate approximate camera parameters directly from a camera image.

Specify the camera intrinsic parameters of focal length (fx, fy), principal point (cx, cy), and image size.

intrinsics = data.Intrinsics
intrinsics = struct with fields:
           fx: 800
           fy: 800
           cx: 320
           cy: 240
    imageSize: [480 640]

Create a cameraIntrinsics object.

focalLength = [intrinsics.fx intrinsics.fy];
principalPoint = [intrinsics.cx intrinsics.cy];
imageSize = intrinsics.imageSize;
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);

Create a monoCamera object using the camera intrinsic parameters, height, and location. Display the object properties.

camHeight = data.cameraHeight;
camLocation = data.cameraLocation;
sensorParams = monoCamera(intrinsics,camHeight,"SensorLocation",camLocation)
sensorParams = 
  monoCamera with properties:

        Intrinsics: [1×1 cameraIntrinsics]
        WorldUnits: 'meters'
            Height: 1.1000
             Pitch: 0
               Yaw: 0
              Roll: 0
    SensorLocation: [2.1000 0]
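As a quick sanity check of the sensor setup, transform a single image point to vehicle coordinates by using the imageToVehicle function. The pixel location in this sketch is arbitrary; a point near the bottom center of the image maps to a road location a short distance ahead of the vehicle.

% Transform one image point ([x y] in pixels) to vehicle coordinates
% ([x y] in meters). The chosen pixel is arbitrary.
imagePoint = [320 470];
vehiclePoint = imageToVehicle(sensorParams,imagePoint)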

The helperDetectionsToVehicle function transforms the lane boundary points from image coordinates to vehicle coordinates. The function also fits the parabolic lane boundary model to the transformed boundary points using the fitPolynomialRANSAC function.

transformedDetections = helperDetectionsToVehicle(laneBoundaries,sensorParams);
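Each nonempty cell of transformedDetections contains a parabolicLaneBoundary object. To inspect one fitted boundary, evaluate its parabola at several longitudinal distances by using the computeBoundaryModel function. In this sketch, the frame and boundary indices are arbitrary.

% Evaluate the leftmost boundary of frame 10, if it was detected, at
% longitudinal distances from 3 to 28 meters.
boundary = transformedDetections{imgIdx,1};
if ~isempty(boundary)
    xWorld = 3:5:28;
    yWorld = computeBoundaryModel(boundary,xWorld)
end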

Visualize the transformed points in a bird's-eye view.

currentFigure = figure(Name="Lane Detections");
hPlot = axes(uipanel(currentFigure));
bep = birdsEyePlot(XLim=[0 30],YLim=[-20 20],Parent=hPlot);
helperPlotLanes(bep,transformedDetections)

Postprocess Lane Boundaries Using Tracker

If your lane boundaries contain detection noise, you can smooth them by using a multi-object global nearest neighbor (GNN) tracker to create a more accurate scene.

Define a multi-lane GNN tracker by using the trackerGNN (Sensor Fusion and Tracking Toolbox) object. This example specifies these properties:

  • ConfirmationThreshold — Specified as [12 15], to confirm a track if the lane is detected in at least 12 of the past 15 updates.

  • DeletionThreshold — Specified as [5 5], to delete a track if it is missing from the five most recent consecutive updates.

  • AssignmentThreshold — Specified as 20, the maximum normalized distance at which the tracker assigns a detection to a track.

This example uses the Singer (Sensor Fusion and Tracking Toolbox) acceleration model to handle the lane boundary changes over time. The helperInitSingerLane function uses the initsingerekf (Sensor Fusion and Tracking Toolbox) function with modifications to the filter parameters. In this helper function, the decay constant tau is set to 1 because the lane change maneuver time is relatively short.

Note: The three dimensions defined for the Singer acceleration state are the three coefficients of the parabolic model fitted to the lane boundary.
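In other words, the nine-element filter state stacks a [value; rate; acceleration] block for each parabola coefficient, so the coefficient values sit at state indices 1, 4, and 7. This sketch illustrates that layout with example values; it mirrors the extraction that the helperExtractLaneBoundaries function performs later in this example.

% Illustrate the Singer state layout: three [value; rate; acceleration]
% blocks, one per parabola coefficient. The values here are examples.
state = zeros(9,1);
state([1 4 7]) = [0.001 0.01 -1.8];   % example parabola coefficients
laneFromState = parabolicLaneBoundary(state([1 4 7])')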

lbTracker = trackerGNN(FilterInitializationFcn=@helperInitSingerLane, ...
        ConfirmationThreshold=[12 15],...
        DeletionThreshold=[5 5],...
        AssignmentThreshold=20);

% Define a variable to store the smoothed lane boundaries.
smoothLaneBoundaries = {};

Specify the detection timestamps for the tracker. Note that the tracker requires timestamps in seconds, so you must convert the timestamps from the sensor from microseconds to seconds.

% Load the timestamps generated by the sensor.
timeStamps = double(monocamData.timeStamp);
% The timestamps of the camera sensor are in microseconds. Convert to
% seconds and offset from the first timestamp.
tsecs = timeStamps*(10^-6);
tsecs = tsecs - tsecs(1);

Specify the measurement noise as a three-element row vector. The measurement noise describes the uncertainty in the measurements. You can change this value to adjust the performance of the model on your data set.

measurementNoise = [1e-6 1e-4 0.1];
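The helper functions consume this vector as a diagonal covariance matrix over the three parabola coefficients:

% Form the diagonal measurement noise covariance used by the tracker.
R = diag(measurementNoise)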

Smooth the lane boundary detections using the lbTracker object. The helperPackAsObjectDetections function converts the lane boundary detections to the object detections that the lbTracker requires. After smoothing the detections, the helperExtractLaneBoundaries function converts the tracker output back to lane boundaries.

for i = 1:size(transformedDetections,1)
    % Convert the lane boundary detections to object detections.
    measurements = helperPackAsObjectDetections(transformedDetections(i,:),tsecs(i),measurementNoise);

    % Track the detections.
    tracks = lbTracker(measurements,tsecs(i));
    
    % Post-process the tracks to return them in a usable format.
    tracks = helperExtractLaneBoundaries(tracks);

    smoothLaneBoundaries(end+1,:) = tracks;
end

Visualize and compare the lane boundaries before and after smoothing.

currentFigure = figure(Name="Compare Lane Boundaries",Position=[0 0 1400 600]);
hPlotSmooth = axes(uipanel(currentFigure,Position=[0 0 0.5 1],Title="Smooth Boundaries"));
bepSmooth = birdsEyePlot(XLim=[0 30],YLim=[-20 20],Parent=hPlotSmooth);
hPlot = axes(uipanel(currentFigure,Position=[0.5 0 0.5 1],Title="Detected Boundaries"));
bep = birdsEyePlot(XLim=[0 30],YLim=[-20 20],Parent=hPlot);
helperCompareLanes(bepSmooth,smoothLaneBoundaries,bep,transformedDetections);

Append the timestamps for each detection, and convert the cell array to a table for use with the updateLaneSpec function. Using this function, you can map the smoothed lane boundaries to a standard-definition road network to create a high-definition road scene. For more information, see the Generate High Definition Scene from Lane Detections example.

smoothLaneBoundaries = [num2cell(timeStamps) smoothLaneBoundaries];
smoothLaneBoundaries = cell2table(smoothLaneBoundaries,VariableNames=["timeStamps" "Boundary 1" "Boundary 2" "Boundary 3" "Boundary 4" "Boundary 5" "Boundary 6"]);

Display the table. Note that the first few entries of the table are empty, because the tracker requires a few frames to initialize before it can produce consistent detections.

smoothLaneBoundaries
smoothLaneBoundaries=714×7 table
    timeStamps            Boundary 1                     Boundary 2                     Boundary 3              Boundary 4      Boundary 5      Boundary 6 
    __________    ___________________________    ___________________________    ___________________________    ____________    ____________    ____________

    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {0×0 double               }    {0×0 double               }    {0×0 double               }    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {0×0 double}    {0×0 double}    {0×0 double}
    1.4616e+15    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {1×1 parabolicLaneBoundary}    {0×0 double}    {0×0 double}    {0×0 double}
      ⋮
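To spot-check an individual smoothed boundary, read it back out of the table and evaluate it. This sketch uses an arbitrary later frame index, after the tracker has confirmed tracks, and skips the frame if the boundary is empty.

% Evaluate a smoothed boundary from a later frame (index chosen
% arbitrarily), after the tracker has initialized.
frameIdx = 20;
smoothBoundary = smoothLaneBoundaries.("Boundary 1"){frameIdx};
if ~isempty(smoothBoundary)
    xWorld = 3:30;
    yWorld = computeBoundaryModel(smoothBoundary,xWorld);
    plot(xWorld,yWorld)
    xlabel("X (m)")
    ylabel("Y (m)")
end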

You can use the smoothLaneBoundaries data in the Generate High Definition Scene from Lane Detections example to generate an ASAM OpenDRIVE® or RoadRunner scene from these detections.

Helper Functions

helperDetectLaneBoundaries — Detect lane boundaries in images.

function dets = helperDetectLaneBoundaries(model,images,params)

% Set the network input size to [368 640].
params.networkInputSize = [368 640];

% Initialize detections.
dets = [];
f = waitbar(0,"Detecting lanes...");
% Loop over the images and detect.
for i = 1:size(images,1)

    I = imread(images.filePath{i});
    waitbar(i/size(images,1),f,"Detecting lanes...")
    detections = helperDetectLaneOnSingleImage(model,I,params,params.executionEnvironment,FitPolyLine=false);

    % Append the detections.
    dets = [dets; detections];
end
close(f)

end

helperDetectionsToVehicle — Transform points from image to vehicle coordinates using the imageToVehicle function and fit a parabolic model to them using the fitPolynomialRANSAC function.

function converted = helperDetectionsToVehicle(laneMarkings,sensor)
% Convert detections from image frame to vehicle frame and fit polynomial
% on the detections.
numframes = size(laneMarkings,1);
numLanes = size(laneMarkings,2);
converted = cell(size(laneMarkings));

% Specify the boundary model as parabolicLaneBoundary.
model = @parabolicLaneBoundary;

% Specify the polynomial degree as 2 for parabolic lane boundaries.
degree = 2;

% Specify the extent over which to consider detections, from 3 to 30 meters.
extent = [3 30];

% Set the max distance of points to consider in the lane boundary model as
% 0.3 meters.
n = 0.3;

% For each frame.
for i = 1:numframes
    % For each detected lane boundary.
    for j = 1:numLanes
        if ~isempty(laneMarkings{i,j})
            imageCoords = laneMarkings{i,j};

            % Function imageToVehicle performs the transformation.
            vehicleCoords = imageToVehicle(sensor,imageCoords);

            % Fit a parabolic model to the points.
            p = fitPolynomialRANSAC(vehicleCoords,degree,n);

            % Create the parabolicLaneBoundary object.
            boundaries = model(p);
            boundaries.XExtent = extent;
            converted{i,j} = boundaries;
        end
    end
end
end

helperPackAsObjectDetections — Return lane boundary detections as a cell array of objectDetection (Sensor Fusion and Tracking Toolbox) objects.

function packedDetections = helperPackAsObjectDetections(detections,time,mnoise)
% helperPackAsObjectDetections create objectDetection object from lane detections.

mnoiseDiag = diag(mnoise);
numdets = size(detections,2);
packedDetections = cell(numdets,1);
for i = 1:numdets
    if ~isempty(detections{1,i}) && isa(detections{1,i},"parabolicLaneBoundary")
        packedDetections{i,1} = objectDetection(time,detections{1,i}.Parameters,MeasurementNoise=mnoiseDiag,SensorIndex=1);
    end
end
packedDetections = packedDetections(~cellfun("isempty",packedDetections));
end

helperExtractLaneBoundaries — Extract objectTracks and convert them to a parabolic lane boundary model.

function trackedLanes = helperExtractLaneBoundaries(tracks)
% helperExtractLaneBoundaries construct lane boundaries from objectTrack

trackedLanes = cell(1,6);
if ~isempty(tracks)
    collectedTracks = [];
    numtracks = size(tracks,1);
    for i = 1:numtracks
        % Create a parabolicLaneBoundary object from the states.
        laneBoundary = parabolicLaneBoundary(tracks(i,1).State([1,4,7])');

        % Set the XExtent from 3 to 30 meters.
        laneBoundary.XExtent = [3 30];
        collectedTracks = [collectedTracks laneBoundary];
    end
    
    % Arrange the lane boundaries from left to right based on their
    % offset values.
    params = [collectedTracks.Parameters];
    offsets = params(3:3:end);
    [~,ind] = sort(offsets,"descend");
    collectedTracks = collectedTracks(ind);

    % Place the lane boundaries in the cell array.
    for i = 1:numel(collectedTracks)
        trackedLanes{1,i} = collectedTracks(i);
    end
end
end

helperInitSingerLane — Singer motion model for the lane boundary filter.

function filter = helperInitSingerLane(detection)
% helperInitSingerLane Initialize singer model for lane detection. 

filter = initsingerekf(detection);
tau = 1; % Short decay constant to accommodate the lane change maneuver time.
filter.StateTransitionFcn = @(state,dt)singer(state,dt,tau);
filter.StateTransitionJacobianFcn = @(state,dt)singerjac(state,dt,tau);
filter.ProcessNoise = singerProcessNoise(zeros(9,1),1,tau,1);
end
