Extract Lane Information from Recorded Camera Data for Scene Generation
This example shows how to extract the lane information required for generating high-definition scenes from raw camera data.
Lane boundaries are crucial for interpreting the position and motion of moving vehicles. They are also useful for localizing vehicles on a map.
In this example, you:
- Detect lane boundaries in real-world vehicle coordinates from recorded forward-facing monocular camera images by using the laneBoundaryDetector object.
- Track noisy lane boundaries by using the laneBoundaryTracker object.
You can also use the Ground Truth Labeler app to resolve issues in the detected lane boundaries. You can then use the accurate lane boundary detections to generate a high-definition road scene. For more information, see the Generate RoadRunner Scene Using Labeled Camera Images and Raw Lidar Data example.
Load Camera Sensor Data
This example requires the Scenario Builder for Automated Driving Toolbox™ support package. Check if the support package is installed and, if it is not installed, install it by using the Add-On Explorer. For more information, see Get and Manage Add-Ons.
checkIfScenarioBuilderIsInstalled
Download a ZIP file containing the camera sensor data with camera parameters, and then unzip the file. This data set has been collected using a forward-facing camera mounted on an ego vehicle.
dataFolder = tempdir;
dataFilename = "PolysyncSensorData_23a.zip";
url = "https://ssd.mathworks.com/supportfiles/driving/data/"+dataFilename;
filePath = fullfile(dataFolder,dataFilename);
if ~isfile(filePath)
    websave(filePath,url);
end
unzip(filePath,dataFolder);
dataset = fullfile(dataFolder,"PolysyncSensorData");
data = load(fullfile(dataset,"sensorData.mat"));
monocamData = data.CameraData;
monocamData
monocamData is a table with two columns:
- timeStamp — Time, in microseconds, at which the image data was captured.
- fileName — Filenames of the images in the data set.
The images are located in the Camera folder in the dataset directory. To load these images for detection and tracking, create an ImageDatastore object by using the imageDatastore function.
imageFolderName = "Camera";
imageFolderPath = fullfile(dataset,imageFolderName);
imds = imageDatastore(imageFolderPath);
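Optionally, before running the detector, you can confirm that the number of images in the datastore matches the number of recorded timestamps. This is a minimal sketch using standard datastore and table properties.

% Verify that each timestamp in monocamData has a corresponding image file.
numImages = numel(imds.Files);
numSamples = height(monocamData);
assert(numImages == numSamples, ...
    "Number of images (%d) does not match number of timestamps (%d).",numImages,numSamples)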
Preview the first image.
imshow(preview(imds));
Detect Lane Boundaries
Create a laneBoundaryDetector object to detect lane boundaries from camera images. The laneBoundaryDetector object requires the Deep Learning Toolbox™ Converter for ONNX™ Model Format support package, which you can install from the Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.
detector = laneBoundaryDetector;
Read an image from the datastore imds.
imgIdx = 5;
I = readimage(imds,imgIdx);
Detect lane boundary points in image coordinates by using the detect method of the laneBoundaryDetector object. Specify different parameters of the lane detector to get optimal performance. Overlay the lane boundary points on the image, and display the overlaid image by using the helperViewLaneOnImage function.
Note: The lane boundary detector detects only the lane boundary points. It does not classify the lane boundary points into classes such as solid and dashed.
laneBoundaryPoints = detect(detector,I,ROI=240,DetectionThreshold=0.3,OverlapThreshold=0.1);
helperViewLaneOnImage(laneBoundaryPoints{1},I);
To generate a real-world scene, the detected lane boundary points must be in the vehicle coordinate system. Specify camera parameters as input to the detect method to obtain lane boundary points in the vehicle coordinate system. If you do not know the camera parameters, you can estimate them. For more information about estimating camera parameters, see Calibrate a Monocular Camera. You can also use the estimateMonoCameraFromScene function to estimate approximate camera parameters directly from a camera image.
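If the only information you have is the image size and a rough horizontal field of view, you can also derive approximate intrinsic parameters from pinhole-camera geometry. This is a minimal sketch, not part of the recorded data set; the 90-degree field of view and the variable names are assumptions for illustration only.

% Approximate intrinsics from image size and an assumed horizontal field of view.
imageSizeApprox = [480 640];                             % [height width] in pixels
hfov = deg2rad(90);                                      % assumed horizontal field of view
fApprox = (imageSizeApprox(2)/2)/tan(hfov/2);            % focal length in pixels
ppApprox = [imageSizeApprox(2)/2 imageSizeApprox(1)/2];  % principal point at the image center
intrinsicsApprox = cameraIntrinsics([fApprox fApprox],ppApprox,imageSizeApprox);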
Specify the camera intrinsic parameters of focal length (fx, fy), principal point (cx, cy), and image size.
intrinsics = data.Intrinsics
intrinsics = struct with fields:
fx: 800
fy: 800
cx: 320
cy: 240
imageSize: [480 640]
Create a cameraIntrinsics object.
focalLength = [intrinsics.fx intrinsics.fy];
principalPoint = [intrinsics.cx intrinsics.cy];
imageSize = intrinsics.imageSize;
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);
Create a monoCamera object using the camera intrinsic parameters, height, and location. Display the object properties.
camHeight = data.cameraHeight;
camLocation = data.cameraLocation;
sensorParams = monoCamera(intrinsics,camHeight,"SensorLocation",camLocation)
sensorParams = monoCamera with properties:
    Intrinsics: [1×1 cameraIntrinsics]
    WorldUnits: 'meters'
    Height: 1.1000
    Pitch: 0
    Yaw: 0
    Roll: 0
    SensorLocation: [2.1000 0]
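To see how the monoCamera object maps image points onto the road surface, you can project a sample pixel into vehicle coordinates by using the imageToVehicle function. This is a minimal sketch; the pixel location is an arbitrary point in the lower half of the image, chosen for illustration only.

% Project a sample image point, assumed to lie on the road surface, into
% vehicle coordinates using the configured monoCamera object.
samplePixel = [320 400];                                 % [x y] image point, arbitrary choice
vehiclePoint = imageToVehicle(sensorParams,samplePixel)  % [x y] position in meters, vehicle frame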
When you specify camera intrinsics as input to the detect method, it returns these outputs:
- laneBoundaryPoints — Lane boundary points in the vehicle coordinate system.
- laneBoundaries — Lane boundaries represented using a parabolic lane boundary model.
Specify different parameters of the lane detector to get optimal performance. Note that, depending on your hardware configuration, the detect method can take a significant amount of time to run.
[laneBoundaryPoints,laneBoundaries] = detect(detector,imds,sensorParams, ...
    ROI=240,DetectionThreshold=0.3,OverlapThreshold=0.1,MiniBatchSize=8);
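If you want to tune the detector parameters before committing to the full sequence, you can run the detector on a small subset of the datastore by using the subset function. This is an optional sketch; the subset size is an arbitrary choice, and the remaining steps of this example assume you ran the detector on the complete datastore.

% Optional: tune detector parameters on the first 20 images only.
imdsSmall = subset(imds,1:20);
[pointsSmall,boundariesSmall] = detect(detector,imdsSmall,sensorParams, ...
    ROI=240,DetectionThreshold=0.3,OverlapThreshold=0.1,MiniBatchSize=8);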
Visualize the lane boundary detections in a bird's-eye-view plot by using the helperPlotDetectedLanesBEV function.
currentFigure = figure(Position=[0 0 1400 600]);
hPlot = axes(uipanel(currentFigure,Position=[0 0 0.5 1],Title="Lane Detections"));
bep = birdsEyePlot(XLim=[0 30],YLim=[-20 20],Parent=hPlot);
cam = axes(uipanel(currentFigure,Position=[0.5 0 0.5 1],Title="Camera View"));
helperPlotDetectedLanesBEV(bep,cam,laneBoundaries,imds);
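You can also inspect a detected boundary numerically. Assuming that laneBoundaries stores one cell per image, each containing an array of parabolic lane boundary objects, this sketch evaluates the lateral position of the first boundary of the previously selected image over a range of longitudinal distances by using the computeBoundaryModel function, which the helper functions in this example also use.

% Evaluate the first parabolic boundary of image imgIdx over 0-30 m ahead of
% the vehicle. The cell-per-image layout of laneBoundaries is an assumption.
firstBoundary = laneBoundaries{imgIdx}(1);
xAhead = linspace(0,30,10);                             % longitudinal distances in meters
yLateral = computeBoundaryModel(firstBoundary,xAhead)   % lateral offsets in meters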
Track Lane Boundaries Using laneBoundaryTracker
If your lane boundary detections contain noise or are not consistent across frames, you can track them by using the laneBoundaryTracker object to obtain consistent boundaries.
Define a laneBoundaryTracker object and specify these properties:
- MeasurementNoise — Specify as diag([0.1,0.1,0.3]) to set the uncertainty in the lane boundary detections.
- FalseAlarmDensity — Specify as 0.001 to define the expected density of false positive detections.
- PostProcessingFcn — Specify as the helperPostProcessLaneBoundaries function to obtain the tracked lane boundaries as laneData objects.
lbTracker = laneBoundaryTracker( ...
    'MeasurementNoise',diag([0.1,0.1,0.3]), ...               % Measurement noise covariance
    'FalseAlarmDensity',0.001, ...                            % Expected false positive detections
    'PostProcessingFcn',@helperPostProcessLaneBoundaries ...  % Function handle to customize the output of the tracker
    );
Specify the detection timestamps for the tracker. The tracker requires timestamps in seconds, so you must convert the sensor timestamps from microseconds to seconds.
Run the tracker with the lane boundary detections and their corresponding timestamps. Specify the ShowProgress name-value argument as true to display a progress bar while tracking lane boundaries in batch mode.
% Load the timestamps generated by the sensor.
timeStamps = double(monocamData.timeStamp);
% The timestamps of the camera sensor are in microseconds. Convert to
% seconds and offset from the first timestamp.
tsecs = timeStamps*(10^-6);
tsecs = tsecs - tsecs(1);
% Track lane boundary detections.
trackedLaneBoundaries = lbTracker(laneBoundaries,tsecs,ShowProgress=true);
Visualize and compare the lane boundaries before and after tracking.
currentFigure = figure(Name="Compare Lane Boundaries",Position=[0 0 1400 600]);
hPlot = axes(uipanel(currentFigure,Position=[0 0 0.5 1],Title="Detected Boundaries"));
bep = birdsEyePlot(XLim=[0 30],YLim=[-20 20],Parent=hPlot);
hPlotSmooth = axes(uipanel(currentFigure,Position=[0.5 0 0.5 1],Title="Tracked Boundaries"));
bepTracked = birdsEyePlot(XLim=[0 30],YLim=[-20 20],Parent=hPlotSmooth);
helperCompareLanes(bep,laneBoundaries,bepTracked,trackedLaneBoundaries);
You can use the laneData object trackedLaneBoundaries as the first input argument to the updateLaneSpec function. Using this function, you can map the tracked lane boundaries to a standard-definition road network to create a high-definition road scene. For more information, see the Generate High Definition Scene from Lane Detections and OpenStreetMap example.
Display the tracked lane boundary data.
trackedLaneBoundaries
trackedLaneBoundaries = laneData with properties:
    TimeStamp: [714×1 double]
    LaneBoundaryData: {714×1 cell}
    LaneInformation: [714×5 struct]
    StartTime: 0
    EndTime: 35.6422
    NumSamples: 714
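To work with the tracked boundaries programmatically, you can read them back into a table by using the readData function of the laneData object, as the helper functions in this example do. This is a minimal sketch.

% Read all tracked lane boundary samples into a table and preview the first rows.
trackedTable = readData(trackedLaneBoundaries,"all");
head(trackedTable)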
You can use the trackedLaneBoundaries data in the Generate High Definition Scene from Lane Detections and OpenStreetMap example to generate an ASAM OpenDRIVE® or RoadRunner scene from these detections.
Correct Lane Boundaries Using Ground Truth Labeler for Scene Generation
The tracker output can sometimes contain inconsistent tracks for several reasons, such as failure to remove all the noise from the detected lane boundaries. To generate an accurate high-definition road scene from lane detections, you must troubleshoot and resolve all the issues in the detections. Use the Ground Truth Labeler app to visualize and correct the noisy detections.
The app requires a groundTruthMultisignal object populated with the tracked lane boundary information. To convert trackedLaneBoundaries to a groundTruthMultisignal object, use the helperLaneDataToLabels helper function.
gTruth = helperLaneDataToLabels(trackedLaneBoundaries,sensorParams,imageFolderPath)
gTruth = groundTruthMultisignal with properties:
    DataSource: [1×1 vision.labeler.loading.ImageSequenceSource]
    LabelDefinitions: [10×7 table]
    ROILabelData: [1×1 vision.labeler.labeldata.ROILabelData]
    SceneLabelData: [0×0 vision.labeler.labeldata.SceneLabelData]
Open the Ground Truth Labeler app.
groundTruthLabeler(gTruth)
The following image shows the lane labels in the Ground Truth Labeler app.
Using the app, you can move lane points, add new labels for missing detections, or delete false labels. After resolving all the issues, you can export the data as a groundTruthMultisignal object to the workspace, and then generate a high-definition road scene. For more information on generating a road scene from labeled data, see the Generate RoadRunner Scene Using Labeled Camera Images and Raw Lidar Data example.
Helper Functions
helperPostProcessLaneBoundaries — Customizes the tracker output and converts lane boundaries into the laneData format.
helperLaneDataToLabels — Processes laneData and creates a groundTruthMultisignal object.
function trackedLanes = helperPostProcessLaneBoundaries(tracks,varargin)
% helperPostProcessLaneBoundaries returns a laneData object containing lane
% boundary detections.

% Use the default post processing function and customize its output in
% laneData format.
lbTracks = laneBoundaryTracker.postProcessParabolicLaneBoundaries(tracks,varargin);
tsecs = varargin{1,4};
% Create an empty laneData object.
trackedLanes = laneData;
for i = 1:numel(lbTracks)
    boundaries = lbTracks{i};
    for j = 1:numel(boundaries)
        info(j) = struct('TrackID',boundaries{j}.TrackID); %#ok<AGROW>
    end
    if ~isempty(boundaries)
        lbs = [boundaries{:}];
        % Add boundaries to the laneData object.
        trackedLanes.addData(boundaries{1}.UpdateTime,[lbs.LaneBoundary],LaneInformation=info)
    else
        % Add boundaries to the laneData object.
        trackedLanes.addData(tsecs(i),{[]})
    end
end
end

function gTruth = helperLaneDataToLabels(ld,sensor,imseqFolder)
% helperLaneDataToLabels returns a groundTruthMultisignal object from
% laneData.

% Read all the lane boundaries from the lane data object and get the
% maximum number of lanes.
allData = readData(ld,'all');
colNames = allData.Properties.VariableNames;
maxLanes = sum(~endsWith(colNames,'Info',IgnoreCase=true)) - 1;

% Create the image data source.
imseqSource = vision.labeler.loading.ImageSequenceSource;
sourceParams = struct;
sourceParams.Timestamps = seconds(ld.TimeStamp);
loadSource(imseqSource,imseqFolder,sourceParams)

% Create the label definitions.
ldc = labelDefinitionCreatorMultisignal;
labelNames = [];
for nl = 1:maxLanes
    labelName = "LaneBoundary"+num2str(nl);
    addLabel(ldc,labelName,labelType.Line);
    addAttribute(ldc,labelName,sprintf('BoundaryNumber'),'Numeric',nl)
    labelNames = [labelNames, labelName];
end
labelDefs = create(ldc);

% Convert lane boundaries from vehicle frame to image frame.
laneBoundaries = cell(height(allData),maxLanes);
laneBoundariesInfo = cell(height(allData),maxLanes);
for i = 1:height(allData)
    boundaries = allData{i,2:maxLanes+1};
    for j = 1:length(boundaries)
        if ~isempty(boundaries{j})
            p = boundaries{j};
            x = linspace(p.XExtent(1),p.XExtent(2),4);
            y = computeBoundaryModel(p,x);
            imagePoints = vehicleToImage(sensor,[x' y']);
            laneBoundaries{i,j} = imagePoints;
            if ~isempty(ld.LaneInformation)
                laneInfo = allData{i,j+maxLanes+1};
                laneBoundariesInfo{i,j} = laneInfo{1}.TrackID;
            else
                laneBoundariesInfo{i,j} = j;
            end
        end
    end
end

laneMarkerTruth = cell(height(allData),maxLanes);
% Store the lane attributes in a struct.
for i = 1:height(allData)
    for j = 1:maxLanes
        boundaryPt = struct;
        boundaryPt.Position = laneBoundaries{i,j};
        boundaryPt.BoundaryNumber = laneBoundariesInfo{i,j};
        laneMarkerTruth{i,j} = boundaryPt;
    end
end

% Create the ground truth multisignal object.
labelData = table2timetable(cell2table(laneMarkerTruth,"VariableNames",labelNames),"RowTimes",seconds(ld.TimeStamp));
roiData = vision.labeler.labeldata.ROILabelData(imseqSource.SignalName,{labelData});
sceneData = vision.labeler.labeldata.SceneLabelData.empty;
gTruth = groundTruthMultisignal(imseqSource,labelDefs,roiData,sceneData);
end
See Also
monoCamera | cameraIntrinsics | trackerGNN (Sensor Fusion and Tracking Toolbox) | singer (Sensor Fusion and Tracking Toolbox) | getMapROI | roadprops | selectActorRoads
Related Topics
- Overview of Scenario Generation from Recorded Sensor Data
- Smooth GPS Waypoints for Ego Localization
- Generate High Definition Scene from Lane Detections and OpenStreetMap
- Generate RoadRunner Scene from Recorded Lidar Data
- Extract Vehicle Track List from Recorded Camera Data for Scenario Generation
- Generate Scenario from Actor Track List and GPS Data