Preprocess Lane Detections for Scenario Generation
This example shows how to preprocess lane detection data to organize it into a format supported by Scenario Builder for Automated Driving Toolbox™.
Scenario Builder for Automated Driving Toolbox provides workflows to generate high-definition (HD) scenes from lane detections. For more information on generating a scene from lane detections, see the Generate High Definition Scene from Lane Detections example.
In this example, you organize detected lane boundary points into the table format required by the updateLaneSpec function.
Desired Format of Lane Boundary Detections
The updateLaneSpec function accepts lane detection input as an M-by-N table, where M is the number of lane detection samples and N-1 is the maximum number of detected lane boundaries across all samples. The lane detections must follow the vehicle coordinate system used by Automated Driving Toolbox. As such, the Y-coordinate values for lane boundary points to the left of the ego vehicle must be positive, and those for lane boundary points to the right must be negative. For more information on the vehicle coordinate system, see Coordinate Systems in Automated Driving Toolbox.
The table must specify at least three columns: the first column for timestamps, and the remaining columns for the lane boundaries of the ego lane. You must specify detected lane boundaries as parabolicLaneBoundary, cubicLaneBoundary, or clothoidLaneBoundary objects. The detected lane boundaries must be in left-to-right order, with respect to the travel direction of the ego vehicle, as shown in this image. The image shows a camera frame with overlaid lane boundary detections, and the organization of the corresponding lane boundary data.
Inspect Lane Detection Data
This example shows how to generate lane boundary models from lane boundary points represented in the vehicle coordinate system. If your data specifies pixel locations for detected lane boundary points, you must convert them to the vehicle coordinate system by using the imageToVehicle function.
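For reference, a minimal sketch of that conversion is shown below. The camera intrinsics, mounting height, pitch, and pixel locations here are placeholder assumptions for illustration, not values from this example's data.

```matlab
% Sketch: convert pixel locations of lane boundary points to vehicle
% coordinates. All camera parameter values below are assumptions.
focalLength    = [800 800];        % [fx fy] in pixels (assumed)
principalPoint = [320 240];        % [cx cy] in pixels (assumed)
imageSize      = [480 640];        % [rows cols] (assumed)
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);

height = 1.5;                      % camera mounting height in meters (assumed)
monoCam = monoCamera(intrinsics,height);

imagePoints = [340 400; 350 380; 360 360];        % hypothetical [x y] pixels
vehiclePoints = imageToVehicle(monoCam,imagePoints); % [x y] in meters
```

The returned points follow the vehicle coordinate system, so you can pass them directly to the fitting functions used in the rest of this example.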
This example uses raw lane detection data stored as a column vector of structures. Load the recorded lane detection data into the workspace, and display the fields of the first structure in the data.
load("LaneDetectionCoordinates.mat")
laneDetectionCoordinates(1)
ans = struct with fields:
timeStamp: 1461634404927776
left: [1×1 struct]
right: [1×1 struct]
secondLeft: [1×1 struct]
secondRight: [1×1 struct]
Each element of the vector is a structure with five fields. The first field contains the timestamp, and the remaining fields contain data for four lane boundaries. The left and right fields contain data for the left and right boundaries, respectively, of the ego lane. The secondLeft field contains data for the lane boundary to the left of the left ego lane boundary. The secondRight field contains data for the lane boundary to the right of the right ego lane boundary.
The data for each lane boundary consists of a structure with two fields: lane boundary points and the boundary type.
Display the structure of the left boundary data for the first timestamp.
laneDetectionCoordinates(1).left
ans = struct with fields:
Points: [11×2 double]
BoundaryType: 1
You can use the findParabolicLaneBoundaries function to generate a parabolicLaneBoundary lane boundary model directly from the lane detection points. However, to generate a cubic or clothoid lane boundary model, you must first use the fitPolynomialRANSAC function to find the polynomial coefficients of the model, and then use the cubicLaneBoundary or clothoidLaneBoundary object, respectively. This example shows the steps to generate a cubic lane boundary model.
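If a parabolic model is sufficient for your workflow, you can skip the coefficient-fitting step entirely. A minimal sketch, using the boundary points from the first sample and the same approximate boundary width as the rest of this example:

```matlab
% Sketch: fit a parabolic lane boundary model directly from the detected
% points, without a separate polynomial-fitting step.
approxBoundaryWidth = 0.15;   % approximate boundary width in meters
leftBoundary = findParabolicLaneBoundaries( ...
    laneDetectionCoordinates(1).left.Points,approxBoundaryWidth);
```

The function returns parabolicLaneBoundary objects, which the updateLaneSpec function also accepts.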
Find Coefficients of Lane Boundary Models
Find the cubic polynomial coefficients that fit the detected lane boundary points by using the fitPolynomialRANSAC function. Specify a degree of 3 and a maxDistance of half the approximate boundary width. For this example, specify the approximate boundary width as 0.15 meters.
approxBoundaryWidth = 0.15;
Left = {}; Right = {}; SecondLeft = {}; SecondRight = {};
for i = 1:length(laneDetectionCoordinates)
    Left{i} = fitPolynomialRANSAC(laneDetectionCoordinates(i).left.Points,3,approxBoundaryWidth/2);
    Right{i} = fitPolynomialRANSAC(laneDetectionCoordinates(i).right.Points,3,approxBoundaryWidth/2);
    SecondLeft{i} = fitPolynomialRANSAC(laneDetectionCoordinates(i).secondLeft.Points,3,approxBoundaryWidth/2);
    SecondRight{i} = fitPolynomialRANSAC(laneDetectionCoordinates(i).secondRight.Points,3,approxBoundaryWidth/2);
end
The function returns the cubic polynomial coefficients required to generate the cubicLaneBoundary model.
Generate Lane Boundary Models
Generate a cubic lane boundary model from the cubic polynomial coefficients by using the cubicLaneBoundary object.
for i = 1:length(laneDetectionCoordinates)
    Left{i} = cubicLaneBoundary(Left{i});
    Right{i} = cubicLaneBoundary(Right{i});
    SecondLeft{i} = cubicLaneBoundary(SecondLeft{i});
    SecondRight{i} = cubicLaneBoundary(SecondRight{i});
end
By default, the cubicLaneBoundary object returns lane boundaries with a BoundaryType value of Solid. Update the types of the lane boundaries by using the recorded lane detection data.
for i = 1:length(laneDetectionCoordinates)
    Left{i}.BoundaryType = laneDetectionCoordinates(i).left.BoundaryType;
end
Organize Data into Table Format
Organize the generated lane boundary models into the table format required by the updateLaneSpec function.
laneDetections = table({laneDetectionCoordinates.timeStamp}', ...
    SecondLeft',Left',Right',SecondRight', ...
    VariableNames=["Timestamp","Boundary1","Boundary2","Boundary3","Boundary4"]);
Display the first five entries of the laneDetections table.
laneDetections(1:5,:)
ans=5×5 table
Timestamp Boundary1 Boundary2 Boundary3 Boundary4
____________________ _______________________ _______________________ _______________________ _______________________
{[1461634404927776]} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary}
{[1461634404977663]} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary}
{[1461634405027158]} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary}
{[1461634405078119]} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary}
{[1461634405127767]} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary} {1×1 cubicLaneBoundary}
Handle Variable Number of Lanes
If the number of lanes in the recorded data changes over time, the number of columns in the table must match the maximum number of detected lane boundaries across all samples. You can represent missing lane detections by using empty cells, as shown in this image. The image shows a road on which the ego vehicle has traveled, and the corresponding lane boundary detections. Notice that there are four lane boundary detections for the first few timestamps, but the fifth column of the table has empty cells for the later timestamps because the number of detected lane boundaries drops to three.
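For example, you can clear the cells for samples that lack a fourth detection before building the table. This sketch assumes a hypothetical logical vector, hasFourthBoundary, with one entry per sample indicating whether a fourth boundary was detected; it is not part of this example's recorded data.

```matlab
% Sketch: mark missing fourth-boundary detections with empty cells.
% hasFourthBoundary is a hypothetical logical vector, one entry per sample.
for i = 1:length(laneDetectionCoordinates)
    if ~hasFourthBoundary(i)
        SecondRight{i} = [];   % empty cell represents a missing detection
    end
end
```

Because the boundary columns are cell arrays, rows with empty cells and rows with lane boundary objects can coexist in the same table.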
See Also
updateLaneSpec | imageToVehicle | findParabolicLaneBoundaries | cubicLaneBoundary | parabolicLaneBoundary
Related Topics
- Overview of Scenario Generation from Recorded Sensor Data
- Smooth GPS Waypoints for Ego Localization
- Extract Lane Information from Recorded Camera Data for Scene Generation
- Generate High Definition Scene from Lane Detections
- Extract Vehicle Track List from Recorded Camera Data for Scenario Generation
- Generate Scenario from Actor Track List and GPS Data