
Perception-Based Live Parking Spot Detection Using Unreal Engine Simulation


The Perception-Based Parking Spot Detection Using Unreal Engine Simulation example showed how to detect and classify parking spots with a side-mounted camera simulated in Unreal Engine® (UE), following these steps:

  1. Building a binary lane markings map using the vehicle location and semantic segmentation ground truth provided by the simulated sensors in the Unreal environment.

  2. Detecting lanes using the Hough transform and analyzing the results to identify parking spots.

  3. Using the semantic segmentation of vehicles to determine whether the detected parking spots are occupied.

This example improves the previously proposed pipeline by:

  1. Replacing the ground truth vehicle location with algorithms for localization and parking spot detection, which makes the new approach more realistic. This is done by introducing a front-facing stereo camera and a visual SLAM algorithm.

  2. Replacing the ground truth lane markings with a pretrained CNN for semantic segmentation.

  3. Replacing the Hough transform with a RANSAC-based line-fitting algorithm to improve the robustness of line detection.

However, a few challenges arise when using SLAM and CNNs, mainly drift in the camera localization and blurry line edges. These lead to an accumulation of noise while mapping the lane markings, resulting in large errors in the generated map that impact line detection. The two images below illustrate the errors in the global map generated using ideal ground truth information (left) and using visual SLAM with semantic segmentation (right).


The previous example relied on building a global map. However, you can avoid the issues discussed above by building a local map instead, which reduces the impact of error accumulation. To do this, map only the areas closest to the vehicle and restrict the map to a few keyframes at a time by using a sliding window. You can then detect parking spots in this local map "live" as the vehicle moves through the parking lot, instead of first traversing an entire section of the lot. Besides reducing the mapping and line detection errors, this approach is also more realistic and practical.

Eventually, the detected parking spot location can be refined further before attempting to execute the parking maneuver, to account for any drift in the vehicle's location.

The parking spot detection using local map building involves the following steps:

  • Collecting N side camera images and building a local line marker map using the front stereo camera SLAM results.


  • Detecting parking line markers and vehicles within the local map area using deep learning based semantic segmentation.


  • Analyzing the semantic segmentation results to determine if a parking spot is present and if it is occupied.


The example stops when an available parking spot is found.
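As a rough illustration, the steps above can be sketched as a loop that accumulates frames into a local map and stops once a spot is found. The functions segmentFrame and findFreeSpot and the window length N below are hypothetical stand-ins, not part of the shipped model:

```matlab
% Rough sketch of the live detection loop. segmentFrame and findFreeSpot
% are stand-ins for the segmentation and spot-detection steps; here,
% segmentFrame simply flags frame 7 as containing a free spot.
segmentFrame = @(k) mod(k, 7) == 0;           % stub detection result
findFreeSpot = @(localMap) any(localMap);     % stub spot check

N = 4;                     % frames per local map (assumed value)
localMap = [];
spotFound = false;
for k = 1:20
    localMap(end+1) = segmentFrame(k);        %#ok<SAGROW>
    if numel(localMap) >= N
        if findFreeSpot(localMap)
            spotFound = true;
            break                              % stop once a spot is found
        end
        localMap(1) = [];                      % slide the window forward
    end
end
```

Because the window only ever holds N frames, each invocation of the spot detector works on a small, bounded map.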

Although the parking spot detection algorithm is invoked more times than in the Perception-Based Parking Spot Detection Using Unreal Engine Simulation example, which can slow down processing, overall performance improves because each local map to process is small.

Construct Parking Lot Simulation


Use the Simulation 3D Scene Configuration block to set up the simulation environment. Select the built-in Large Parking Lot scene, which contains several parked vehicles. Set up an ego vehicle moving along the specified reference path by using the Simulation 3D Vehicle with Ground Following block. This example uses a prerecorded reference trajectory. You can specify a trajectory interactively by selecting a sequence of waypoints. For more information, see the Select Waypoints for Unreal Engine Simulation example.

% Load reference path data
refPoses = load("refPoses.mat");

After adding the ego vehicle, you can attach a camera sensor to it using the Simulation 3D Camera block. In this example, the main camera used to map the environment is mounted on the left mirror of the ego vehicle with a rotation offset to point to the side of the vehicle. In addition, a stereo camera is mounted on the front of the vehicle for use with SLAM. You can use the Camera Calibrator app to estimate the intrinsics of the actual camera that you want to simulate.

The parked cars are randomly placed in the parking lot, while making sure to leave at least three available parking spots.

% Open the model
modelName = "ParkingLaneMarkingsDetection";
open_system(modelName)

% Set side camera intrinsic parameters
focalLength    = [1109 1109]; % In pixels
principalPoint = [401 401];   % In pixels
imageSize      = [801 801];   % In pixels

% Set stereo camera intrinsic parameters
focalLengthStereo    = [1109 1109];  % In pixels
principalPointStereo = [640 360];  % In pixels
imageSizeStereo      = [720 1280]; % In pixels
baseline             = 0.5; % In meters
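If needed, the parameters above can be collected into cameraIntrinsics objects (Computer Vision Toolbox) for use with other vision components; the variable names below are assumptions:

```matlab
% Collect the parameters above into cameraIntrinsics objects so they can
% be passed to other vision components. Variable names here are assumed.
sideCamIntrinsics   = cameraIntrinsics(focalLength, principalPoint, imageSize);
stereoCamIntrinsics = cameraIntrinsics(focalLengthStereo, ...
    principalPointStereo, imageSizeStereo);
```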

% Randomly populate the parking lot with vehicles
idxFree = helperAddParkedVehicles(modelName, parkedPoses);


Find a Free Parking Spot

Using the front stereo and side-mounted camera, the constructLocalMap MATLAB Function block in the ParkingLaneMarkingsDetection model implements the algorithm to build a local map, using these steps:

  1. Detect parking lane markings and parked vehicles using semantic segmentation. This example uses a pretrained network to segment only road markings and vehicles from the left camera image. For more information on how to train your own semantic segmentation network, see the Develop a Neural Network for Camera Semantic Prediction Using Unreal Engine example.

  2. Transform line and vehicle detections from the image coordinates to the vehicle coordinates by applying a projective transformation using the transformImage method of the birdsEyeView object.

  3. Transform detections from the local vehicle coordinates to the world coordinates using the vehicle odometry. This example relies on the odometry provided by the stereo camera-based SLAM system implemented in HelperStereoVisualSLAMSystem. See the Stereo Visual Simultaneous Localization and Mapping example for details on how to implement a stereo visual SLAM system.

  4. Build a bird's-eye-view local map of the parking lot by incrementally merging the detections in the world coordinates using a sliding window, which keeps the size of the local map bounded. The local map consists of two layers: lineMarkings and parkedVehicles. The parkedVehicles layer contains the parked vehicles in the scene, representing obstacles. The lineMarkings layer contains the parking lane markings used to determine the locations of parking spots.
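The vehicle-to-world transformation in step 3 amounts to a planar rotation and translation. The sketch below uses illustrative values; the pose and points are assumptions, not data from the model:

```matlab
% Sketch of step 3: rotate and translate detections from vehicle
% coordinates into world coordinates using a planar pose [x, y, theta].
% The pose and points below are illustrative values.
pose = [10, 5, pi/4];                              % world pose from SLAM
R = [cos(pose(3)), -sin(pose(3)); ...
     sin(pose(3)),  cos(pose(3))];                 % 2-D rotation matrix
ptsVehicle = [2 0; 2 1];                           % detections, one per row
ptsWorld   = (R * ptsVehicle.').' + pose(1:2);     % rotate, then translate
```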

After a limited number of frames has been accumulated to build a local map, the parking spot detection algorithm is invoked to check if a free spot is available. Once it successfully detects a free spot, the simulation stops and the detected parking spot is displayed.

The parking spot detection algorithm consists of the following steps:

  1. Detect vertical and horizontal lines in the local map using the RANSAC algorithm.

  2. Find the intersections and endpoints of the detected lines.

  3. Construct groups of 4 points using nearest neighbors.

  4. Check if any group of points forms a rectangle by verifying the side lengths, interior angles, and surface area.
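The RANSAC line fit in step 1 can be illustrated with a minimal base-MATLAB sketch. This is not the shipped helperFindLinesRANSAC; the data, threshold, and iteration count are illustrative:

```matlab
% Minimal 2-D RANSAC line fit on synthetic data (illustrative only; the
% shipped helperFindLinesRANSAC is more involved). The data is a noisy
% horizontal line y = 2 plus three outliers.
rng(0);                                            % reproducible sampling
pts = [linspace(0, 10, 50).', 2 + 0.01*randn(50, 1)];
pts = [pts; 3 8; 7 1; 5 9];                        % outliers
bestInliers = [];
for iter = 1:100
    s = pts(randperm(size(pts, 1), 2), :);         % sample two points
    d = s(2, :) - s(1, :);
    n = [-d(2), d(1)] / norm(d);                   % unit normal of the line
    dist = abs((pts - s(1, :)) * n.');             % point-to-line distances
    inliers = find(dist < 0.05);                   % inlier threshold
    if numel(inliers) > numel(bestInliers)
        bestInliers = inliers;                     % keep the best consensus
    end
end
```

Unlike the Hough transform used previously, the consensus set rejects the outliers directly, which makes the fit robust to segmentation noise.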

For more information on the parking spot detection algorithm, see Perception-Based Parking Spot Detection Using Unreal Engine Simulation, where the helper functions used in constructLocalMap are explained.

if ismac
    error(['3D Simulation is supported only on Microsoft', char(174), ...
        ' Windows', char(174), '.']);
end

% Simulate the model
sim(modelName);

After the simulation finishes, you can visualize the final results.


% Close the model
close_system(modelName, 0)

Helper Functions

Helper functions used in this example are included in separate files. These are the core functions used by the parking spot detection algorithm.

helperFindLinesRANSAC finds lines from semantic segmentation results using RANSAC.

helperFindLineIntersections finds the intersection points of two lines.

helperGetVertices constructs a set of vertices using the endpoints of the lines and their intersections.

helperFindParkingSpots finds parking spots constructed by parking lines.

helperFindUnqiueLanes calculates a set of features for each lane to remove redundant ones.

helperDisplayFinalResults displays the detected parking spot in top view.
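For reference, the line-intersection step can be expressed as a small linear solve. The sketch below is illustrative and not the shipped helperFindLineIntersections:

```matlab
% Intersect two lines given in the form a*x + b*y = c, one row [a b c]
% per line (illustrative sketch, not the shipped helper). Near-parallel
% lines return [NaN NaN].
line1 = [1 0 2];                  % vertical line x = 2
line2 = [0 1 3];                  % horizontal line y = 3
A = [line1(1:2); line2(1:2)];
c = [line1(3); line2(3)];
if abs(det(A)) < 1e-12
    p = [NaN, NaN];               % lines are (near-)parallel
else
    p = (A \ c).';                % intersection point [x y]
end
```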