Choose SLAM Workflow Based on Sensor Data
You can use Computer Vision Toolbox™, Navigation Toolbox™, and Lidar Toolbox™ for Simultaneous Localization and Mapping (SLAM). SLAM is widely used in applications including automated driving, robotics, and unmanned aerial vehicles (UAVs). To learn more about SLAM, see What is SLAM?.
Choose SLAM Workflow
To choose the right SLAM workflow for your application, consider what type of sensor data you are collecting. MATLAB® supports SLAM workflows that use images from a monocular or stereo camera system, or point cloud data, including 2-D and 3-D lidar data.
Visual SLAM
There are several approaches to visual SLAM, each suited to different sensors and applications. To compute 3-D points, you need to measure the depth from the camera to points on the 3-D object. Monocular, stereo, and RGB-D sensors provide this depth information using different techniques.
- Monocular visual SLAM — Uses a single camera to estimate motion over time. However, it cannot determine absolute scale, such as the true size or distance of objects. You can use the `monovslam` object to estimate a camera's trajectory and reconstruct a sparse 3-D map from a sequence of monocular images.
- Stereo visual SLAM — Uses a stereo camera setup to derive depth from disparities between the two images, providing absolute scale. You can use the `stereovslam` object to estimate camera motion and build a scaled 3-D map using synchronized stereo image pairs.
- RGB-D visual SLAM — Uses a depth-sensing camera that captures both color and depth information, providing precise, per-pixel depth measurements. You can use the `rgbdvslam` object to track camera pose and construct a dense 3-D map by combining RGB and depth data.
- Visual-inertial SLAM — Fuses visual data with motion information from an inertial measurement unit (IMU), enhancing robustness and accuracy, particularly during rapid movements or in visually challenging environments. You can use vSLAM objects to fuse IMU measurements with monocular, stereo, and RGB-D data.
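As a minimal sketch of the monocular workflow described above, assuming Computer Vision Toolbox is installed (the intrinsics values and image folder are illustrative placeholders, not values from this page):

```matlab
% Minimal monocular visual SLAM sketch using the monovslam object.
% The camera intrinsics below are illustrative; use your calibrated values.
intrinsics = cameraIntrinsics([535.4 539.2],[320.1 247.6],[480 640]);
vslam = monovslam(intrinsics);

% Feed a sequence of monocular images from a datastore you create.
imds = imageDatastore("imageFolder");   % placeholder folder name
while hasdata(imds)
    addFrame(vslam, read(imds));
end

% Retrieve the estimated camera trajectory and the sparse 3-D map points.
camPoses  = poses(vslam);
xyzPoints = mapPoints(vslam);
```

Because monocular SLAM cannot recover absolute scale, the returned trajectory and map points are consistent only up to an unknown scale factor.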
This table summarizes the key features available for visual SLAM.
| Sensor Data | Features | Topics | Examples | Toolbox | Code Generation |
|---|---|---|---|---|---|
| Monocular images | Performant and deployable workflows: `monovslam`. Modular workflows. | | | Computer Vision Toolbox | |
| Stereo images | Performant and deployable workflows: `stereovslam`. Modular workflows. | | | Computer Vision Toolbox | |
| RGB-D images | Performant and deployable workflows: `rgbdvslam`. Modular workflows. | | | Computer Vision Toolbox | |
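For the stereo row of the table, a hedged sketch of the workflow follows, assuming Computer Vision Toolbox and that `stereovslam` is constructed from camera intrinsics and the stereo baseline (all numeric values and datastore names here are illustrative):

```matlab
% Minimal stereo visual SLAM sketch using the stereovslam object.
% Intrinsics and baseline are illustrative; use your calibrated values.
intrinsics = cameraIntrinsics([1109 1109],[640 360],[720 1280]);
baseline = 0.12;   % distance between stereo cameras, in meters
vslam = stereovslam(intrinsics, baseline);

% Feed synchronized left-right image pairs from datastores you create.
imdsLeft  = imageDatastore("leftImages");    % placeholder folder names
imdsRight = imageDatastore("rightImages");
while hasdata(imdsLeft)
    addFrame(vslam, read(imdsLeft), read(imdsRight));
end

% Retrieve the estimated trajectory and scaled 3-D map points.
camPoses  = poses(vslam);
xyzPoints = mapPoints(vslam);
```

Unlike the monocular case, the known baseline gives the trajectory and map absolute scale.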
Point Cloud, 2-D, and 3-D Lidar SLAM
This table summarizes the key features available for SLAM with point cloud data, including 2-D and 3-D lidar data.
| Sensor Data | Features | Topics | Examples | Toolbox | Code Generation |
|---|---|---|---|---|---|
| 2-D lidar scans | | | | | |
| Point cloud data | | | | | |
| 3-D lidar scans | Feature-based: | | | | |
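A minimal sketch of the 2-D lidar row above, assuming Navigation Toolbox and a cell array of `lidarScan` objects that you supply (the resolution, range, and loop-closure values are illustrative):

```matlab
% Minimal 2-D lidar SLAM sketch using the lidarSLAM object from
% Navigation Toolbox. Parameter values are illustrative only.
maxLidarRange = 8;     % maximum usable lidar range, in meters
mapResolution = 20;    % occupancy map cells per meter
slamAlg = lidarSLAM(mapResolution, maxLidarRange);
slamAlg.LoopClosureThreshold = 200;
slamAlg.LoopClosureSearchRadius = 8;

% Add each scan; scans is a cell array of lidarScan objects you provide.
for i = 1:numel(scans)
    addScan(slamAlg, scans{i});
end

% Retrieve the optimized scans and poses, then build an occupancy map.
[scansSLAM, optimizedPoses] = scansAndPoses(slamAlg);
occMap = buildMap(scansSLAM, optimizedPoses, mapResolution, maxLidarRange);
```

The loop-closure properties control how aggressively the algorithm searches for revisited places; tuning them trades map accuracy against processing time.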