
Project LiDAR point to Image and get depth

James Heaton on 2 Nov 2021
Answered: Sanchari on 25 Apr 2024
Hi, I am looking into depth perception and I have calibrated a camera with a LiDAR. I can use the function projectLidarPointsOnImage, which gives me the desired projection of the points onto the image, but there is no depth associated with these points. How can I get the depth from these projected points? It seems strange to me that the documentation for this function makes no mention of depth anywhere; why else would you project LiDAR points onto an image? I can get the indices from the function, but then I would just use them to index into the original point cloud, and those would be depths relative to the LiDAR, not the camera frame, so I am failing to see how projectLidarPointsOnImage helps at all.

Answers (1)

Sanchari on 25 Apr 2024
Hello James,
The "projectLidarPointsOnImage" function in MATLAB, part of the Lidar Toolbox, is primarily used to project LiDAR point cloud data onto a 2D image plane. This is useful for sensor fusion applications where understanding the spatial relationship between objects seen in camera images and those detected by LiDAR is crucial. However, you're correct that the function itself doesn't directly provide depth information in the camera's coordinate system.
The purpose of this projection is to allow you to correlate the LiDAR points with their corresponding pixels in the image. This is indeed useful for tasks such as object detection, where you might want to use the richer texture information available in the image to classify objects detected by the LiDAR. However, obtaining depth information relative to the camera frame requires a bit more work.
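As a sketch of the basic usage (the variables ptCloud, intrinsics, and tform here are placeholders standing in for your own point cloud, camera intrinsics, and LiDAR-to-camera calibration result):

```matlab
% Sketch only: ptCloud, intrinsics, and tform are assumed to come from
% your own data and LiDAR-camera calibration. tform is the rigid
% transformation that maps points from the LiDAR frame into the camera frame.
[imPts, indices] = projectLidarPointsOnImage(ptCloud, intrinsics, tform);

% imPts   : Nx2 pixel locations of the LiDAR points that fall inside the image
% indices : linear indices of those points in ptCloud.Location
```

The indices output is the link back to the 3D data, which is what the steps below build on.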
Here's a general approach to get depth information for the projected points in the camera's frame of reference:
  1. Projection and Indexing: When you project the LiDAR points onto the camera's image plane using projectLidarPointsOnImage, you can obtain the indices of the LiDAR points that correspond to pixels in the image. This is your starting point.
  2. Extracting Depth Information: The depth of each LiDAR point (in the LiDAR's coordinate system) is typically either its Euclidean range from the sensor or its coordinate along the sensor's forward axis. Once you have the indices of the LiDAR points that project onto the camera's image plane, you do have depth information, but it is still expressed in the LiDAR's coordinate system.
  3. Transforming Depth to the Camera's Coordinate System: To make this depth information useful in the context of the camera's coordinate system, you need to perform a coordinate system transformation. This transformation requires knowing the extrinsic calibration parameters between the LiDAR and camera, which describe the rotation and translation needed to align the two coordinate systems.
If T is the transformation matrix from the LiDAR's coordinate system to the camera's coordinate system, and p_lidar is a point in the LiDAR's coordinate system, then the point in the camera's coordinate system p_camera can be found by:
p_camera = T * [p_lidar; 1];
This transformation gives you the position of the LiDAR points in the camera's coordinate frame, from which you can directly read off depth as the z-coordinate (assuming the camera's z-axis points forward from the camera).
This process requires you to have the extrinsic calibration parameters between the LiDAR and camera, which are typically obtained through a calibration procedure.
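Putting the steps above together, a minimal sketch in MATLAB might look like the following (again assuming ptCloud, intrinsics, and tform come from your own data and calibration, with tform mapping LiDAR coordinates into the camera frame):

```matlab
% Hedged sketch: ptCloud, intrinsics, and tform (a rigidtform3d from the
% LiDAR frame to the camera frame) are assumed to exist from calibration.
[imPts, indices] = projectLidarPointsOnImage(ptCloud, intrinsics, tform);

% Keep only the LiDAR points that actually project into the image
pts_lidar = ptCloud.Location(indices, :);          % Nx3, LiDAR frame

% Transform those points into the camera's coordinate frame
pts_camera = transformPointsForward(tform, pts_lidar);   % Nx3, camera frame

% Depth of each projected pixel, measured along the camera's optical (z) axis
depth = pts_camera(:, 3);
```

After this, imPts(k,:) is the pixel location of a projected point and depth(k) is its depth in the camera frame, which is the pairing the original question was after.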
You can also refer to the following file in MathWorks File Exchange about DenseDepthMap: https://in.mathworks.com/matlabcentral/fileexchange/68587-densedepthmap
Hope this helps!

Release

R2021a