Automated Driving System Toolbox™ provides algorithms and tools for designing and testing advanced driver assistance systems (ADAS) and autonomous driving systems. You can automate ground-truth labeling, generate synthetic sensor data for driving scenarios, perform multisensor fusion, and design and simulate vision systems.
For open-loop testing, the system toolbox provides a customizable workflow app and evaluation tools that let you automate ground-truth labeling and test your algorithms against that ground truth. For hardware-in-the-loop (HIL) testing and desktop simulation of sensor fusion and control logic, you can generate driving scenarios and simulate object lists from radar and camera sensors.
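The sketch below shows this workflow at its smallest: a driving scenario with an ego car, one target vehicle, and a radar sensor model producing synthetic detections. It is a minimal sketch, assuming the drivingScenario and radarDetectionGenerator APIs from the system toolbox; the road geometry, vehicle poses, and sensor parameters are illustrative.

    % Build a simple scenario: a straight road with an ego car and one target
    scenario = drivingScenario;
    road(scenario, [0 0; 80 0]);                 % 80 m straight road segment
    egoCar = vehicle(scenario, 'Position', [0 0 0]);
    vehicle(scenario, 'Position', [30 0 0]);     % target vehicle 30 m ahead

    % Radar sensor model mounted on the ego vehicle
    radar = radarDetectionGenerator('SensorIndex', 1, ...
        'MaxRange', 100, 'FieldOfView', [20 5]);

    % Synthetic detections of the other actors, in ego vehicle coordinates
    tgts = targetPoses(egoCar);
    dets = radar(tgts, scenario.SimulationTime);

The resulting object list can be fed to the same fusion and control logic you would later run against real sensor data.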
Automated Driving System Toolbox supports multisensor fusion development with Kalman filters, assignment algorithms, motion models, and a multiobject tracking framework. Algorithms for vision system design include lane marker detection, vehicle detection using machine learning (including deep learning), and image-to-vehicle coordinate transforms.
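As a minimal sketch of that tracking framework, the example below builds a multiObjectTracker around a constant-velocity linear Kalman filter and updates it with one hand-made detection; the measurement values and thresholds are illustrative, not prescriptive.

    % Multi-object tracker using a constant-velocity linear Kalman filter
    tracker = multiObjectTracker( ...
        'FilterInitializationFcn', @initcvkf, ...
        'ConfirmationParameters', [2 3], ...     % confirm after 2 hits in 3 updates
        'NumCoastingUpdates', 5);                % drop a track after 5 misses

    % A single position measurement [x; y; z] in meters at time 0 s
    det = objectDetection(0, [30; 1; 0]);

    % Call once per time step with the current detections;
    % tracks are confirmed only after enough associated detections
    confirmedTracks = updateTracks(tracker, {det}, 0);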
Learn the basics of Automated Driving System Toolbox
Camera sensor configuration, image-to-vehicle coordinate system transform, bird’s-eye-view image transform (sketched after this list)
Interactive ground truth labeling for object detection, semantic segmentation, and image classification
Object and lane boundary detections using machine learning and deep learning, lidar processing (vehicle detection sketched after this list)
Object tracking and multisensor fusion, bird’s-eye plot of detections and object tracks (bird’s-eye plot sketched after this list)
Test driving algorithms using generated scenarios and synthetic detections from radar and camera sensor models
Path planning, costmaps, geographic map display, vehicle controllers (path planning sketched after this list)
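For the sensor configuration and coordinate transform topic above, a minimal sketch assuming a monocular camera mounted 1.5 m above the road; the intrinsics and the sample points are illustrative.

    % Camera intrinsics (pixels) and mounting height (meters)
    intrinsics = cameraIntrinsics([800 800], [320 240], [480 640]);
    sensor = monoCamera(intrinsics, 1.5);

    % Map an image point (x, y in pixels) to vehicle coordinates (meters)
    vehiclePoint = imageToVehicle(sensor, [400 360]);

    % Bird's-eye-view transform of the area 3-30 m ahead, 6 m to each side
    bevConfig = birdsEyeView(sensor, [3 30 -6 6], [300 200]);
    % topDownImage = transformImage(bevConfig, I);   % I is a camera frame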
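For the detection topic above, a sketch using the pretrained ACF vehicle detector; highway.png is the sample image used in the detector's documentation and is assumed to be on the MATLAB path.

    % Detect vehicles in a single frame with a pretrained ACF detector
    detector = vehicleDetectorACF();
    I = imread('highway.png');                   % sample image (assumed available)
    [bboxes, scores] = detect(detector, I);
    imshow(insertObjectAnnotation(I, 'rectangle', bboxes, scores))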
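For the bird's-eye plot topic above, a sketch that plots two made-up detection positions in ego-centered vehicle coordinates.

    % Bird's-eye plot covering 60 m ahead and 15 m to each side of the ego car
    bep = birdsEyePlot('XLim', [0 60], 'YLim', [-15 15]);
    plotter = detectionPlotter(bep, 'DisplayName', 'radar detections');
    plotDetection(plotter, [20 3; 35 -2])        % one [x y] row per detection, meters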
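For the path planning topic above, a sketch assuming the vehicleCostmap and pathPlannerRRT APIs; the map contents and the start and goal poses are illustrative.

    % Costmap over a 50 m x 50 m area with one square obstacle (1 m cells)
    costVal = zeros(50);
    costVal(20:30, 20:30) = 1;                   % occupied cells
    costmap = vehicleCostmap(costVal, 'CellSize', 1);

    % Plan a collision-free path between two poses ([x y theta], theta in degrees)
    planner = pathPlannerRRT(costmap);
    refPath = plan(planner, [5 5 0], [45 45 0]); % returns a driving.Path object
    plot(costmap)                                % visualize the costmap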