As deep neural networks become part of engineered systems, particularly in safety-critical applications, it is crucial to ensure their reliability and robustness. Deep Learning Toolbox Verification Library lets you rigorously assess and test deep neural networks.
With Deep Learning Toolbox Verification Library, you can:
- Verify properties of your deep neural network such as robustness to adversarial examples
- Estimate how sensitive your network predictions are to input perturbations
- Create a distribution discriminator that separates data into in- and out-of-distribution for runtime monitoring
- Deploy a runtime monitoring system alongside your network to oversee its performance
- Walk through a case study to verify an airborne deep learning system
Verify Deep Neural Network Robustness for Classification
Boost your network’s robustness against adversarial examples (subtly altered inputs designed to mislead the network) using formal methods. Because formal verification reasons over an entire continuous range of inputs rather than individual test cases, it can prove that predictions remain consistent under bounded perturbations and guide training enhancements that improve the network’s reliability and accuracy.
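A minimal sketch of this workflow in MATLAB, assuming you already have a trained dlnetwork `net`, a formatted dlarray test image `X`, and its ground-truth categorical label `label`; the perturbation size here is an illustrative choice:

```matlab
% Define an L-infinity perturbation bound around the test input.
perturbation = 0.01;
XLower = X - perturbation;   % lower bound of the input set
XUpper = X + perturbation;   % upper bound of the input set

% Formally verify that every input in [XLower, XUpper] keeps the
% ground-truth class. Each result is "verified", "violated", or "unproven".
result = verifyNetworkRobustness(net, XLower, XUpper, label);
summary(result)
```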
Estimate Deep Neural Network Output Bounds for Regression
Estimate the lower and upper output bounds of your network over a given range of inputs using formal methods. These bounds show the network’s possible outputs under input perturbations, helping ensure reliable performance in applications such as control systems and signal processing.
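A minimal sketch, assuming a trained regression dlnetwork `net` and a formatted dlarray input `X`; again, the perturbation size is illustrative:

```matlab
% Define the input range to analyze.
perturbation = 0.05;
XLower = X - perturbation;
XUpper = X + perturbation;

% Compute guaranteed lower and upper bounds on the network outputs for
% every input inside the hyperrectangle [XLower, XUpper].
[YLower, YUpper] = estimateNetworkOutputBounds(net, XLower, XUpper);
```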
Build Safe Deep Learning Systems with Runtime Monitoring
Incorporate runtime monitoring with out-of-distribution detection to build safe deep learning systems. Continuously checking whether incoming data aligns with the training distribution helps you decide whether to trust the network’s output or redirect it for safe handling, enhancing system safety and reliability.
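A minimal sketch of this runtime-monitoring pattern, assuming a trained classification dlnetwork `net`, in-distribution training data `XTrain` (a formatted dlarray), and a new observation `XNew`; the baseline softmax method used here is one of several the library supports:

```matlab
% Fit a distribution discriminator from the network's softmax scores on
% the in-distribution training data. With no out-of-distribution data
% provided, the threshold is chosen from XTrain alone.
discriminator = networkDistributionDiscriminator(net, XTrain, [], "baseline");

% At run time, check whether a new observation looks in-distribution
% before trusting the network's prediction.
if isInNetworkDistribution(discriminator, XNew)
    scores = predict(net, XNew);   % trust the network output
else
    % Redirect for safe handling, e.g., a fallback controller or operator.
end
```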
Case Study: Verifying an Airborne Deep Learning System
Explore a case study to verify an airborne deep learning system in line with aviation industry standards such as DO-178C, ARP4754A, and prospective EASA and FAA guidelines. This case study provides a comprehensive view of the steps necessary to fully comply with industry standards and guidelines for deep learning systems.
Constrained Deep Learning
Constrained deep learning is an approach to training deep neural networks that incorporates domain-specific constraints into the learning process. By building these constraints into the construction and training of neural networks, you can guarantee desirable behavior in safety-critical scenarios where such guarantees are paramount. For example, constraining a network to be monotonic guarantees that its output never decreases as a given input increases.
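As one illustration of the idea (a generic technique, not a specific library API): with monotone activations such as ReLU, projecting a fully connected network’s weights to be non-negative after each update yields a network whose output is monotonically non-decreasing in every input. A sketch of that projection step inside a custom training loop, with hypothetical layer sizes:

```matlab
% Small fully connected network with monotone (ReLU) activations.
net = dlnetwork([
    featureInputLayer(4)
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(1)]);

% Inside a custom training loop, after each gradient update (for
% example, after adamupdate), project all weights onto the
% non-negative orthant to preserve monotonicity.
for i = 1:height(net.Learnables)
    if net.Learnables.Parameter(i) == "Weights"
        net.Learnables.Value{i} = max(net.Learnables.Value{i}, 0);
    end
end
```

Biases are left unconstrained; only the weights affect monotonicity when the activations themselves are monotone.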
Explainability
Understand the decision-making process of your network by using explainability techniques. Methods such as the detector randomized input sampling for explanation (D-RISE) algorithm compute saliency maps for object detectors, visualizing the regions of the input that most influence the network’s predictions.
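A minimal sketch using the drise function from Computer Vision Toolbox, assuming a pretrained YOLO v4 detector (which requires its support package) and a test image `I`; the score map layout per detection is an assumption here:

```matlab
% Load a pretrained object detector.
detector = yolov4ObjectDetector("csp-darknet53-coco");

% D-RISE randomly masks the input and scores how each region affects
% the detections, producing a saliency map per detected object.
scoreMap = drise(detector, I);

% Overlay the saliency map for the first detection on the image.
figure
imshow(I)
hold on
imagesc(scoreMap(:,:,1), "AlphaData", 0.5)
colormap jet
```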