Quantization Workflow Prerequisites
This page describes the products required to quantize, simulate, and deploy deep learning networks using the Deep Learning Toolbox Model Quantization Library. The products you need depend on your selections at each stage of the quantization workflow.
Prerequisites for All Quantization Workflows
The following requirements apply to all stages of the quantization workflow.
Deep Learning Toolbox™
Supported Networks and Layers
The following links describe the networks and layers supported for each execution environment.
GPU — Supported Networks, Layers, and Classes (GPU Coder)
FPGA — Supported Networks, Layers, Boards, and Tools (Deep Learning HDL Toolbox)
CPU — Networks and Layers Supported for Code Generation (MATLAB Coder)
MATLAB — Networks and Layers Supported for Code Generation (MATLAB Coder)
When the Execution Environment is set to MATLAB, only the layers for the Intel MKL-DNN deep learning library are supported.
Prerequisites for Calibration
The prerequisites for calibration depend on your selection of calibration environment.
Calibrate on host GPU (default) —
Parallel Computing Toolbox™
GPU Coder™ Interface for Deep Learning
CUDA®-enabled NVIDIA® GPU with compute capability 3.2 or higher
Calibrate on host CPU —
MATLAB® Coder™ Interface for Deep Learning
On Windows®, the MinGW C/C++ compiler is not supported. Use Microsoft Visual C++ 2019, Microsoft Visual C++ 2017, or Microsoft Visual C++ 2015.
On Linux®, use a GCC C/C++ compiler.
For a list of supported compilers, see Supported and Compatible Compilers.
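Before calibrating, you can sanity-check that the environment meets the requirements above. This sketch is not part of the documented workflow; it assumes Parallel Computing Toolbox is installed for the GPU query:

```matlab
% Optional environment checks before calibration.
gpu = gpuDevice;   % errors if no supported CUDA-enabled NVIDIA GPU is present
fprintf('GPU: %s (compute capability %s)\n', gpu.Name, gpu.ComputeCapability);

% For host-CPU calibration on Windows, confirm a supported C++ compiler
% (not MinGW) is selected.
mex -setup C++
```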
Prerequisites for Quantization
To quantize your network for simulation in MATLAB using the quantize function or the Export > Export Quantized Network option in the Deep Network Quantizer app, no additional prerequisites are required.
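A minimal sketch of quantizing a network for simulation in MATLAB, assuming `net` is a supported pretrained network and `calData` is a datastore of representative calibration inputs (both are placeholders, not defined on this page):

```matlab
% Create a quantizer object targeting simulation in MATLAB.
quantObj = dlquantizer(net, 'ExecutionEnvironment', 'MATLAB');

% Calibrate: exercise the network on representative data to collect
% the dynamic ranges of weights, biases, and activations.
calResults = calibrate(quantObj, calData);

% Produce the quantized network for simulation.
qNet = quantize(quantObj);
```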
Prerequisites for Validation
The following are required to validate your quantized network for deployment using the validate function or the Quantize and Validate button in the Deep Network Quantizer app.
|Execution Environment|Prerequisites for Validation|
For the FPGA execution environment, you can choose to validate your quantized network using simulation by setting the Simulate property to 'on'. This option requires only Deep Learning HDL Toolbox.
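A minimal sketch of FPGA validation by simulation, assuming `quantObj` is a calibrated dlquantizer object configured for the FPGA execution environment and `valData` is a validation datastore; the Simulate property name follows the note above, and where it is set is an assumption:

```matlab
% Assumption: Simulate is set on the quantization options object.
quantOpts = dlquantizationOptions;
quantOpts.Simulate = 'on';   % simulate rather than deploy to an FPGA board

% Validate the quantized network against the validation data.
valResults = validate(quantObj, valData, quantOpts);
```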
For CPU and GPU deployment, the software generates code for a convolutional deep neural network by quantizing the weights, biases, and activations of the convolution layers to 8-bit scaled integer data types. The quantization is performed by passing the calibration result file produced during calibration to the codegen (MATLAB Coder) command.
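A minimal sketch of passing the calibration result to codegen for GPU deployment, assuming `quantObj` is a calibrated dlquantizer object and `predict_int8` is a hypothetical entry-point function that loads and runs the network on a 224-by-224-by-3 single-precision image:

```matlab
% Save the calibration result so codegen can read it.
save('quantObj.mat', 'quantObj');

% Configure INT8 code generation with the cuDNN deep learning library.
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
cfg.DeepLearningConfig.DataType = 'int8';
cfg.DeepLearningConfig.CalibrationResultFile = 'quantObj.mat';

% Generate code for the entry-point function.
codegen -config cfg predict_int8 -args {ones(224,224,3,'single')}
```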
Code generation does not support quantized deep neural networks produced by the quantize function.