dlquantizer
Quantize a deep neural network to 8-bit scaled integer data types
Description
Use the dlquantizer object to reduce the memory requirement of a deep neural network by quantizing weights, biases, and activations to 8-bit scaled integer data types. You can create and verify the behavior of a quantized network for GPU, FPGA, or CPU deployment, or explore the quantized network in MATLAB®.
For CPU and GPU deployment, the software generates code for a convolutional deep neural network by quantizing the weights, biases, and activations of the convolution layers to 8-bit scaled integer data types. The quantization is performed by providing the calibration result file produced by the calibrate function to the codegen (MATLAB Coder) command.
Code generation does not support quantized deep neural networks produced by the quantize function.
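For example, a minimal sketch of the GPU flow looks like the following. Here, mynet_predict is a hypothetical entry-point function that loads the network and calls predict, and the 227-by-227-by-3 input size is an assumption; see Generate INT8 Code for Deep Learning Network on Raspberry Pi (MATLAB Coder) and the GPU Coder documentation for the complete workflows.
% Minimal sketch, assuming a pretrained network net, a calibration
% datastore calData, and a hypothetical entry-point function mynet_predict.
quantObj = dlquantizer(net);                 % create the quantizer
calResults = calibrate(quantObj,calData);    % collect dynamic ranges
save('quantObj.mat','quantObj');             % save the calibration result

cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
cfg.DeepLearningConfig.DataType = 'int8';
cfg.DeepLearningConfig.CalibrationResultFile = 'quantObj.mat';
codegen -config cfg mynet_predict -args {ones(227,227,3,'single')}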
This object requires Deep Learning Toolbox Model Quantization Library. To learn about the products required to quantize a deep neural network, see Quantization Workflow Prerequisites.
Creation
Syntax
quantObj = dlquantizer(net)
quantObj = dlquantizer(net,Name,Value)
Description
quantObj = dlquantizer(net) creates a dlquantizer object for the specified deep neural network, net. Use name-value arguments to set the ExecutionEnvironment and Simulation properties.
Input Arguments
net — Pretrained neural network
DAGNetwork object | dlnetwork object | SeriesNetwork object | yolov2ObjectDetector object | ssdObjectDetector object
Pretrained neural network, specified as a DAGNetwork, dlnetwork, SeriesNetwork, yolov2ObjectDetector (Computer Vision Toolbox), or ssdObjectDetector (Computer Vision Toolbox) object.
Quantization of yolov2ObjectDetector (Computer Vision Toolbox) and ssdObjectDetector (Computer Vision Toolbox) networks requires a GPU Coder™ license.
Properties
NetworkObject — Pretrained neural network
DAGNetwork object | dlnetwork object | SeriesNetwork object | yolov2ObjectDetector object | ssdObjectDetector object
This property is read-only.
Pretrained neural network, specified as a DAGNetwork, dlnetwork, SeriesNetwork, yolov2ObjectDetector (Computer Vision Toolbox), or ssdObjectDetector (Computer Vision Toolbox) object.
ExecutionEnvironment — Execution environment
'GPU' (default) | 'FPGA' | 'CPU' | 'MATLAB'
Execution environment for the quantized network, specified as 'GPU', 'FPGA', 'CPU', or 'MATLAB'. How the network is quantized depends on the choice of execution environment.
The 'MATLAB' execution environment indicates that a target-agnostic quantization of the neural network will be performed. This option does not require you to have target hardware in order to explore the quantized network in MATLAB.
Example: 'ExecutionEnvironment','FPGA'
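For instance, this sketch creates one quantizer for FPGA deployment and another for target-agnostic exploration in MATLAB (net is any supported pretrained network):
quantObjFPGA = dlquantizer(net,'ExecutionEnvironment','FPGA');   % quantize for FPGA deployment
quantObjSim = dlquantizer(net,'ExecutionEnvironment','MATLAB');  % target-agnostic quantization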
Simulation — Validate behavior of network quantized for FPGA environment using simulation
'off' (default) | 'on'
Whether to validate the behavior of the network quantized for FPGA using simulation, specified as one of these values:
'on' — Validate the behavior of the quantized network by simulating the quantized network in MATLAB and comparing the prediction results of the original single-precision network to the simulated prediction results of the quantized network.
'off' — Generate code and validate the behavior of the quantized network on the target hardware.
Note
This option is only valid when ExecutionEnvironment is set to 'FPGA'.
Note
Alternatively, you can use the quantize method to create a simulatable quantized network. The simulatable quantized network enables visibility of the quantized layers, weights, and biases of the network, as well as simulatable quantized inference behavior.
Example: 'Simulation','on'
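For example, a sketch of FPGA validation by simulation, which does not require connected target hardware (net, calData, and valData follow the patterns used in the examples below):
quantObj = dlquantizer(net,'ExecutionEnvironment','FPGA','Simulation','on');
calResults = calibrate(quantObj,calData);   % collect dynamic ranges
valResults = validate(quantObj,valData);    % simulate quantized inference in MATLAB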
Object Functions
calibrate — Simulate and collect ranges of a deep neural network
validate — Quantize and validate a deep neural network
quantize — Create a simulatable quantized deep neural network
Examples
Quantize a Neural Network for GPU Target
This example shows how to quantize learnable parameters in the convolution layers of a neural network for a GPU target and explore the behavior of the quantized network. In this example, you quantize the squeezenet neural network after retraining the network to classify new images according to the Train Deep Learning Network to Classify New Images example. In this example, quantization reduces the memory required for the network by approximately 75%, while the accuracy of the network is not affected.
Load the pretrained network. net is the output network of the Train Deep Learning Network to Classify New Images example.
load squeezenetmerch
net
net = 
  DAGNetwork with properties:

         Layers: [68×1 nnet.cnn.layer.Layer]
    Connections: [75×2 table]
     InputNames: {'data'}
    OutputNames: {'new_classoutput'}
Define calibration and validation data to use for quantization.
The calibration data is used to collect the dynamic ranges of the weights and biases in the convolution and fully connected layers of the network and the dynamic ranges of the activations in all layers of the network. For the best quantization results, the calibration data must be representative of inputs to the network.
The validation data is used to test the network after quantization to understand the effects of the limited range and precision of the quantized convolution layers in the network.
In this example, use the images in the MerchData data set. Define an augmentedImageDatastore object to resize the data for the network. Then, split the data into calibration and validation data sets.
unzip('MerchData.zip');
imds = imageDatastore('MerchData', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
[calData, valData] = splitEachLabel(imds, 0.7, 'randomized');
aug_calData = augmentedImageDatastore([227 227], calData);
aug_valData = augmentedImageDatastore([227 227], valData);
Create a dlquantizer object and specify the network to quantize.
quantObj = dlquantizer(net);
Define a metric function to use to compare the behavior of the network before and after quantization. This example uses the hComputeModelAccuracy metric function.
function accuracy = hComputeModelAccuracy(predictionScores, net, dataStore)
%% Computes model-level accuracy statistics

% Load ground truth
tmp = readall(dataStore);
groundTruth = tmp.response;

% Compare the predicted label with the actual ground truth
predictionError = {};
for idx=1:numel(groundTruth)
    [~, idy] = max(predictionScores(idx,:));
    yActual = net.Layers(end).Classes(idy);
    predictionError{end+1} = (yActual == groundTruth(idx)); %#ok
end

% Sum all prediction errors.
predictionError = [predictionError{:}];
accuracy = sum(predictionError)/numel(predictionError);
end
Specify the metric function in a dlquantizationOptions object.
quantOpts = dlquantizationOptions('MetricFcn',{@(x)hComputeModelAccuracy(x, net, aug_valData)});
Use the calibrate function to exercise the network with sample inputs and collect range information. The calibrate function exercises the network and collects the dynamic ranges of the weights and biases in the convolution and fully connected layers of the network and the dynamic ranges of the activations in all layers of the network. The function returns a table. Each row of the table contains range information for a learnable parameter of the optimized network.
calResults = calibrate(quantObj, aug_calData)
calResults = 121×5 table
Optimized Layer Name Network Layer Name Learnables / Activations MinValue MaxValue
____________________________ ____________________ ________________________ _________ ________
{'conv1_Weights' } {'conv1' } "Weights" -0.91985 0.88489
{'conv1_Bias' } {'conv1' } "Bias" -0.07925 0.26343
{'fire2-squeeze1x1_Weights'} {'fire2-squeeze1x1'} "Weights" -1.38 1.2477
{'fire2-squeeze1x1_Bias' } {'fire2-squeeze1x1'} "Bias" -0.11641 0.24273
{'fire2-expand1x1_Weights' } {'fire2-expand1x1' } "Weights" -0.7406 0.90982
{'fire2-expand1x1_Bias' } {'fire2-expand1x1' } "Bias" -0.060056 0.14602
{'fire2-expand3x3_Weights' } {'fire2-expand3x3' } "Weights" -0.74397 0.66905
{'fire2-expand3x3_Bias' } {'fire2-expand3x3' } "Bias" -0.051778 0.074239
{'fire3-squeeze1x1_Weights'} {'fire3-squeeze1x1'} "Weights" -0.7712 0.68917
{'fire3-squeeze1x1_Bias' } {'fire3-squeeze1x1'} "Bias" -0.10138 0.32675
{'fire3-expand1x1_Weights' } {'fire3-expand1x1' } "Weights" -0.72035 0.9743
{'fire3-expand1x1_Bias' } {'fire3-expand1x1' } "Bias" -0.067029 0.30425
{'fire3-expand3x3_Weights' } {'fire3-expand3x3' } "Weights" -0.61443 0.7741
{'fire3-expand3x3_Bias' } {'fire3-expand3x3' } "Bias" -0.053613 0.10329
{'fire4-squeeze1x1_Weights'} {'fire4-squeeze1x1'} "Weights" -0.7422 1.0877
{'fire4-squeeze1x1_Bias' } {'fire4-squeeze1x1'} "Bias" -0.10885 0.13881
⋮
Use the validate function to quantize the learnable parameters in the convolution layers of the network and exercise the network. The function uses the metric function defined in the dlquantizationOptions object to compare the results of the network before and after quantization.
valResults = validate(quantObj, aug_valData, quantOpts)
valResults = struct with fields:
NumSamples: 20
MetricResults: [1×1 struct]
Statistics: [2×2 table]
Examine the validation output to see the performance of the quantized network.
valResults.MetricResults.Result
ans=2×2 table
NetworkImplementation MetricOutput
_____________________ ____________
{'Floating-Point'} 1
{'Quantized' } 1
valResults.Statistics
ans=2×2 table
NetworkImplementation LearnableParameterMemory(bytes)
_____________________ _______________________________
{'Floating-Point'} 2.9003e+06
{'Quantized' } 7.3393e+05
In this example, quantization reduced the memory required for the network by approximately 75%. The accuracy of the network is not affected.
The weights, biases, and activations of the convolution layers of the network specified in the dlquantizer object now use scaled 8-bit integer data types.
Quantize a Neural Network for FPGA Target
This example shows how to quantize learnable parameters in the convolution layers of a neural network and explore the behavior of the quantized network. In this example, you quantize the logo recognition network (LogoNet). Quantization helps reduce the memory requirement of a deep neural network by quantizing the weights, biases, and activations of network layers to 8-bit scaled integer data types. Use MATLAB® to retrieve the prediction results from the target device.
This example uses the products listed under FPGA in Quantization Workflow Prerequisites.
Create a file in your current working directory called getLogoNetwork.m. Enter these lines into the file:
function net = getLogoNetwork()
data = getLogoData();
net = data.convnet;
end

function data = getLogoData()
if ~isfile('LogoNet.mat')
    url = 'https://www.mathworks.com/supportfiles/gpucoder/cnn_models/logo_detection/LogoNet.mat';
    websave('LogoNet.mat',url);
end
data = load('LogoNet.mat');
end
Load the pretrained network.
snet = getLogoNetwork();
snet = 
  SeriesNetwork with properties:

         Layers: [22×1 nnet.cnn.layer.Layer]
     InputNames: {'imageinput'}
    OutputNames: {'classoutput'}
Define calibration and validation data to use for quantization.
The calibration data is used to collect the dynamic ranges of the weights and biases in the convolution and fully connected layers of the network and the dynamic ranges of the activations in all layers of the network. For the best quantization results, the calibration data must be representative of inputs to the network.
The validation data is used to test the network after quantization to understand the effects of the limited range and precision of the quantized convolution layers in the network.
This example uses the images in the logos_dataset data set. Define an imageDatastore object, then split the data into calibration and validation data sets.
curDir = pwd;
newDir = fullfile(matlabroot,'examples','deeplearning_shared','data','logos_dataset.zip');
copyfile(newDir,curDir);
unzip('logos_dataset.zip');
imageData = imageDatastore(fullfile(curDir,'logos_dataset'),...
    'IncludeSubfolders',true,'FileExtensions','.JPG','LabelSource','foldernames');
[calibrationData,validationData] = splitEachLabel(imageData,0.5,'randomized');
Create a dlquantizer object and specify the network to quantize. Set the execution environment for the quantized network to FPGA.
dlQuantObj = dlquantizer(snet,'ExecutionEnvironment','FPGA');
Use the calibrate function to exercise the network with sample inputs and collect range information. The calibrate function exercises the network and collects the dynamic ranges of the weights and biases in the convolution and fully connected layers of the network and the dynamic ranges of the activations in all layers of the network. The function returns a table. Each row of the table contains range information for a learnable parameter of the optimized network.
dlQuantObj.calibrate(calibrationData)
ans = 
        Optimized Layer Name         Network Layer Name    Learnables / Activations     MinValue       MaxValue 
    ____________________________     __________________    ________________________    ___________    __________

    {'conv_1_Weights'          }      {'conv_1'    }        "Weights"                    -0.048978      0.039352
    {'conv_1_Bias'             }      {'conv_1'    }        "Bias"                         0.99996        1.0028
    {'conv_2_Weights'          }      {'conv_2'    }        "Weights"                    -0.055518      0.061901
    {'conv_2_Bias'             }      {'conv_2'    }        "Bias"                    -0.00061171        0.00227
    {'conv_3_Weights'          }      {'conv_3'    }        "Weights"                    -0.045942      0.046927
    {'conv_3_Bias'             }      {'conv_3'    }        "Bias"                     -0.0013998      0.0015218
    {'conv_4_Weights'          }      {'conv_4'    }        "Weights"                    -0.045967          0.051
    {'conv_4_Bias'             }      {'conv_4'    }        "Bias"                       -0.00164      0.0037892
    {'fc_1_Weights'            }      {'fc_1'      }        "Weights"                    -0.051394      0.054344
    {'fc_1_Bias'               }      {'fc_1'      }        "Bias"                    -0.00052319     0.00084454
    {'fc_2_Weights'            }      {'fc_2'      }        "Weights"                     -0.05016      0.051557
    {'fc_2_Bias'               }      {'fc_2'      }        "Bias"                     -0.0017564      0.0018502
    {'fc_3_Weights'            }      {'fc_3'      }        "Weights"                    -0.050706        0.04678
    {'fc_3_Bias'               }      {'fc_3'      }        "Bias"                       -0.02951       0.024855
    {'imageinput'              }      {'imageinput'}        "Activations"                        0            255
    {'imageinput_normalization'}      {'imageinput'}        "Activations"                  -139.34         198.72
Create a target object with a custom name for your target device and an interface to connect your target device to the host computer.
hTarget = dlhdl.Target('Intel','Interface','JTAG');
Define a metric function to use to compare the behavior of the network before and after quantization. Save this function in a local file.
function accuracy = hComputeModelAccuracy(predictionScores,net,dataStore)
%% hComputeModelAccuracy test helper function computes model-level accuracy statistics

% Copyright 2020 The MathWorks, Inc.

% Load ground truth
groundTruth = dataStore.Labels;

% Compare the predicted label with the ground truth
predictionError = {};
for idx=1:numel(groundTruth)
    [~, idy] = max(predictionScores(idx,:));
    yActual = net.Layers(end).Classes(idy);
    predictionError{end+1} = (yActual == groundTruth(idx)); %#ok
end

% Sum all prediction errors.
predictionError = [predictionError{:}];
accuracy = sum(predictionError)/numel(predictionError);
end
Specify the metric function and FPGA execution environment options in a dlquantizationOptions object.
options = dlquantizationOptions('MetricFcn', ...
    {@(x)hComputeModelAccuracy(x,snet,validationData)}, ...
    'Bitstream','arria10soc_int8', ...
    'Target',hTarget);
Compile and deploy the quantized network. Use the validate function to quantize the learnable parameters in the convolution layers of the network and exercise the network. This function uses the output of the compile function to program the FPGA board by using the programming file. It also downloads the network weights and biases. The deploy function checks for the Intel® Quartus® tool and the supported tool version. It then programs the FPGA device using the sof file, displays progress messages, and reports the time it takes to deploy the network. The validate function uses the metric function defined in the dlquantizationOptions object to compare the results of the network before and after quantization.
prediction = dlQuantObj.validate(validationData,options);
          offset_name          offset_address     allocated_space 
    _______________________    ______________    _________________

    "InputDataOffset"           "0x00000000"     "48.0 MB"        
    "OutputResultOffset"        "0x03000000"     "4.0 MB"         
    "SystemBufferOffset"        "0x03400000"     "60.0 MB"        
    "InstructionDataOffset"     "0x07000000"     "8.0 MB"         
    "ConvWeightDataOffset"      "0x07800000"     "8.0 MB"         
    "FCWeightDataOffset"        "0x08000000"     "12.0 MB"        
    "EndOffset"                 "0x08c00000"     "Total: 140.0 MB"

### Programming FPGA Bitstream using JTAG...
### Programming the FPGA bitstream has been completed successfully.
### Loading weights to Conv Processor.
### Conv Weights loaded. Current time is 16-Jul-2020 12:45:10
### Loading weights to FC Processor.
### FC Weights loaded. Current time is 16-Jul-2020 12:45:26
### Finished writing input activations.
### Running single input activations.

              Deep Learning Processor Profiler Performance Results

                   LastLayerLatency(cycles)   LastLayerLatency(seconds)   FramesNum   Total Latency   Frames/s
                         -------------               -------------        ---------     ---------    ---------
Network                      13570959                  0.09047                 30      380609145         11.8
    conv_module              12667786                  0.08445
        conv_1                3938907                  0.02626
        maxpool_1             1544560                  0.01030
        conv_2                2910954                  0.01941
        maxpool_2              577524                  0.00385
        conv_3                2552707                  0.01702
        maxpool_3              676542                  0.00451
        conv_4                 455434                  0.00304
        maxpool_4               11251                  0.00008
    fc_module                  903173                  0.00602
        fc_1                   536164                  0.00357
        fc_2                   342643                  0.00228
        fc_3                    24364                  0.00016
 * The clock frequency of the DL processor is: 150MHz

The profiler performance results are repeated for each batch of input activations, with nearly identical latencies of approximately 11.8 frames per second. For the final batches (FramesNum of 10), the software reports that FPGA bitstream programming and deep learning network programming are skipped because the same bitstream and network are already loaded on the target FPGA.
The weights, biases, and activations of the convolution layers of the network specified in the dlquantizer object now use scaled 8-bit integer data types.
Examine the MetricResults.Result field of the validation output to see the performance of the quantized network.
validateOut = prediction.MetricResults.Result
validateOut = 
    NetworkImplementation    MetricOutput
    _____________________    ____________

     {'Floating-Point'}         0.9875   
     {'Quantized'     }         0.9875   
Examine the QuantizedNetworkFPS field of the validation output to see the frames-per-second performance of the quantized network.
prediction.QuantizedNetworkFPS
ans = 11.8126
Import a dlquantizer Object into the Deep Network Quantizer App
This example shows you how to import a dlquantizer object from the base workspace into the Deep Network Quantizer app. This allows you to begin quantization of a deep neural network using the command line or the app, and resume your work later in the app.
Open the Deep Network Quantizer app.
deepNetworkQuantizer
In the app, click New and select Import dlquantizer object.
In the dialog, select the dlquantizer object to import from the base workspace. For this example, use quantObj, which you created in the earlier example Quantize a Neural Network for GPU Target.
The app imports any data contained in the dlquantizer object that was collected at the command line. This data can include the network to quantize, calibration data, validation data, and calibration statistics.
The app displays a table containing the calibration data contained in the imported dlquantizer object, quantObj. To the right of the table, the app displays histograms of the dynamic ranges of the parameters. The gray regions of the histograms indicate data that cannot be represented by the quantized representation. For more information on how to interpret these histograms, see Quantization of Deep Neural Networks.
Emulate Target Agnostic Quantized Network
This example shows how to create a target agnostic, simulatable quantized deep neural network in MATLAB.
Target agnostic quantization allows you to see the effect quantization has on your neural network without target hardware or target-specific quantization schemes. Creating a target agnostic quantized network is useful if you:
Do not have access to your target hardware.
Want to preview whether or not your network is suitable for quantization.
Want to find layers that are sensitive to quantization.
Quantized networks emulate quantized behavior for quantization-compatible layers. The network architecture, including layers and connections, is the same as in the original network, but the inference behavior uses limited-precision types. Once you have quantized your network, you can use the quantizationDetails function to retrieve details on what was quantized.
Load the pretrained network. net is a SqueezeNet network that has been retrained using transfer learning to classify images in the MerchData data set.
load squeezenetmerch
net
net = 
  DAGNetwork with properties:

         Layers: [68×1 nnet.cnn.layer.Layer]
    Connections: [75×2 table]
     InputNames: {'data'}
    OutputNames: {'new_classoutput'}
You can use the quantizationDetails function to see that the network is not quantized.
qDetailsOriginal = quantizationDetails(net)
qDetailsOriginal = struct with fields:
IsQuantized: 0
TargetLibrary: ""
QuantizedLayerNames: [0×0 string]
QuantizedLearnables: [0×3 table]
Unzip and load the MerchData images as an image datastore.
unzip('MerchData.zip')
imds = imageDatastore('MerchData', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
Define calibration and validation data to use for quantization. The output size of the images is changed for both calibration and validation data according to network requirements.
[calData,valData] = splitEachLabel(imds,0.7,'randomized');
augCalData = augmentedImageDatastore([227 227],calData);
augValData = augmentedImageDatastore([227 227],valData);
Create a dlquantizer object and specify the network to quantize. Set the execution environment to MATLAB. How the network is quantized depends on the execution environment. The MATLAB execution environment is agnostic to the target hardware and allows you to prototype quantized behavior.
quantObj = dlquantizer(net,'ExecutionEnvironment','MATLAB');
Use the calibrate function to exercise the network with sample inputs and collect range information. The calibrate function exercises the network and collects the dynamic ranges of the weights and biases in the convolution and fully connected layers of the network and the dynamic ranges of the activations in all layers of the network. The function returns a table. Each row of the table contains range information for a learnable parameter of the optimized network.
calResults = calibrate(quantObj,augCalData);
Use the quantize method to quantize the network object and return a simulatable quantized network.
qNet = quantize(quantObj)
qNet = 
  Quantized DAGNetwork with properties:

         Layers: [68×1 nnet.cnn.layer.Layer]
    Connections: [75×2 table]
     InputNames: {'data'}
    OutputNames: {'new_classoutput'}

  Use the quantizationDetails method to extract quantization details.
You can use the quantizationDetails function to see that the network is now quantized.
qDetailsQuantized = quantizationDetails(qNet)
qDetailsQuantized = struct with fields:
IsQuantized: 1
TargetLibrary: "none"
QuantizedLayerNames: [26×1 string]
QuantizedLearnables: [52×3 table]
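You can then drill into which layers and learnables were quantized. For example, this sketch uses standard table and string indexing on the returned structure:
qDetailsQuantized.QuantizedLayerNames(1:3)    % first few quantized layer names
head(qDetailsQuantized.QuantizedLearnables)   % quantized learnables, one row per parameter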
Make predictions using the original single-precision floating-point network and the quantized INT8 network.
predOriginal = classify(net,augValData);    % Predictions for the non-quantized network
predQuantized = classify(qNet,augValData);  % Predictions for the quantized network
Compute the relative accuracy of the quantized network as compared to the original network.
ccrQuantized = mean(predQuantized == valData.Labels)*100
ccrQuantized = 100
ccrOriginal = mean(predOriginal == valData.Labels)*100
ccrOriginal = 100
For this validation data set, the quantized network gives the same predictions as the floating-point network.
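To estimate memory and other per-layer metrics without running predictions, you can also try the estimateNetworkMetrics function listed under See Also; a minimal sketch, assuming the function accepts the network directly:
metricsOriginal = estimateNetworkMetrics(net)   % estimated per-layer metrics for the original network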
Version History
Introduced in R2020a

R2022b: dlnetwork support
dlquantizer now supports dlnetwork objects for quantization using the calibrate and validate functions.
R2022a: Validate the performance of a quantized network for a CPU target
You can now use the dlquantizer object and the validate function to quantize a network and generate code for CPU targets.
R2022a: Quantize neural networks without a specific target
Specify MATLAB as the ExecutionEnvironment to quantize your neural networks without generating code or committing to a specific target for code deployment. This can be useful if you:
Do not have access to your target hardware.
Want to inspect your quantized network without generating code.
Your quantized network implements int8 data instead of single data. It keeps the same layers and connections as the original network, and it has the same inference behavior as it would when running on hardware. Once you have quantized your network, you can use the quantizationDetails function to inspect it. Additionally, you have the option to deploy the code to a GPU target.
See Also
Apps
Deep Network Quantizer
Functions
calibrate | quantize | validate | dlquantizationOptions | quantizationDetails | estimateNetworkMetrics
Topics
- Quantization of Deep Neural Networks
- Quantize Residual Network Trained for Image Classification and Generate CUDA Code
- Quantize Layers in Object Detectors and Generate CUDA Code
- Deploy INT8 Network to FPGA (Deep Learning HDL Toolbox)
- Generate INT8 Code for Deep Learning Network on Raspberry Pi (MATLAB Coder)
- Parameter Pruning and Quantization of Image Classification Network