How to export INT8 quantized weights of a deep neural network?

I trained a neural network using Deep Learning Toolbox and quantized it.
The code below is what I used to quantize the network model to INT8.
% Create a dlquantizer object for quantization
quantObj = dlquantizer(net);
% quantOpts = dlquantizationOptions(target='host');
calibrate(quantObj,imdsTrain);
% valResults = validate(quantObj, imdsValidation, quantOpts);
% valResults.Statistics
% Perform quantization
quantObj = quantize(quantObj);
qDetailsQuantized = quantizationDetails(quantObj)
% Save the quantized network
save('quantizedNet.mat', 'quantObj');
exportONNXNetwork(quantObj,'quantizedNet.onnx')
After quantization, I got the quantized network quantObj.
However, I cannot access the weights and biases that were converted to INT8 format.
When I display the quantized network's weights and biases using the code below,
>> disp(quantObj.Layers(2).Bias(:,:,1))
-6.9011793e-12
it still shows a floating-point value.
Even when I try to export the network as ONNX, MATLAB shows the warning below:
>> exportONNXNetwork(quantObj,'quantizedNet.onnx')
Warning: Exported weights are not quantized when exporting quantized networks.
How can I access the INT8 quantized weight and bias values?

Accepted Answer

Angelo Yeo on 30 May 2024
Use the quantizationDetails function to extract the quantization details.
Inspect the qDetailsQuantized struct that you already created with quantizationDetails; in particular, look at qDetailsQuantized.QuantizedLearnables.
The following example may be helpful.
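Something along these lines should work (a minimal sketch reusing the variable names from your script; I have not run it against your network):
% qDetailsQuantized is the struct returned by quantizationDetails above
qDetailsQuantized.QuantizedLearnables                 % table with Layer, Parameter, Value columns
% Each Value entry is a cell holding the integer data (int8 weights, int32 biases)
w = qDetailsQuantized.QuantizedLearnables.Value{1};   % contents of the first row's cell
class(w)                                              % 'int8'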
  3 Comments
Jisu Kwon on 30 May 2024
I found it, qDetailsQuantized.QuantizedLearnables was what I wanted.
It was already right there as a field of the quantizationDetails output.
>> qDetailsQuantized.QuantizedLearnables
ans =

  8×3 table

     Layer      Parameter           Value
    ________    _________    __________________

    "conv_1"    "Weights"    {3×3×1×60  int8 }
    "conv_1"    "Bias"       {1×1×60    int32}
    "conv_2"    "Weights"    {3×3×60×60 int8 }
    "conv_2"    "Bias"       {1×1×60    int32}
    "conv_3"    "Weights"    {3×3×60×56 int8 }
    "conv_3"    "Bias"       {1×1×56    int32}
    "conv_4"    "Weights"    {3×3×56×12 int8 }
    "conv_4"    "Bias"       {1×1×12    int32}
I can access the values like this.
>> conv_1_weight = qDetailsQuantized.QuantizedLearnables.Value(1)
conv_1_weight =
1×1 cell array
{3×3×1×60 int8}
>> conv_1_weight{:,:,:,1}
3×3×1×60 int8 array

ans(:,:,1,1) =

   18   -16   -50
   -6   -54   -10
  -37   -49   -18
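In case it helps anyone else trying to export the raw INT8 values, a rough loop like this should dump every quantized learnable to its own binary file (untested sketch; the file naming is just my own convention):
% Write each quantized learnable to a raw binary file.
% Note: fwrite stores the data in MATLAB's column-major order.
T = qDetailsQuantized.QuantizedLearnables;
for k = 1:height(T)
    data  = T.Value{k};                                   % int8 weights or int32 biases
    fname = sprintf('%s_%s.bin', T.Layer(k), T.Parameter(k));
    fid   = fopen(fname, 'w');
    fwrite(fid, data(:), class(data));                    % keep the native integer type
    fclose(fid);
end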
Thanks again for your response!
Angelo Yeo on 30 May 2024
Yes, exactly. Thanks for the feedback. It's great to know it worked for you.


