Create pixel classification layer using generalized Dice loss for semantic segmentation
A Dice pixel classification layer provides a categorical label for each image pixel or voxel using generalized Dice loss.
The layer uses generalized Dice loss to alleviate the problem of class imbalance in semantic segmentation problems. Generalized Dice loss controls the contribution that each class makes to the loss by weighting classes by the inverse size of the expected region.
layer = dicePixelClassificationLayer creates a Dice pixel classification output layer for semantic image segmentation networks. The layer outputs the categorical label for each image pixel or voxel processed by a CNN. The layer automatically ignores undefined pixel labels during training.

layer = dicePixelClassificationLayer(Name,Value) returns a Dice pixel classification output layer using Name,Value pair arguments to set the optional Classes and Name properties. You can specify multiple name-value pairs. Enclose each property name in quotes. For example, dicePixelClassificationLayer('Name','pixclass') creates a Dice pixel classification layer with the name 'pixclass'.
Classes — Classes of the output layer
'auto' (default) | categorical vector | string array | cell array of character vectors

Classes of the output layer, specified as a categorical vector, string array, cell array of character vectors, or 'auto'. If Classes is 'auto', then the software automatically sets the classes at training time. If you specify a string array or cell array of character vectors str, then the software sets the classes of the output layer to categorical(str,str). The default value is 'auto'.
OutputSize — Output size

This property is read-only.

Output size of the layer. The value is 'auto' prior to training, and is set to a numeric value at training time.
LossFunction — Loss function

This property is read-only.

Loss function used for training, specified as 'dice'.
NumInputs — Number of inputs

Number of inputs of the layer. This layer accepts a single input only.

InputNames — Input names

Input names of the layer. This layer accepts a single input only.
Predict the categorical label of every pixel in an input image using a generalized Dice loss function.
layers = [
    imageInputLayer([480 640 3])
    convolution2dLayer(3,16,'Stride',2,'Padding',1)
    reluLayer
    transposedConv2dLayer(2,4,'Stride',2)
    softmaxLayer
    dicePixelClassificationLayer
    ]
layers = 
  6x1 Layer array with layers:

     1   ''   Image Input                       480x640x3 images with 'zerocenter' normalization
     2   ''   Convolution                       16 3x3 convolutions with stride [2 2] and padding [1 1 1 1]
     3   ''   ReLU                              ReLU
     4   ''   Transposed Convolution            4 2x2 transposed convolutions with stride [2 2] and cropping [0 0 0 0]
     5   ''   Softmax                           softmax
     6   ''   Dice Pixel Classification Layer   Generalized Dice loss
The Dice loss function is based on the Sørensen-Dice similarity coefficient for measuring overlap between two segmented images.
The generalized Dice loss function L used by dicePixelClassificationLayer for the loss between one image Y and the corresponding ground truth T is given by

L = 1 - \frac{2\sum_{k=1}^{K} w_k \sum_{m=1}^{M} Y_{km} T_{km}}{\sum_{k=1}^{K} w_k \sum_{m=1}^{M} \left(Y_{km}^{2} + T_{km}^{2}\right)},

where K is the number of classes, M is the number of elements along the first two dimensions of Y, and w_k is a class-specific weighting factor that controls the contribution each class makes to the loss. This weighting helps counter the influence of larger regions on the Dice score, making it easier for the network to learn how to segment smaller regions. w_k is typically the inverse area of the expected region:

w_k = \frac{1}{\left(\sum_{m=1}^{M} T_{km}\right)^{2}}
There are several variations of the generalized Dice loss function [1], [2]. The function used in dicePixelClassificationLayer has squared terms in the denominator to ensure that the derivative is 0 when the prediction matches the ground truth [3].
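The loss above can be sketched numerically. The following NumPy illustration mirrors the formula for a single image; the function name, the (M, K) array layout, and the eps stabilizer are assumptions made for this sketch, not part of the MATLAB layer:

```python
import numpy as np

def generalized_dice_loss(Y, T, eps=1e-8):
    """Generalized Dice loss for one image (illustrative sketch).

    Y : (M, K) array of softmax probabilities (M pixels, K classes).
    T : (M, K) one-hot ground-truth array.
    """
    # w_k = 1 / (sum_m T_km)^2 -- inverse squared area of each
    # ground-truth region; eps guards against empty classes.
    w = 1.0 / (T.sum(axis=0) ** 2 + eps)

    # Numerator: weighted overlap between prediction and ground truth.
    intersection = (w * (Y * T).sum(axis=0)).sum()

    # Denominator: weighted squared terms, as in the V-Net variant.
    union = (w * (Y ** 2 + T ** 2).sum(axis=0)).sum()

    return 1.0 - 2.0 * intersection / (union + eps)
```

A perfect prediction (Y equal to the one-hot T) drives the loss to approximately 0, while a completely disjoint prediction gives a loss near 1, the maximum.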
[1] Crum, William R., Oscar Camara, and Derek L. G. Hill. "Generalized overlap measures for evaluation and validation in medical image analysis." IEEE Transactions on Medical Imaging. 25.11, 2006, pp. 1451–1461.
[2] Sudre, Carole H., et al. "Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations." Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, Cham, 2017, pp. 240–248.
[3] Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation." Fourth International Conference on 3D Vision (3DV). Stanford, CA, 2016, pp. 565–571.
To generate CUDA® or C++ code by using GPU Coder™, you must first construct and train a deep neural network. Once the network is trained and evaluated, you can configure the code generator to generate code and deploy the convolutional neural network on platforms that use NVIDIA® or ARM® GPU processors. For more information, see Deep Learning with GPU Coder (GPU Coder).
For this layer, you can generate code that takes advantage of the NVIDIA CUDA deep neural network library (cuDNN), the NVIDIA TensorRT™ high performance inference library, or the ARM Compute Library for Mali GPU.