# trainingOptions

Options for training deep learning neural network

## Description

`options = trainingOptions(solverName)` returns training options for the optimizer specified by `solverName`. To train a network, use the training options as an input argument to the `trainNetwork` function.

`options = trainingOptions(solverName,Name=Value)` returns training options with additional options specified by one or more name-value arguments.

## Examples

### Specify Training Options

Create a set of options for training a network using stochastic gradient descent with momentum. Reduce the learning rate by a factor of 0.2 every 5 epochs. Set the maximum number of epochs for training to 20, and use a mini-batch with 64 observations at each iteration. Turn on the training progress plot.

```matlab
options = trainingOptions("sgdm", ...
    LearnRateSchedule="piecewise", ...
    LearnRateDropFactor=0.2, ...
    LearnRateDropPeriod=5, ...
    MaxEpochs=20, ...
    MiniBatchSize=64, ...
    Plots="training-progress")
```

```matlab
options = 
  TrainingOptionsSGDM with properties:

    Momentum: 0.9000
    InitialLearnRate: 0.0100
    LearnRateSchedule: 'piecewise'
    LearnRateDropFactor: 0.2000
    LearnRateDropPeriod: 5
    L2Regularization: 1.0000e-04
    GradientThresholdMethod: 'l2norm'
    GradientThreshold: Inf
    MaxEpochs: 20
    MiniBatchSize: 64
    Verbose: 1
    VerboseFrequency: 50
    ValidationData: []
    ValidationFrequency: 50
    ValidationPatience: Inf
    Shuffle: 'once'
    CheckpointPath: ''
    CheckpointFrequency: 1
    CheckpointFrequencyUnit: 'epoch'
    ExecutionEnvironment: 'auto'
    WorkerLoad: []
    OutputFcn: []
    Plots: 'training-progress'
    SequenceLength: 'longest'
    SequencePaddingValue: 0
    SequencePaddingDirection: 'right'
    DispatchInBackground: 0
    ResetInputNormalization: 1
    BatchNormalizationStatistics: 'population'
    OutputNetwork: 'last-iteration'
```

### Monitor Deep Learning Training Progress

This example shows how to monitor the training process of deep learning networks.

When you train networks for deep learning, it is often useful to monitor the training progress. By plotting various metrics during training, you can learn how the training is progressing. For example, you can determine if and how quickly the network accuracy is improving, and whether the network is starting to overfit the training data.

When you set the `Plots` training option to `"training-progress"` in `trainingOptions` and start network training, `trainNetwork` creates a figure and displays training metrics at every iteration. Each iteration is an estimation of the gradient and an update of the network parameters. If you specify validation data in `trainingOptions`, then the figure shows validation metrics each time `trainNetwork` validates the network. The figure plots the following:

- **Training accuracy** — Classification accuracy on each individual mini-batch.
- **Smoothed training accuracy** — Smoothed training accuracy, obtained by applying a smoothing algorithm to the training accuracy. It is less noisy than the unsmoothed accuracy, making it easier to spot trends.
- **Validation accuracy** — Classification accuracy on the entire validation set (specified using `trainingOptions`).
- **Training loss**, **smoothed training loss**, and **validation loss** — The loss on each mini-batch, its smoothed version, and the loss on the validation set, respectively. If the final layer of the network is a `classificationLayer`, then the loss function is the cross entropy loss. For more information about loss functions for classification and regression problems, see Output Layers.

For regression networks, the figure plots the root mean square error (RMSE) instead of the accuracy.

The figure marks each training **Epoch** using a shaded background. An epoch is a full pass through the entire data set.

During training, you can stop training and return the current state of the network by clicking the stop button in the top-right corner. For example, you might want to stop training when the accuracy of the network reaches a plateau and it is clear that the accuracy is no longer improving. After you click the stop button, it can take a while for the training to complete. Once training is complete, `trainNetwork` returns the trained network.

When training finishes, view the **Results** showing the finalized validation accuracy and the reason that training finished. If the `OutputNetwork` training option is `"last-iteration"` (default), the finalized metrics correspond to the last training iteration. If the `OutputNetwork` training option is `"best-validation-loss"`, the finalized metrics correspond to the iteration with the lowest validation loss. The iteration from which the final validation metrics are calculated is labeled **Final** in the plots.

If your network contains batch normalization layers, then the final validation metrics can differ from the validation metrics evaluated during training. This is because the mean and variance statistics used for batch normalization can be different after training completes. For example, if the `BatchNormalizationStatistics` training option is `"population"`, then after training, the software finalizes the batch normalization statistics by passing through the training data once more and uses the resulting mean and variance. If the `BatchNormalizationStatistics` training option is `"moving"`, then the software approximates the statistics during training using a running estimate and uses the latest values of the statistics.

On the right, view information about the training time and settings. To learn more about training options, see Set Up Parameters and Train Convolutional Neural Network.

To save the training progress plot, click **Export Training Plot** in the training window. You can save the plot as a PNG, JPEG, TIFF, or PDF file. You can also save the individual plots of loss, accuracy, and root mean squared error using the axes toolbar.

**Plot Training Progress During Training**

Train a network and plot the training progress during training.

Load the training data, which contains 5000 images of digits. Set aside 1000 of the images for network validation.

```matlab
[XTrain,YTrain] = digitTrain4DArrayData;

idx = randperm(size(XTrain,4),1000);
XValidation = XTrain(:,:,:,idx);
XTrain(:,:,:,idx) = [];
YValidation = YTrain(idx);
YTrain(idx) = [];
```

Construct a network to classify the digit image data.

```matlab
layers = [
    imageInputLayer([28 28 1])

    convolution2dLayer(3,8,Padding="same")
    batchNormalizationLayer
    reluLayer

    maxPooling2dLayer(2,Stride=2)

    convolution2dLayer(3,16,Padding="same")
    batchNormalizationLayer
    reluLayer

    maxPooling2dLayer(2,Stride=2)

    convolution2dLayer(3,32,Padding="same")
    batchNormalizationLayer
    reluLayer

    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
```

Specify options for network training. To validate the network at regular intervals during training, specify validation data. Choose the `ValidationFrequency` value so that the network is validated about once per epoch. To plot training progress during training, set the `Plots` training option to `"training-progress"`.

```matlab
options = trainingOptions("sgdm", ...
    MaxEpochs=8, ...
    ValidationData={XValidation,YValidation}, ...
    ValidationFrequency=30, ...
    Verbose=false, ...
    Plots="training-progress");
```

Train the network.

```matlab
net = trainNetwork(XTrain,YTrain,layers,options);
```

## Input Arguments

`solverName` — Solver for training network
`'sgdm'` | `'rmsprop'` | `'adam'`

Solver for training network, specified as one of the following:

- `'sgdm'` — Use the stochastic gradient descent with momentum (SGDM) optimizer. You can specify the momentum value using the `Momentum` training option.
- `'rmsprop'` — Use the RMSProp optimizer. You can specify the decay rate of the squared gradient moving average using the `SquaredGradientDecayFactor` training option.
- `'adam'` — Use the Adam optimizer. You can specify the decay rates of the gradient and squared gradient moving averages using the `GradientDecayFactor` and `SquaredGradientDecayFactor` training options, respectively.

For more information about the different solvers, see Stochastic Gradient Descent.

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

*Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.*

**Example: **`InitialLearnRate=0.03,L2Regularization=0.0005,LearnRateSchedule="piecewise"` specifies the initial learning rate as 0.03 and the L_{2} regularization factor as 0.0005, and instructs the software to drop the learning rate every given number of epochs by multiplying with a certain factor.

**Plots and Display**

`Plots` — Plots to display during network training
`'none'` | `'training-progress'`

Plots to display during network training, specified as one of the following:

- `'none'` — Do not display plots during training.
- `'training-progress'` — Plot training progress. The plot shows mini-batch loss and accuracy, validation loss and accuracy, and additional information on the training progress. The plot has a stop button in the top-right corner. Click the button to stop training and return the current state of the network. You can save the training plot as an image or PDF by clicking **Export Training Plot**. For more information on the training progress plot, see Monitor Deep Learning Training Progress.

`Verbose` — Indicator to display training progress information
`1` (true) (default) | `0` (false)

Indicator to display training progress information in the command window, specified as `1` (true) or `0` (false).

The verbose output displays the following information:

**Classification Networks**

Field | Description |
---|---|
`Epoch` | Epoch number. An epoch corresponds to a full pass of the data. |
`Iteration` | Iteration number. An iteration corresponds to a mini-batch. |
`Time Elapsed` | Time elapsed in hours, minutes, and seconds. |
`Mini-batch Accuracy` | Classification accuracy on the mini-batch. |
`Validation Accuracy` | Classification accuracy on the validation data. If you do not specify validation data, then the function does not display this field. |
`Mini-batch Loss` | Loss on the mini-batch. If the output layer is a `ClassificationOutputLayer` object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes. |
`Validation Loss` | Loss on the validation data. If the output layer is a `ClassificationOutputLayer` object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes. If you do not specify validation data, then the function does not display this field. |
`Base Learning Rate` | Base learning rate. The software multiplies the learn rate factors of the layers by this value. |

**Regression Networks**

Field | Description |
---|---|
`Epoch` | Epoch number. An epoch corresponds to a full pass of the data. |
`Iteration` | Iteration number. An iteration corresponds to a mini-batch. |
`Time Elapsed` | Time elapsed in hours, minutes, and seconds. |
`Mini-batch RMSE` | Root-mean-squared-error (RMSE) on the mini-batch. |
`Validation RMSE` | RMSE on the validation data. If you do not specify validation data, then the software does not display this field. |
`Mini-batch Loss` | Loss on the mini-batch. If the output layer is a `RegressionOutputLayer` object, then the loss is the half-mean-squared-error. |
`Validation Loss` | Loss on the validation data. If the output layer is a `RegressionOutputLayer` object, then the loss is the half-mean-squared-error. If you do not specify validation data, then the software does not display this field. |
`Base Learning Rate` | Base learning rate. The software multiplies the learn rate factors of the layers by this value. |

When training stops, the verbose output displays the reason for stopping.

To specify validation data, use the `ValidationData` training option.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `logical`

`VerboseFrequency` — Frequency of verbose printing
`50` (default) | positive integer

Frequency of verbose printing, which is the number of iterations between printing to the command window, specified as a positive integer. This option only has an effect when the `Verbose` training option is `1` (true).

If you validate the network during training, then `trainNetwork` also prints to the command window every time validation occurs.
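
For example, a minimal sketch that prints a progress row every 100 iterations instead of the default 50:

```matlab
% Print verbose training progress every 100 iterations.
options = trainingOptions("sgdm", ...
    Verbose=true, ...
    VerboseFrequency=100);
```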

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

**Mini-Batch Options**

`MaxEpochs` — Maximum number of epochs
`30` (default) | positive integer

Maximum number of epochs to use for training, specified as a positive integer.

An iteration is one step taken in the gradient descent algorithm towards minimizing the loss function using a mini-batch. An epoch is the full pass of the training algorithm over the entire training set.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`MiniBatchSize` — Size of mini-batch
`128` (default) | positive integer

Size of the mini-batch to use for each training iteration, specified as a positive integer. A mini-batch is a subset of the training set that is used to evaluate the gradient of the loss function and update the weights.

If the mini-batch size does not evenly divide the number of training samples, then `trainNetwork` discards the training data that does not fit into the final complete mini-batch of each epoch.
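
For example, the following sketch (with hypothetical sizes) shows how many iterations one epoch contains and how many observations the software does not use in each epoch:

```matlab
% Hypothetical example: 5000 training observations and a mini-batch size of 128.
numObservations = 5000;
miniBatchSize = 128;

iterationsPerEpoch = floor(numObservations/miniBatchSize)  % 39 iterations
discardedPerEpoch = mod(numObservations,miniBatchSize)     % 8 observations discarded each epoch
```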

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`Shuffle` — Option for data shuffling
`'once'` | `'never'` | `'every-epoch'`

Option for data shuffling, specified as one of the following:

- `'once'` — Shuffle the training and validation data once before training.
- `'never'` — Do not shuffle the data.
- `'every-epoch'` — Shuffle the training data before each training epoch, and shuffle the validation data before each network validation. If the mini-batch size does not evenly divide the number of training samples, then `trainNetwork` discards the training data that does not fit into the final complete mini-batch of each epoch. To avoid discarding the same data every epoch, set the `Shuffle` training option to `'every-epoch'`.

**Validation**

`ValidationData` — Data to use for validation during training
`[]` (default) | datastore | table | cell array

Data to use for validation during training, specified as `[]`, a datastore, a table, or a cell array containing the validation predictors and responses.

You can specify validation predictors and responses using the same formats supported by the `trainNetwork` function. You can specify the validation data as a datastore, table, or the cell array `{predictors,responses}`, where `predictors` contains the validation predictors and `responses` contains the validation responses.

For more information, see the `images`, `sequences`, and `features` input arguments of the `trainNetwork` function.

During training, `trainNetwork` calculates the validation accuracy and validation loss on the validation data. To specify the validation frequency, use the `ValidationFrequency` training option. You can also use the validation data to stop training automatically when the validation loss stops decreasing. To turn on automatic validation stopping, use the `ValidationPatience` training option.

If your network has layers that behave differently during prediction than during training (for example, dropout layers), then the validation accuracy can be higher than the training (mini-batch) accuracy.

The validation data is shuffled according to the `Shuffle` training option. If `Shuffle` is `'every-epoch'`, then the validation data is shuffled before each network validation.

If `ValidationData` is `[]`, then the software does not validate the network during training.

`ValidationFrequency` — Frequency of network validation
`50` (default) | positive integer

Frequency of network validation in number of iterations, specified as a positive integer.

The `ValidationFrequency` value is the number of iterations between evaluations of validation metrics. To specify validation data, use the `ValidationData` training option.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`ValidationPatience` — Patience of validation stopping
`Inf` (default) | positive integer

Patience of validation stopping of network training, specified as a positive integer or `Inf`.

`ValidationPatience` specifies the number of times that the loss on the validation set can be larger than or equal to the previously smallest loss before network training stops. If `ValidationPatience` is `Inf`, then the values of the validation loss do not cause training to stop early.

The returned network depends on the `OutputNetwork` training option. To return the network with the lowest validation loss, set the `OutputNetwork` training option to `"best-validation-loss"`.
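
For example, a minimal sketch that stops training after the validation loss fails to improve five consecutive times and returns the best network (`XValidation` and `YValidation` are hypothetical in-memory validation arrays):

```matlab
% Stop early after 5 non-improving validation evaluations and return the
% network with the lowest validation loss.
options = trainingOptions("adam", ...
    ValidationData={XValidation,YValidation}, ...
    ValidationPatience=5, ...
    OutputNetwork="best-validation-loss");
```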

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`OutputNetwork` — Network to return when training completes
`'last-iteration'` (default) | `'best-validation-loss'`

Network to return when training completes, specified as one of the following:

- `'last-iteration'` — Return the network corresponding to the last training iteration.
- `'best-validation-loss'` — Return the network corresponding to the training iteration with the lowest validation loss. To use this option, you must specify the `ValidationData` training option.

**Solver Options**

`InitialLearnRate` — Initial learning rate
positive scalar

Initial learning rate used for training, specified as a positive scalar.

The default value is `0.01` for the `'sgdm'` solver and `0.001` for the `'rmsprop'` and `'adam'` solvers.

If the learning rate is too low, then training can take a long time. If the learning rate is too high, then training might reach a suboptimal result or diverge.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`LearnRateSchedule` — Option for dropping learning rate during training
`'none'` (default) | `'piecewise'`

Option for dropping the learning rate during training, specified as one of the following:

- `'none'` — The learning rate remains constant throughout training.
- `'piecewise'` — The software updates the learning rate every certain number of epochs by multiplying with a certain factor. Use the `LearnRateDropFactor` training option to specify the value of this factor. Use the `LearnRateDropPeriod` training option to specify the number of epochs between multiplications.

`LearnRateDropPeriod` — Number of epochs for dropping the learning rate
`10` (default) | positive integer

Number of epochs for dropping the learning rate, specified as a positive integer. This option is valid only when the `LearnRateSchedule` training option is `'piecewise'`.

The software multiplies the global learning rate with the drop factor every time the specified number of epochs passes. Specify the drop factor using the `LearnRateDropFactor` training option.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`LearnRateDropFactor` — Factor for dropping the learning rate
`0.1` (default) | scalar from `0` to `1`

Factor for dropping the learning rate, specified as a scalar from `0` to `1`. This option is valid only when the `LearnRateSchedule` training option is `'piecewise'`.

`LearnRateDropFactor` is a multiplicative factor to apply to the learning rate every time a certain number of epochs passes. Specify the number of epochs using the `LearnRateDropPeriod` training option.
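
As a rough sketch of the resulting piecewise schedule (the exact epoch at which each drop takes effect is an assumption here), the learning rate used during a given epoch is the initial rate multiplied by the drop factor once per elapsed drop period:

```matlab
% Rough sketch of the piecewise schedule: drop the rate by LearnRateDropFactor
% every LearnRateDropPeriod epochs (epoch boundaries are an assumption).
initialLearnRate = 0.01;   % default InitialLearnRate for "sgdm"
dropFactor = 0.1;          % LearnRateDropFactor
dropPeriod = 10;           % LearnRateDropPeriod

epoch = 25;
learnRate = initialLearnRate * dropFactor^floor((epoch-1)/dropPeriod)  % 1.0000e-04
```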

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`L2Regularization` — Factor for L_{2} regularization
`0.0001` (default) | nonnegative scalar

Factor for L_{2} regularization (weight decay), specified as a nonnegative scalar. For more information, see L2 Regularization.

You can specify a multiplier for the L_{2} regularization for network layers with learnable parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`Momentum` — Contribution of previous step
`0.9` (default) | scalar from `0` to `1`

Contribution of the parameter update step of the previous iteration to the current iteration of stochastic gradient descent with momentum, specified as a scalar from `0` to `1`.

A value of `0` means no contribution from the previous step, whereas a value of `1` means maximal contribution from the previous step. The default value works well for most tasks.

To specify the `Momentum` training option, `solverName` must be `'sgdm'`.

For more information, see Stochastic Gradient Descent with Momentum.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`GradientDecayFactor` — Decay rate of gradient moving average
`0.9` (default) | nonnegative scalar less than `1`

Decay rate of gradient moving average for the Adam solver, specified as a nonnegative scalar less than `1`. The gradient decay rate is denoted by *β_{1}* in the Adam section.

To specify the `GradientDecayFactor` training option, `solverName` must be `'adam'`.

The default value works well for most tasks.

For more information, see Adam.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`SquaredGradientDecayFactor` — Decay rate of squared gradient moving average
nonnegative scalar less than `1`

Decay rate of squared gradient moving average for the Adam and RMSProp solvers, specified as a nonnegative scalar less than `1`. The squared gradient decay rate is denoted by *β_{2}* in [4].

To specify the `SquaredGradientDecayFactor` training option, `solverName` must be `'adam'` or `'rmsprop'`.

Typical values of the decay rate are `0.9`, `0.99`, and `0.999`, corresponding to averaging lengths of `10`, `100`, and `1000` parameter updates, respectively.

The default value is `0.999` for the Adam solver. The default value is `0.9` for the RMSProp solver.

For more information, see Adam and RMSProp.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`Epsilon` — Denominator offset
`1e-8` (default) | positive scalar

Denominator offset for Adam and RMSProp solvers, specified as a positive scalar.

The solver adds the offset to the denominator in the network parameter updates to avoid division by zero. The default value works well for most tasks.

To specify the `Epsilon` training option, `solverName` must be `'adam'` or `'rmsprop'`.

For more information, see Adam and RMSProp.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`ResetInputNormalization` — Option to reset input layer normalization
`1` (true) (default) | `0` (false)

Option to reset input layer normalization, specified as one of the following:

- `1` (true) — Reset the input layer normalization statistics and recalculate them at training time.
- `0` (false) — Calculate normalization statistics at training time when they are empty.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `logical`

`BatchNormalizationStatistics` — Mode to evaluate statistics in batch normalization layers
`'population'` (default) | `'moving'`

Mode to evaluate the statistics in batch normalization layers, specified as one of the following:

- `'population'` — Use the population statistics. After training, the software finalizes the statistics by passing through the training data once more and uses the resulting mean and variance.
- `'moving'` — Approximate the statistics during training using a running estimate given by the update steps

$$\begin{array}{l}{\mu}^{*}={\lambda}_{\mu}\widehat{\mu}+(1-{\lambda}_{\mu})\mu \\ {{\sigma}^{2}}^{*}={\lambda}_{{\sigma}^{2}}\widehat{{\sigma}^{2}}+(1-{\lambda}_{{\sigma}^{2}}){\sigma}^{2},\end{array}$$

where $${\mu}^{*}$$ and $${{\sigma}^{2}}^{*}$$ denote the updated mean and variance, respectively, $${\lambda}_{\mu}$$ and $${\lambda}_{{\sigma}^{2}}$$ denote the mean and variance decay values, respectively, $$\widehat{\mu}$$ and $$\widehat{{\sigma}^{2}}$$ denote the mean and variance of the layer input, respectively, and $$\mu $$ and $${\sigma}^{2}$$ denote the latest values of the moving mean and variance, respectively. After training, the software uses the most recent value of the moving mean and variance statistics. This option supports CPU and single GPU training only.
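
The update steps above amount to the following element-wise running estimates (a minimal sketch; `trainNetwork` performs this internally for each batch normalization layer):

```matlab
% Running estimates used when BatchNormalizationStatistics is "moving".
% muHat and sigma2Hat are the current mini-batch mean and variance of the layer
% input; lambdaMu and lambdaSigma2 are the mean and variance decay values.
mu     = lambdaMu*muHat         + (1 - lambdaMu)*mu;
sigma2 = lambdaSigma2*sigma2Hat + (1 - lambdaSigma2)*sigma2;
```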

**Gradient Clipping**

`GradientThreshold` — Gradient threshold
`Inf` (default) | positive scalar

Gradient threshold, specified as `Inf` or a positive scalar. If the gradient exceeds the value of `GradientThreshold`, then the gradient is clipped according to the `GradientThresholdMethod` training option.

For more information, see Gradient Clipping.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`GradientThresholdMethod` — Gradient threshold method
`'l2norm'` (default) | `'global-l2norm'` | `'absolute-value'`

Gradient threshold method used to clip gradient values that exceed the gradient threshold, specified as one of the following:

- `'l2norm'` — If the L_{2} norm of the gradient of a learnable parameter is larger than `GradientThreshold`, then scale the gradient so that the L_{2} norm equals `GradientThreshold`.
- `'global-l2norm'` — If the global L_{2} norm, *L*, is larger than `GradientThreshold`, then scale all gradients by a factor of `GradientThreshold/`*L*. The global L_{2} norm considers all learnable parameters.
- `'absolute-value'` — If the absolute value of an individual partial derivative in the gradient of a learnable parameter is larger than `GradientThreshold`, then scale the partial derivative to have magnitude equal to `GradientThreshold` and retain the sign of the partial derivative.

For more information, see Gradient Clipping.
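
As an illustration, a minimal sketch of the `'l2norm'` rule applied to one gradient array (this is not the internal implementation, just the rule stated above; the function name is hypothetical):

```matlab
% Sketch of the 'l2norm' clipping rule for one learnable parameter's gradient.
function g = clipGradientL2Norm(g,gradientThreshold)
    gradNorm = sqrt(sum(g(:).^2));               % L2 norm of the gradient
    if gradNorm > gradientThreshold
        g = g*(gradientThreshold/gradNorm);      % rescale so the norm equals the threshold
    end
end
```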

**Sequence Options**

`SequenceLength` — Option to pad or truncate sequences
`"longest"` (default) | `"shortest"` | positive integer

Option to pad, truncate, or split input sequences, specified as one of the following:

- `"longest"` — Pad sequences in each mini-batch to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the network.
- `"shortest"` — Truncate sequences in each mini-batch to have the same length as the shortest sequence. This option ensures that no padding is added, at the cost of discarding data.
- Positive integer — For each mini-batch, pad the sequences to the nearest multiple of the specified length that is greater than the longest sequence length in the mini-batch, and then split the sequences into smaller sequences of the specified length. If splitting occurs, then the software creates extra mini-batches. Use this option if the full sequences do not fit in memory. Alternatively, try reducing the number of sequences per mini-batch by setting the `MiniBatchSize` option to a lower value.

To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `char` | `string`

`SequencePaddingDirection` — Direction of padding or truncation
`"right"` (default) | `"left"`

Direction of padding or truncation, specified as one of the following:

- `"right"` — Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of the sequences.
- `"left"` — Pad or truncate sequences on the left. The software truncates or adds padding to the start of the sequences so that the sequences end at the same time step.

Because recurrent layers process sequence data one time step at a time, when the recurrent layer `OutputMode` property is `'last'`, any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the `SequencePaddingDirection` option to `"left"`.

For sequence-to-sequence networks (when the `OutputMode` property is `'sequence'` for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the `SequencePaddingDirection` option to `"right"`.

To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.
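
For example, a minimal sketch that pads sequences on the left so that all sequences in a mini-batch end at the same time step:

```matlab
% Pad each mini-batch to the longest sequence, adding padding at the start.
options = trainingOptions("adam", ...
    SequenceLength="longest", ...
    SequencePaddingDirection="left");
```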

`SequencePaddingValue` — Value to pad sequences
`0` (default) | scalar

Value by which to pad input sequences, specified as a scalar.

The option is valid only when `SequenceLength` is `"longest"` or a positive integer. Do not pad sequences with `NaN`, because doing so can propagate errors throughout the network.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

**Hardware Options**

`ExecutionEnvironment` — Hardware resource for training network
`'auto'` | `'cpu'` | `'gpu'` | `'multi-gpu'` | `'parallel'`

Hardware resource for training network, specified as one of the following:

- `'auto'` — Use a GPU if one is available. Otherwise, use the CPU.
- `'cpu'` — Use the CPU.
- `'gpu'` — Use the GPU.
- `'multi-gpu'` — Use multiple GPUs on one machine, using a local parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts a parallel pool with pool size equal to the number of available GPUs.
- `'parallel'` — Use a local or remote parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts one using the default cluster profile. If the pool has access to GPUs, then only workers with a unique GPU perform training computation. If the pool does not have GPUs, then training takes place on all available CPU workers instead.

For more information on when to use the different execution environments, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud.

The `'gpu'`, `'multi-gpu'`, and `'parallel'` options require Parallel Computing Toolbox™. To use a GPU for deep learning, you must also have a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). If you choose one of these options and Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

To see an improvement in performance when training in parallel, try scaling up the `MiniBatchSize` and `InitialLearnRate` training options by the number of GPUs.

The `'multi-gpu'` and `'parallel'` options do not support networks containing custom layers with state parameters or built-in layers that are stateful at training time. For example:

- Recurrent layers such as `LSTMLayer`, `BiLSTMLayer`, or `GRULayer` objects when the `SequenceLength` training option is a positive integer
- `BatchNormalizationLayer` objects when the `BatchNormalizationStatistics` training option is set to `'moving'`

`WorkerLoad` — Parallel worker load division
scalar from `0` to `1` | positive integer | numeric vector

Parallel worker load division between GPUs or CPUs, specified as one of the following:

- Scalar from `0` to `1` — Fraction of workers on each machine to use for network training computation. If you train the network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.
- Positive integer — Number of workers on each machine to use for network training computation. If you train the network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.
- Numeric vector — Network training load for each worker in the parallel pool. For a vector `W`, worker `i` gets a fraction `W(i)/sum(W)` of the work (number of examples per mini-batch). If you train a network using data in a mini-batch datastore with background dispatch enabled, then you can assign a worker load of 0 to use that worker for fetching data in the background. The specified vector must contain one value per worker in the parallel pool.

If the parallel pool has access to GPUs, then workers without a unique GPU are never used for training computation. The default for pools with GPUs is to use all workers with a unique GPU for training computation, and the remaining workers for background dispatch. If the pool does not have access to GPUs and CPUs are used for training, then the default is to use one worker per machine for background data dispatch.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`DispatchInBackground` — Flag to enable background dispatch
`0` (false) (default) | `1` (true)

Flag to enable background dispatch (asynchronous prefetch queuing) to read training data from datastores, specified as `0` (false) or `1` (true). Background dispatch requires Parallel Computing Toolbox.

`DispatchInBackground` is only supported for datastores that are partitionable. For more information, see Use Datastore for Parallel Training and Background Dispatching.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

**Checkpoints**

`CheckpointPath` — Path for saving checkpoint networks
`""` (default) | character vector

Path for saving the checkpoint networks, specified as a character vector or string scalar.

- If you do not specify a path (that is, you use the default `""`), then the software does not save any checkpoint networks.
- If you specify a path, then `trainNetwork` saves checkpoint networks to this path and assigns a unique name to each network. You can then load any checkpoint network and resume training from that network.
- If the folder does not exist, then you must first create it before specifying the path for saving the checkpoint networks. If the path you specify does not exist, then `trainingOptions` returns an error.

The `CheckpointFrequency` and `CheckpointFrequencyUnit` options specify the frequency of saving checkpoint networks.

For more information about saving network checkpoints, see Save Checkpoint Networks and Resume Training.
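
For example, a sketch that saves a checkpoint network every 5 epochs to an existing folder (the folder name is hypothetical and must already exist):

```matlab
% Save a checkpoint network every 5 epochs. "checkpoints" is a hypothetical
% folder that must already exist.
options = trainingOptions("sgdm", ...
    CheckpointPath="checkpoints", ...
    CheckpointFrequency=5, ...
    CheckpointFrequencyUnit="epoch");
```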

**Data Types: **`char` | `string`

`CheckpointFrequency` — Frequency of saving checkpoint networks
`1` (default) | positive integer

Frequency of saving checkpoint networks, specified as a positive integer.

If `CheckpointFrequencyUnit` is `'epoch'`, then the software saves checkpoint networks every `CheckpointFrequency` epochs.

If `CheckpointFrequencyUnit` is `'iteration'`, then the software saves checkpoint networks every `CheckpointFrequency` iterations.

This option only has an effect when `CheckpointPath` is nonempty.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

`CheckpointFrequencyUnit` — Checkpoint frequency unit
`'epoch'` (default) | `'iteration'`

Checkpoint frequency unit, specified as `'epoch'` or `'iteration'`.

If `CheckpointFrequencyUnit` is `'epoch'`, then the software saves checkpoint networks every `CheckpointFrequency` epochs.

If `CheckpointFrequencyUnit` is `'iteration'`, then the software saves checkpoint networks every `CheckpointFrequency` iterations.

This option only has an effect when `CheckpointPath` is nonempty.

`OutputFcn` — Output functions
function handle | cell array of function handles

Output functions to call during training, specified as a function handle or cell array of function handles. `trainNetwork` calls the specified functions once before the start of training, after each iteration, and once after training has finished. `trainNetwork` passes a structure containing information in the following fields:

Field | Description |
---|---|
`Epoch` | Current epoch number |
`Iteration` | Current iteration number |
`TimeSinceStart` | Time in seconds since the start of training |
`TrainingLoss` | Current mini-batch loss |
`ValidationLoss` | Loss on the validation data |
`BaseLearnRate` | Current base learning rate |
`TrainingAccuracy` | Accuracy on the current mini-batch (classification networks) |
`TrainingRMSE` | RMSE on the current mini-batch (regression networks) |
`ValidationAccuracy` | Accuracy on the validation data (classification networks) |
`ValidationRMSE` | RMSE on the validation data (regression networks) |
`State` | Current training state, with a possible value of `"start"`, `"iteration"`, or `"done"` |

If a field is not calculated or relevant for a certain call to the output functions, then that field contains an empty array.

You can use output functions to display or plot progress information, or to stop training. To stop training early, make your output function return `1` (true). If any output function returns `1` (true), then training finishes and `trainNetwork` returns the latest network. For an example showing how to use output functions, see Customize Output During Deep Learning Network Training.
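
For example, a minimal sketch of an output function that stops training once the validation accuracy reaches 95% (the function name and threshold are hypothetical; the field names come from the table above):

```matlab
% Hypothetical output function: stop training at 95% validation accuracy.
% The ValidationAccuracy field is empty when no validation occurs at this call.
function stop = stopAtHighValidationAccuracy(info)
    stop = false;
    if ~isempty(info.ValidationAccuracy)
        stop = info.ValidationAccuracy >= 95;
    end
end
```

You can then pass the function handle in the training options, for example `OutputFcn=@stopAtHighValidationAccuracy`.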

**Data Types: **`function_handle` | `cell`

## Output Arguments

`options` — Training options
`TrainingOptionsSGDM` | `TrainingOptionsRMSProp` | `TrainingOptionsADAM`

Training options, returned as a `TrainingOptionsSGDM`, `TrainingOptionsRMSProp`, or `TrainingOptionsADAM` object. To train a neural network, use the training options as an input argument to the `trainNetwork` function.

If `solverName` is `'sgdm'`, `'rmsprop'`, or `'adam'`, then the training options are returned as a `TrainingOptionsSGDM`, `TrainingOptionsRMSProp`, or `TrainingOptionsADAM` object, respectively.

You can edit training option properties of `TrainingOptionsSGDM`, `TrainingOptionsADAM`, and `TrainingOptionsRMSProp` objects directly. For example, to change the mini-batch size after using the `trainingOptions` function, you can edit the `MiniBatchSize` property directly:

```matlab
options = trainingOptions('sgdm');
options.MiniBatchSize = 64;
```

## Tips

- For most deep learning tasks, you can use a pretrained network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Train Deep Learning Network to Classify New Images. Alternatively, you can create and train networks from scratch using `layerGraph` objects with the `trainNetwork` and `trainingOptions` functions.
- If the `trainingOptions` function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Define Deep Learning Network for Custom Training Loops.

## Algorithms

### Initial Weights and Biases

For convolutional and fully connected layers, the initialization for the weights and biases is given by the `WeightsInitializer` and `BiasInitializer` properties of the layers, respectively. For examples showing how to change the initialization for the weights and biases, see Specify Initial Weights and Biases in Convolutional Layer and Specify Initial Weights and Biases in Fully Connected Layer.

### Stochastic Gradient Descent

The standard gradient descent algorithm updates the network parameters (weights and biases) to minimize the loss function by taking small steps at each iteration in the direction of the negative gradient of the loss,

$${\theta}_{\ell +1}={\theta}_{\ell}-\alpha \nabla E\left({\theta}_{\ell}\right),$$

where $$\ell $$ is the iteration number, $$\alpha >0$$ is the learning rate, $$\theta $$ is the parameter vector, and $$E\left(\theta \right)$$ is the loss function. In the standard gradient descent algorithm, the gradient of the loss function, $$\nabla E\left(\theta \right)$$, is evaluated using the entire training set, so the algorithm uses the entire data set at once.

By contrast, at each iteration the *stochastic* gradient descent algorithm evaluates the gradient and updates the parameters using a subset of the training data. A different subset, called a mini-batch, is used at each iteration. The full pass of the training algorithm over the entire training set using mini-batches is one *epoch*. Stochastic gradient descent is stochastic because the parameter updates computed using a mini-batch are a noisy estimate of the parameter update that would result from using the full data set. You can specify the mini-batch size and the maximum number of epochs by using the `MiniBatchSize` and `MaxEpochs` training options, respectively.

### Stochastic Gradient Descent with Momentum

The stochastic gradient descent algorithm can oscillate along the path of steepest descent towards the optimum. Adding a momentum term to the parameter update is one way to reduce this oscillation [2]. The stochastic gradient descent with momentum (SGDM) update is

$${\theta}_{\ell +1}={\theta}_{\ell}-\alpha \nabla E\left({\theta}_{\ell}\right)+\gamma \left({\theta}_{\ell}-{\theta}_{\ell -1}\right),$$

where $$\gamma $$ determines the contribution of the previous gradient step to the current iteration. You can specify this value using the `Momentum` training option. To train a neural network using the stochastic gradient descent with momentum algorithm, specify `'sgdm'` as the first input argument to `trainingOptions`. To specify the initial value of the learning rate *α*, use the `InitialLearnRate` training option. You can also specify different learning rates for different layers and parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.
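
A minimal sketch of this update for a single parameter array (illustrative only; `trainNetwork` performs the update internally):

```matlab
% One SGDM step. gradE is the mini-batch gradient, alpha the learning rate,
% and gamma the Momentum value; velocity stores the previous update,
% i.e. gamma*(theta_l - theta_(l-1)).
velocity = -alpha*gradE + gamma*velocity;
theta = theta + velocity;
```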

### RMSProp

Stochastic gradient descent with momentum uses a single learning rate for all the parameters. Other optimization algorithms seek to improve network training by using learning rates that differ by parameter and can automatically adapt to the loss function being optimized. RMSProp (root mean square propagation) is one such algorithm. It keeps a moving average of the element-wise squares of the parameter gradients,

$${v}_{\ell}={\beta}_{2}{v}_{\ell -1}+(1-{\beta}_{2}){[\nabla E\left({\theta}_{\ell}\right)]}^{2}$$

where *β_{2}* is the decay rate of the moving average. Common values of the decay rate are 0.9, 0.99, and 0.999. The corresponding averaging lengths of the squared gradients equal *1/(1-β_{2})*, that is, 10, 100, and 1000 parameter updates, respectively. You can specify *β_{2}* by using the `SquaredGradientDecayFactor` training option. The RMSProp algorithm uses this moving average to normalize the updates of each parameter individually,

$${\theta}_{\ell +1}={\theta}_{\ell}-\frac{\alpha \nabla E\left({\theta}_{\ell}\right)}{\sqrt{{v}_{\ell}}+\epsilon}$$

where the division is performed element-wise. Using RMSProp effectively decreases the learning rates of parameters with large gradients and increases the learning rates of parameters with small gradients. *ɛ* is a small constant added to avoid division by zero. You can specify *ɛ* by using the `Epsilon` training option, but the default value usually works well. To use RMSProp to train a neural network, specify `'rmsprop'` as the first input to `trainingOptions`.
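
A minimal sketch of one RMSProp step for a single parameter array (illustrative only):

```matlab
% One RMSProp step. beta2 corresponds to SquaredGradientDecayFactor and
% epsilon to the Epsilon training option.
v = beta2*v + (1 - beta2)*gradE.^2;               % moving average of squared gradients
theta = theta - alpha*gradE./(sqrt(v) + epsilon); % element-wise normalized update
```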

### Adam

Adam (derived from *adaptive moment estimation*) [4] uses a parameter update that is
similar to RMSProp, but with an added momentum term. It keeps an element-wise moving average
of both the parameter gradients and their squared values,

$${m}_{\ell}={\beta}_{1}{m}_{\ell -1}+(1-{\beta}_{1})\nabla E\left({\theta}_{\ell}\right)$$

$${v}_{\ell}={\beta}_{2}{v}_{\ell -1}+(1-{\beta}_{2}){[\nabla E\left({\theta}_{\ell}\right)]}^{2}$$

You can specify the *β_{1}* and *β_{2}* decay rates using the `GradientDecayFactor` and `SquaredGradientDecayFactor` training options, respectively. Adam uses the moving averages to update the network parameters as

$${\theta}_{\ell +1}={\theta}_{\ell}-\frac{\alpha {m}_{\ell}}{\sqrt{{v}_{\ell}}+\epsilon}$$

If gradients over many iterations are similar, then using a moving average of the gradient enables the parameter updates to pick up momentum in a certain direction. If the gradients contain mostly noise, then the moving average of the gradient becomes smaller, and so the parameter updates become smaller too. You can specify *ɛ* by using the `Epsilon` training option. The default value usually works well, but for certain problems a value as large as 1 works better. To use Adam to train a neural network, specify `'adam'` as the first input to `trainingOptions`. The full Adam update also includes a mechanism to correct a bias that appears at the beginning of training. For more information, see [4].
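
A minimal sketch of one Adam step without the bias-correction mechanism mentioned above (illustrative only):

```matlab
% One (uncorrected) Adam step. beta1 and beta2 correspond to the
% GradientDecayFactor and SquaredGradientDecayFactor training options.
m = beta1*m + (1 - beta1)*gradE;              % moving average of gradients
v = beta2*v + (1 - beta2)*gradE.^2;           % moving average of squared gradients
theta = theta - alpha*m./(sqrt(v) + epsilon); % element-wise update
```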

Specify the learning rate *α* for all optimization algorithms by using the `InitialLearnRate` training option. The effect of the learning rate is different for the different optimization algorithms, so the optimal learning rates are also different in general. You can also specify learning rates that differ by layers and by parameter. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.

### Gradient Clipping

If the gradients increase in magnitude exponentially, then the training is unstable and can diverge within a few iterations. This "gradient explosion" is indicated by a training loss that goes to `NaN` or `Inf`. Gradient clipping helps prevent gradient explosion by stabilizing the training at higher learning rates and in the presence of outliers [3]. Gradient clipping enables networks to be trained faster, and does not usually impact the accuracy of the learned task.

There are two types of gradient clipping.

- Norm-based gradient clipping rescales the gradient based on a threshold, and does not change the direction of the gradient. The `'l2norm'` and `'global-l2norm'` values of `GradientThresholdMethod` are norm-based gradient clipping methods.
- Value-based gradient clipping clips any partial derivative greater than the threshold, which can result in the gradient arbitrarily changing direction. Value-based gradient clipping can have unpredictable behavior, but sufficiently small changes do not cause the network to diverge. The `'absolute-value'` value of `GradientThresholdMethod` is a value-based gradient clipping method.

### L_{2} Regularization

Adding a regularization term for the weights to the loss function $$E\left(\theta \right)$$ is one way to reduce overfitting [1], [2]. The regularization term is also called *weight decay*. The loss function with the regularization term takes the form

$${E}_{R}\left(\theta \right)=E\left(\theta \right)+\lambda \Omega \left(w\right),$$

where $$w$$ is the weight vector, $$\lambda $$ is the regularization factor (coefficient), and the regularization function $$\Omega \left(w\right)$$ is

$$\Omega \left(w\right)=\frac{1}{2}{w}^{T}w.$$

Note that the biases are not regularized [2]. You can specify the regularization factor $$\lambda $$ by using the `L2Regularization` training option. You can also specify different regularization factors for different layers and parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.

The loss function that the software uses for network training includes the regularization term. However, the loss value displayed in the command window and training progress plot during training is the loss on the data only and does not include the regularization term.

## References

[1] Bishop, C. M. *Pattern Recognition
and Machine Learning*. Springer, New York, NY, 2006.

[2] Murphy, K. P. *Machine Learning:
A Probabilistic Perspective*. The MIT Press, Cambridge,
Massachusetts, 2012.

[3] Pascanu, R., T. Mikolov,
and Y. Bengio. "On the difficulty of training recurrent neural networks".
*Proceedings of the 30th International Conference on Machine
Learning*. Vol. 28(3), 2013, pp. 1310–1318.

[4] Kingma, Diederik, and Jimmy Ba. "Adam: A method for stochastic optimization." *arXiv preprint arXiv:1412.6980* (2014).

## Version History

**Introduced in R2016a**

### R2018b: `ValidationPatience` training option default is `Inf`

*Behavior changed in R2018b*

Starting in R2018b, the default value of the `ValidationPatience` training option is `Inf`, which means that automatic stopping via validation is turned off. This behavior prevents the training from stopping before sufficiently learning from the data.

In previous versions, the default value was `5`. To reproduce this behavior, set the `ValidationPatience` option to `5`.

### R2018b: Different file name for checkpoint networks

*Behavior changed in R2018b*

Starting in R2018b, when saving checkpoint networks, the software assigns file names beginning with `net_checkpoint_`. In previous versions, the software assigned file names beginning with `convnet_checkpoint_`.

If you have code that saves and loads checkpoint networks, then update your code to load files with the new name.

## See Also

`trainNetwork` | `analyzeNetwork` | Deep Network Designer

### Topics

- Create Simple Deep Learning Network for Classification
- Transfer Learning Using Pretrained Network
- Resume Training from Checkpoint Network
- Deep Learning with Big Data on CPUs, GPUs, in Parallel, and on the Cloud
- Specify Layers of Convolutional Neural Network
- Set Up Parameters and Train Convolutional Neural Network
- Define Custom Training Loops, Loss Functions, and Networks
