Gradient descent backpropagation
net.trainFcn = 'traingd' sets the network trainFcn property.
traingd is a network training function that updates weight and bias values according to gradient descent.
Training occurs according to the traingd training parameters, shown here with their default values:
net.trainParam.epochs — Maximum number of epochs to train. The default value is 1000.
net.trainParam.goal — Performance goal. The default value is 0.
net.trainParam.lr — Learning rate. The default value is 0.01.
net.trainParam.max_fail — Maximum validation failures. The default value is 6.
net.trainParam.min_grad — Minimum performance gradient. The default value is 1e-5.
net.trainParam.show — Epochs between displays (NaN for no displays). The default value is 25.
net.trainParam.showCommandLine — Generate command-line output. The default value is false.
net.trainParam.showWindow — Show training GUI. The default value is true.
net.trainParam.time — Maximum time to train in seconds. The default value is inf.
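For example, the following minimal sketch (the hidden layer size of 10 is an arbitrary illustrative choice) creates a network that trains with traingd, displays these parameters, and overrides one default:

% Create a network that trains with traingd; the hidden layer size
% of 10 is an arbitrary illustrative choice.
net = feedforwardnet(10, 'traingd');
net.trainParam               % display the training parameters and defaults
net.trainParam.lr = 0.005;   % override a default before training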
trainedNet — Trained network
Trained network, returned as a network object.
tr — Training record (epoch and perf)
Training record (epoch and perf), returned as a structure whose fields depend on the network training function (net.trainFcn). It can include fields such as:
Training, data division, and performance functions and parameters
Data division indices for training, validation, and test sets
Data division masks for training, validation, and test sets
Number of epochs (num_epochs) and the best epoch (best_epoch)
A list of training state names (states)
Fields for each state name recording its value throughout training
Performances of the best network (best_perf, best_vperf, best_tperf)
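As an illustrative sketch, these fields might be inspected as follows after training; the random data and the three-neuron network here are placeholders for a real problem:

% Illustrative data and network; placeholders for a real problem.
p = rand(2, 50);
t = rand(1, 50);
net = feedforwardnet(3, 'traingd');
[net, tr] = train(net, p, t);
tr.best_epoch     % epoch with the best (validation) performance
tr.perf(end)      % final training performance
plotperform(tr)   % plot the recorded performances over all epochs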
You can create a standard network that uses traingd with feedforwardnet or cascadeforwardnet. To prepare a custom network to be trained with traingd:
Set net.trainFcn to 'traingd'. This sets net.trainParam to traingd's default parameters.
Set net.trainParam properties to desired values.
In either case, calling train with the resulting network trains the network with traingd. See help feedforwardnet and help cascadeforwardnet for examples.
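A minimal sketch of these two steps follows; cascadeforwardnet(5) and the parameter values are illustrative choices rather than recommendations:

% Prepare a network to be trained with traingd.
net = cascadeforwardnet(5);    % any network object works here
net.trainFcn = 'traingd';      % resets net.trainParam to traingd defaults
net.trainParam.lr = 0.02;      % then adjust parameters as needed
net.trainParam.epochs = 500;
% [net, tr] = train(net, x, t); % then train as usual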
The batch steepest descent training function is traingd. The weights and biases are updated in the direction of the negative gradient of the performance function. If you want to train a network using batch steepest descent, you should set the network trainFcn to traingd, and then call the function train. There is only one training function associated with a given network.
There are seven training parameters associated with traingd: epochs, show, goal, time, min_grad, max_fail, and lr.
The learning rate lr is multiplied by the negative of the gradient to determine the changes to the weights and biases. The larger the learning rate, the bigger the step. If the learning rate is made too large, the algorithm becomes unstable. If the learning rate is set too small, the algorithm takes a long time to converge. See page 12-8 of [HDB96] for a discussion of the choice of learning rate.
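The trade-off can be seen by training the same small network with two different learning rates. The sine-fitting data and the two rates below are illustrative; the larger rate may diverge, which is the instability described above:

% Compare a modest and an aggressive learning rate on the same problem.
x = -2:0.1:2;
t = sin(x);
for lr = [0.05 0.5]
    net = feedforwardnet(3, 'traingd');
    net.divideFcn = '';                % use all data for training
    net.trainParam.lr = lr;
    net.trainParam.epochs = 300;
    net.trainParam.showWindow = false; % suppress the training GUI
    [net, tr] = train(net, x, t);
    fprintf('lr = %.2f: final performance %g\n', lr, tr.perf(end));
end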
The training status is displayed every show iterations of the algorithm. (If show is set to NaN, then the training status is never displayed.) The other parameters determine when the training stops. The training stops if the number of iterations exceeds epochs, if the performance function drops below goal, if the magnitude of the gradient is less than min_grad, or if the training time is longer than time seconds. max_fail, which is associated with the early stopping technique, is discussed in Improving Generalization.
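As a sketch, the stopping criteria can be adjusted before calling train; the values below are illustrative, not recommendations:

% Illustrative stopping criteria for a traingd network.
net = feedforwardnet(3, 'traingd');
net.trainParam.epochs   = 2000;   % stop after at most 2000 iterations
net.trainParam.goal     = 1e-6;   % ... or when performance reaches this
net.trainParam.min_grad = 1e-7;   % ... or when the gradient flattens out
net.trainParam.time     = 60;     % ... or after 60 seconds of training
net.trainParam.max_fail = 10;     % validation failures before early stopping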
The following code creates a training set of inputs p and targets t. For batch training, all the input vectors are placed in one matrix.
p = [-1 -1 2 2; 0 5 0 5];
t = [-1 -1 1 1];
Create the feedforward network.
net = feedforwardnet(3,'traingd');
In this simple example, turn off a feature that is introduced later.
net.divideFcn = '';
At this point, you might want to modify some of the default training parameters.
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;
If you want to use the default training parameters, the preceding commands are not necessary.
Now you are ready to train the network.
[net,tr] = train(net,p,t);
The training record tr contains information about the progress of training.
Now you can simulate the trained network to obtain its response to the inputs in the training set.
a = net(p)
a =
   -1.0026   -0.9962    1.0010    0.9960
Try the Neural Network Design demonstration nnd12sd1 [HDB96] for an illustration of the performance of the batch gradient descent algorithm.
traingd can train any network as long as its weight, net input, and
transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to gradient descent:
dX = lr * dperf/dX
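To make the update rule concrete, here is a minimal hand-rolled sketch of the same rule applied to a scalar quadratic performance function. It illustrates the mathematics only and is unrelated to the toolbox internals:

% Minimize perf(x) = (x - 3)^2 by gradient descent.
lr = 0.1;                % learning rate
x  = 0;                  % initial value of the variable
for epoch = 1:50
    grad = 2*(x - 3);    % dperf/dx
    x = x - lr*grad;     % step in the direction of the negative gradient
end
disp(x)                  % approaches the minimizer x = 3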
Training stops when any of these conditions occurs:
The maximum number of epochs (repetitions) is reached.
The maximum amount of time is exceeded.
Performance is minimized to the goal.
The performance gradient falls below min_grad.
Validation performance (validation error) has increased more than max_fail times since the last time it decreased (when using validation).