Conjugate gradient backpropagation with Polak-Ribiére updates
net.trainFcn = 'traincgp'
[net,tr] = train(net,...)
traincgp is a network training function that updates weight and bias
values according to conjugate gradient backpropagation with Polak-Ribiére updates.
net.trainFcn = 'traincgp' sets the network trainFcn property.
[net,tr] = train(net,...) trains the network with traincgp.
Training occurs according to traincgp training parameters, shown here with their default values:
net.trainParam.epochs            1000        Maximum number of epochs to train
net.trainParam.show              25          Epochs between displays (NaN for no displays)
net.trainParam.showCommandLine   false       Generate command-line output
net.trainParam.showWindow        true        Show training GUI
net.trainParam.goal              0           Performance goal
net.trainParam.time              inf         Maximum time to train in seconds
net.trainParam.min_grad          1e-10       Minimum performance gradient
net.trainParam.max_fail          6           Maximum validation failures
net.trainParam.searchFcn         'srchcha'   Name of line search routine to use
Parameters related to line search methods (not all used for all methods):
net.trainParam.alpha     0.001    Scale factor that determines sufficient reduction in perf
net.trainParam.beta      0.1      Scale factor that determines sufficiently large step size
net.trainParam.delta     0.01     Initial step size in interval location step
net.trainParam.gama      0.1      Parameter to avoid small reductions in performance, usually set to 0.1
net.trainParam.low_lim   0.1      Lower limit on change in step size
net.trainParam.up_lim    0.5      Upper limit on change in step size
net.trainParam.maxstep   100      Maximum step length
net.trainParam.minstep   1.0e-6   Minimum step length
net.trainParam.bmax      26       Maximum step size
You can create a standard network that uses traincgp with feedforwardnet or cascadeforwardnet. To prepare a custom network to be trained with traincgp,

1. Set net.trainFcn to 'traincgp'. This sets net.trainParam to traincgp's default parameters.

2. Set net.trainParam properties to desired values.

In either case, calling train with the resulting network trains the network with traincgp.
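As a minimal sketch of these two steps (the network size and parameter values shown are illustrative assumptions, not recommendations):

% Step 1: select traincgp; this also resets net.trainParam to traincgp's defaults.
net = feedforwardnet(10);
net.trainFcn = 'traincgp';

% Step 2: override individual training parameters as needed.
net.trainParam.epochs = 500;        % illustrative value
net.trainParam.max_fail = 10;       % illustrative value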
Train Neural Network Using traincgp Train Function
This example shows how to train a neural network using the
traincgp train function.
Here a neural network is trained to predict body fat percentages.
[x, t] = bodyfat_dataset;
net = feedforwardnet(10, 'traincgp');
net = train(net, x, t);
y = net(x);
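To check the fit, you can evaluate the network's performance function (mean squared error by default for feedforwardnet) on the targets and outputs:

perf = perform(net, t, y)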
Conjugate Gradient Backpropagation with Polak-Ribiére Updates
Another version of the conjugate gradient algorithm was proposed by Polak and Ribiére. As with the Fletcher-Reeves algorithm (traincgf), the search direction at each iteration is determined by

p_k = -g_k + β_k*p_(k-1)
For the Polak-Ribiére update, the constant β_k is computed by

β_k = (Δg_(k-1)' * g_k) / (g_(k-1)' * g_(k-1))
This is the inner product of the previous change in the gradient with the current gradient divided by the norm squared of the previous gradient. See [FlRe64] or [HDB96] for a discussion of the Polak-Ribiére conjugate gradient algorithm.
The traincgp routine has performance similar to
traincgf. It is difficult to predict which algorithm will perform best on a
given problem. The storage requirements for Polak-Ribiére (four vectors) are slightly
larger than for Fletcher-Reeves (three vectors).
traincgp can train any network as long as its weight, net input, and
transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to the following:
X = X + a*dX;
where dX is the search direction. The parameter a is
selected to minimize the performance along the search direction. The line search function
searchFcn is used to locate the minimum point. The first search direction is
the negative of the gradient of performance. In succeeding iterations the search direction is
computed from the new gradient and the previous search direction according to the formula
dX = -gX + dX_old*Z;
where gX is the gradient. The parameter
Z can be
computed in several different ways. For the Polak-Ribiére variation of conjugate gradient,
it is computed according to
Z = ((gX - gX_old)'*gX)/norm_sqr;
where norm_sqr is the norm square of the previous gradient, and
gX_old is the gradient on the previous iteration. See page 78 of Scales
(Introduction to Non-Linear Optimization, 1985) for a more detailed
discussion of the algorithm.
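To make the update rule concrete, here is a minimal standalone sketch (not the toolbox implementation) that applies the Polak-Ribiére update to a small quadratic problem; the quadratic, the backtracking line search standing in for searchFcn, and all constants are illustrative assumptions:

% Minimize f(X) = 0.5*X'*A*X - b'*X with Polak-Ribiére conjugate gradient.
A = [3 1; 1 2];  b = [1; 1];           % illustrative quadratic problem
f = @(X) 0.5*X'*A*X - b'*X;
X = [0; 0];
gX = A*X - b;                          % gradient of performance at X
dX = -gX;                              % first search direction: negative gradient
for k = 1:20
    a = 1;                             % backtracking line search (stand-in for searchFcn)
    while f(X + a*dX) > f(X) + 1e-4*a*(gX'*dX)
        a = 0.5*a;
    end
    X = X + a*dX;                      % step along the search direction
    gX_old = gX;
    gX = A*X - b;                      % new gradient
    if norm(gX) < 1e-10, break, end    % stop on a small gradient
    norm_sqr = gX_old'*gX_old;         % norm square of the previous gradient
    Z = ((gX - gX_old)'*gX)/norm_sqr;  % Polak-Ribiére parameter
    dX = -gX + dX*Z;                   % new search direction
end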
Training stops when any of these conditions occurs:
The maximum number of epochs (repetitions) is reached.
The maximum amount of time is exceeded.
Performance is minimized to the goal.
The performance gradient falls below min_grad.
Validation performance (validation error) has increased more than max_fail times since the last time it decreased (when using validation).
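The training record tr returned by train reports which of these conditions ended training; for example:

[net, tr] = train(net, x, t);
tr.stop    % reason training stopped, e.g. 'Maximum epoch reached.'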
Scales, L.E., Introduction to Non-Linear Optimization, New York, Springer-Verlag, 1985
Introduced before R2006a