Bayesian regularization backpropagation
net.trainFcn = 'trainbr' sets the network trainFcn property.
trainbr is a network training function that updates the weight and bias
values according to Levenberg-Marquardt optimization. It minimizes a combination of squared
errors and weights, and then determines the correct combination so as to produce a network that
generalizes well. The process is called Bayesian regularization.
Training occurs according to
trainbr training parameters, shown here
with their default values:
net.trainParam.epochs — Maximum number of epochs to train. The default value is 1000.
net.trainParam.goal — Performance goal. The default value is 0.
net.trainParam.mu — Marquardt adjustment parameter. The default value is 0.005.
net.trainParam.mu_dec — Decrease factor for mu. The default value is 0.1.
net.trainParam.mu_inc — Increase factor for mu. The default value is 10.
net.trainParam.mu_max — Maximum value for mu. The default value is 1e10.
net.trainParam.max_fail — Maximum validation failures. The default value is inf.
net.trainParam.min_grad — Minimum performance gradient. The default value is 1e-7.
net.trainParam.show — Epochs between displays (NaN for no displays). The default value is 25.
net.trainParam.showCommandLine — Generate command-line output. The default value is false.
net.trainParam.showWindow — Show training GUI. The default value is true.
net.trainParam.time — Maximum time to train in seconds. The default value is inf.
Validation stops are disabled by default (max_fail = inf) so that training can continue until an optimal combination of errors and weights is found. However, some weight/bias minimization can still be achieved with shorter training times if validation is enabled by setting max_fail to 6 or some other strictly positive value.
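For example, a minimal sketch of adjusting these parameters before training (the network size and parameter values here are illustrative, not recommendations):

net = feedforwardnet(10,'trainbr');
net.trainParam.epochs = 500;    % train for at most 500 epochs
net.trainParam.show = 50;       % display progress every 50 epochs
net.trainParam.max_fail = 6;    % enable validation stops (disabled by default)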
Train Network with trainbr
This example shows how to solve a problem consisting of inputs
p and targets
t by using a network. It involves fitting
a noisy sine wave.
p = [-1:.05:1];                          % input vector
t = sin(2*pi*p) + 0.1*randn(size(p));    % noisy sine-wave targets
A feed-forward network is created with a hidden layer of 2 neurons.
net = feedforwardnet(2,'trainbr');
Here the network is trained and tested.
net = train(net,p,t);    % train the network
a = net(p)               % simulate the trained network on the inputs
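To check the quality of the fit, you can compare the network outputs against the targets, for instance (a minimal sketch; perform computes the network's performance measure):

perf = perform(net,t,a)    % performance of the trained network
plot(p,t,'o',p,a,'-')      % targets versus network response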
trainedNet — Trained network
Trained network, returned as a network object.
tr — Training record
Training record (epoch and perf), returned as a structure whose fields depend on the network training function (net.NET.trainFcn). It can include fields such as:
Training, data division, and performance functions and parameters
Data division indices for training, validation and test sets
Data division masks for training, validation and test sets
Number of epochs (num_epochs) and the best epoch (best_epoch)
A list of training state names (states)
Fields for each state name recording its value throughout training
Performances of the best network (best_perf, best_vperf, best_tperf)
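To capture the training record, request a second output from train. A minimal sketch (the field names follow the list above):

[net,tr] = train(net,p,t);
tr.best_epoch             % epoch with the best recorded performance
tr.best_perf              % performance of the best network
plot(tr.epoch,tr.perf)    % performance throughout training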
This function uses the Jacobian for calculations, which assumes that performance is a mean or sum of squared errors. Therefore networks trained with this function must use either the mse or sse performance function.
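For example, a minimal sketch of switching a network to the sum squared error performance function before training (mse is the default):

net = feedforwardnet(2,'trainbr');
net.performFcn = 'sse';    % trainbr requires mse or sse
net = train(net,p,t);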
You can create a standard network that uses trainbr with feedforwardnet or cascadeforwardnet. To prepare a custom network to be trained with trainbr,
1. Set NET.trainFcn to 'trainbr'. This sets NET.trainParam to trainbr's default parameters.
2. Set NET.trainParam properties to desired values.
In either case, calling train with the resulting network trains the network with trainbr (see the sketch below). See feedforwardnet and cascadeforwardnet for examples.
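Put together, a minimal sketch of preparing and training a custom network with trainbr (the epochs value is illustrative):

net = feedforwardnet(2);        % or any custom network
net.trainFcn = 'trainbr';       % step 1: also resets net.trainParam to trainbr defaults
net.trainParam.epochs = 300;    % step 2: adjust parameters as desired
net = train(net,p,t);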
trainbr can train any network as long as its weight, net input, and
transfer functions have derivative functions.
Bayesian regularization minimizes a linear combination of squared errors and weights. It also modifies the linear combination so that at the end of training the resulting network has good generalization qualities. See MacKay (Neural Computation, Vol. 4, No. 3, 1992, pp. 415 to 447) and Foresee and Hagan (Proceedings of the International Joint Conference on Neural Networks, June, 1997) for more detailed discussions of Bayesian regularization.
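In outline (following the cited references rather than anything exposed by trainbr itself), the objective being minimized has the form

F = beta*Ed + alpha*Ew

where Ed is the sum of squared network errors, Ew is the sum of squared weights and biases, and trainbr adapts the regularization parameters alpha and beta as training proceeds.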
This Bayesian regularization takes place within the Levenberg-Marquardt algorithm.
Backpropagation is used to calculate the Jacobian jX of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to Levenberg-Marquardt,

jj = jX * jX
je = jX * E
dX = -(jj+I*mu) \ je
where E is all errors and I is the identity matrix.
The adaptive value mu is increased by mu_inc until the change shown above results in a reduced performance value. The change is then made to the network, and mu is decreased by mu_dec.
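A minimal sketch of this update rule, writing the matrix products with explicit transposes and glossing over trainbr's adaptation of the regularization terms. The names X (variable vector), jX (Jacobian of the errors E with respect to X), perfFcn (handle returning scalar performance), mu, mu_inc, and mu_dec are assumptions for illustration:

I  = eye(numel(X));    % identity matrix
jj = jX' * jX;         % Gauss-Newton approximation to the Hessian
je = jX' * E;          % gradient of the squared error
while true
    dX = -(jj + I*mu) \ je;            % candidate Levenberg-Marquardt step
    if perfFcn(X + dX) < perfFcn(X)    % step reduces performance:
        X  = X + dX;                   % accept the change
        mu = mu * mu_dec;              % and move back toward Gauss-Newton
        break
    end
    mu = mu * mu_inc;    % otherwise make the step more gradient-like
end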
Training stops when any of these conditions occurs:
The maximum number of epochs (repetitions) is reached.
The maximum amount of time is exceeded.
Performance is minimized to the goal.
The performance gradient falls below min_grad.
mu exceeds mu_max.
MacKay, David J. C. "Bayesian interpolation." Neural Computation. Vol. 4, No. 3, 1992, pp. 415–447.
Foresee, F. Dan, and Martin T. Hagan. "Gauss-Newton approximation to Bayesian learning." Proceedings of the International Joint Conference on Neural Networks, June, 1997.