learngdm
Gradient descent with momentum weight and bias learning function
Syntax
[dW,LS] = learngdm(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learngdm('code')
Description
learngdm is the gradient descent with momentum weight and bias learning function.
[dW,LS] = learngdm(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
takes several inputs,
W | S-by-R weight matrix (or S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |
and returns
dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
Learning occurs according to learngdm's learning parameters, shown here with their default values.
LP.lr - 0.01 | Learning rate |
LP.mc - 0.9 | Momentum constant |
info = learngdm('code') returns useful information for each code character vector:
'pnames' | Names of learning parameters |
'pdefaults' | Default learning parameters |
'needg' | Returns 1 if this function uses gW or gA |
Examples
Here you define a random gradient gW for a weight going to a layer with three neurons from an input with two elements. Also define a learning rate of 0.5 and a momentum constant of 0.8:
gW = rand(3,2);
lp.lr = 0.5;
lp.mc = 0.8;
Because learngdm only needs these values to calculate a weight change (see "Algorithms" below), use them to do so. Use the default initial learning state.
ls = [];
[dW,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,ls)
learngdm
returns the weight change and a new learning state.
Algorithms
learngdm calculates the weight change dW for a given neuron from the neuron's input P and error E, the weight (or bias) W, the learning rate LR, and the momentum constant MC, according to gradient descent with momentum:
dW = mc*dWprev + (1-mc)*lr*gW
The previous weight change dWprev is stored and read from the learning state LS.
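The update rule above can be sketched in NumPy. This is an illustrative translation, not the toolbox's own MATLAB code; the function name learngdm_step and the zero-filled initial state are assumptions made for the example (an empty learning state is treated as a zero previous weight change).

```python
import numpy as np

def learngdm_step(gW, dW_prev, lr=0.01, mc=0.9):
    """One gradient-descent-with-momentum update.

    Implements dW = mc*dWprev + (1-mc)*lr*gW, the rule stated above,
    with the toolbox's default lr and mc values. Hypothetical helper,
    not part of the toolbox API.
    """
    return mc * dW_prev + (1 - mc) * lr * gW

# Mirror the shapes and parameters of the MATLAB example: a 3-by-2
# gradient, lr = 0.5, mc = 0.8. A deterministic gradient of ones
# stands in for rand(3,2) so the result is easy to check.
gW = np.ones((3, 2))
dW = learngdm_step(gW, dW_prev=np.zeros_like(gW), lr=0.5, mc=0.8)
print(dW)  # every entry is (1 - 0.8) * 0.5 * 1 = 0.1

# Feeding dW back in as the previous change shows the momentum term:
dW2 = learngdm_step(gW, dW_prev=dW, lr=0.5, mc=0.8)
print(dW2)  # every entry is 0.8*0.1 + 0.2*0.5*1 = 0.18
```

With mc = 0, the rule reduces to plain gradient descent scaled by (1-mc)*lr; larger mc values weight the previous step more heavily, smoothing the trajectory of weight changes.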
Version History
Introduced before R2006a