Batch self-organizing map weight learning function
Syntax

[dW,LS] = learnsomb(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsomb('code')
Description

learnsomb is the batch self-organizing map weight learning function.
[dW,LS] = learnsomb(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs:

W - S-by-R weight matrix (or S-by-1 bias vector)
P - R-by-Q input vectors (or ones(1,Q))
Z - S-by-Q weighted input vectors
N - S-by-Q net input vectors
A - S-by-Q output vectors
T - S-by-Q layer target vectors
E - S-by-Q layer error vectors
gW - S-by-R gradient with respect to performance
gA - S-by-Q output gradient with respect to performance
D - S-by-S neuron distances
LP - Learning parameters, none, LP = []
LS - Learning state, initially should be = []

and returns the following:

dW - S-by-R weight (or bias) change matrix
LS - New learning state
Learning occurs according to learnsomb's learning parameters, shown here with their default values:

LP.init_neighborhood = 3 - Initial neighborhood size
LP.steps = 100 - Ordering phase steps
info = learnsomb('code') returns useful information for each code character vector:

'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
Examples

This example defines a random input P, output A, and weight matrix W for a layer with a 2-element input and 6 neurons. It also calculates the positions and distances for the neurons, which are arranged in a 2-by-3 hexagonal pattern.

p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp = learnsomb('pdefaults');
learnsomb only needs these values to calculate a weight change.

ls = [];
[dW,ls] = learnsomb(w,p,[],[],a,[],[],[],[],d,lp,ls)
Network Use

You can create a standard network that uses learnsomb with selforgmap.

To prepare the weights of layer i of a custom network to learn with learnsomb:

1. Set NET.trainFcn to 'trainr'. (NET.trainParam automatically becomes trainr's default parameters.)
2. Set NET.adaptFcn to 'trains'. (NET.adaptParam automatically becomes trains's default parameters.)
3. Set each NET.inputWeights{i,j}.learnFcn to 'learnsomb'.
4. Set each NET.layerWeights{i,j}.learnFcn to 'learnsomb'. (Each weight learning parameter property is automatically set to learnsomb's default parameters.)

To train the network (or enable it to adapt):

1. Set NET.trainParam (or NET.adaptParam) properties as desired.
2. Call train (or adapt).
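The configuration steps above can be sketched in MATLAB. This is a fragment rather than a runnable script: it assumes a custom network object net and weight indices i and j already exist.

```matlab
% Sketch: prepare layer i of a custom network to learn with learnsomb.
% Assumes net, i, and j are already defined.
net.trainFcn = 'trainr';                       % NET.trainParam gets trainr defaults
net.adaptFcn = 'trains';                       % NET.adaptParam gets trains defaults
net.inputWeights{i,j}.learnFcn = 'learnsomb';  % input weights to layer i
net.layerWeights{i,j}.learnFcn = 'learnsomb';  % layer weights into layer i
% Then set net.trainParam (or net.adaptParam) as desired
% and call train (or adapt).
```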
Algorithms

learnsomb calculates the weight changes so that each neuron's new weight vector is the weighted average of the input vectors that the neuron and neurons in its neighborhood responded to with an output of 1.

The ordering phase lasts as many steps as LP.steps. During this phase, the neighborhood is gradually reduced from a maximum size of LP.init_neighborhood down to 1, where it remains from then on.
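The update rule described above can be sketched in Python. This is an illustration of the batch rule as stated here (winner-take-all responses, a linearly shrinking neighborhood), not the toolbox implementation; the function name and the exact decay schedule are assumptions for the sketch.

```python
def batch_som_update(W, P, D, step, init_neighborhood=3, steps=100):
    """Illustrative batch SOM weight change (not the toolbox code).

    W: list of S weight vectors (length R); P: list of Q input vectors;
    D: S-by-S neuron link distances; step: current training step.
    init_neighborhood and steps mirror LP.init_neighborhood and LP.steps.
    """
    S, Q = len(W), len(P)
    # Ordering phase: neighborhood shrinks from init_neighborhood toward 1
    # over `steps` steps, then stays at 1 (assumed linear schedule).
    nd = max(1.0, init_neighborhood * (1 - step / steps))

    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    # Competitive output: the neuron nearest each input responds with 1.
    winners = [min(range(S), key=lambda i: dist2(W[i], p)) for p in P]

    dW = []
    for i in range(S):
        # Inputs this neuron responded to: those won by it or by a
        # neuron within the current neighborhood distance.
        members = [P[q] for q in range(Q) if D[i][winners[q]] <= nd]
        if members:
            # New weight vector = average of those inputs.
            avg = [sum(col) / len(members) for col in zip(*members)]
            dW.append([a - w for a, w in zip(avg, W[i])])
        else:
            # Neurons with no responses keep their old weights.
            dW.append([0.0] * len(W[i]))
    return dW
```

With two well-separated neurons and the neighborhood already reduced to 1, each neuron's weight change moves it exactly to the input it won.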
Introduced in R2008a