How to achieve good results in MLP
Hi,
I am trying to implement a neural network for my speech-matching project.
There are 12 features per sample, so my input matrix P is 12*54. The number of classes is 6, so the target matrix T is 6*54.
My code is as follows:
[Pr,Pc] = size(P);                 % Pr = 12 inputs, Pc = 54 samples
T = zeros(6,54);                   % one row per class
out = 9;
for i = 1:6
    T(i, ((i-1)*9 + 1):out) = 1;   % 9 consecutive samples per class
    out = out + 9;
end
[Tr,Tc] = size(T);
Neq = Tc*Tr;                       % number of training equations
L1  = floor((Neq - Tr)/(Pr + Tr + 1));   % hidden layer size
net0to9 = newff(minmax(P), [L1 Tr], {'logsig' 'logsig'}, 'traingdx');
net0to9.performFcn = 'mse';
net0to9.trainParam.goal   = mean(var(T))/100;
net0to9.trainParam.show   = 20;
net0to9.trainParam.epochs = 1200;  % I have tried many different numbers of epochs
[net0to9,tr,Y,E] = train(net0to9,P,T);
When I train the network it sometimes reaches the goal, but when I train it again with the same parameters it runs to the maximum number of epochs instead, yet the results are almost the same. What does that mean?
Can anyone help me choose the epochs parameter?
The results are close to the target output when I test on the training data using
x = sim(net0to9,P);
but the network does not give good results at all on new data.
Can anyone help?
Thanks in advance.
Accepted Answer
Greg Heath
on 21 Nov 2011
for i = 1:6
    T(i, 9*(i-1)+1 : 9*i) = ones(1,9); % vectorized version
end
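As an aside (my own suggestion, not part of the original answer): since T is just 6 blocks of 9 identical one-hot columns, the loop can be avoided entirely with kron:

```matlab
% Build the 6x54 target matrix in one line:
% kron replicates the 1x9 block of ones along the diagonal pattern of eye(6),
% so row i is 1 over columns 9*(i-1)+1 .. 9*i and 0 elsewhere.
T = kron(eye(6), ones(1,9));   % 6x54
```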
You replaced Neq >> Nw with Neq = Nw: your choice of L1 makes the number of unknown weights Nw equal to the number of training equations Neq, so the net can fit the training set exactly but does not generalize well to nontraining data.
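To make the sizing arithmetic concrete, here is a sketch in my own notation (the ratio r is a rule of thumb, not a hard law): for an I-H-O network with biases, Nw = (I+1)*H + (H+1)*O, and generalization wants Neq several times larger than Nw.

```matlab
[I, N] = size(P);            % I = 12 inputs, N = 54 samples (from the post)
[O, ~] = size(T);            % O = 6 outputs
Neq = N*O;                   % number of training equations (324)
Nw  = @(H) (I+1)*H + (H+1)*O;            % weights for an I-H-O net
Hub = (Neq - O)/(I + O + 1);             % H where Neq = Nw (~16.7, the original choice)
r   = 10;                                % target Neq >= r*Nw
H   = floor((Neq/r - O)/(I + O + 1));    % ~1 here; with so few samples, keep H small
```

With only 54 samples the affordable hidden layer is tiny, which is itself a sign that more training data would help.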
You did not initialize rand before calling newff, so repeat runs start from different random initial weights.
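To make runs repeatable, seed the generator before creating the net. A minimal sketch (the exact seeding syntax depends on your MATLAB version; the seed value 0 is arbitrary):

```matlab
rand('twister', 0)   % pre-R2011a style: fix the generator state before newff
% rng(0)             % equivalent call in newer releases
% ... then create and train the net as before
```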
When using newff and other functions, use as many defaults as possible (e.g., trainlm).
I recommend standardizing inputs to zero-mean/unit-variance and using 'tansig' for the hidden layer.
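A minimal sketch of the standardization step using mapstd from the toolbox (the key point, easy to miss: reuse the stored settings on any new data before calling sim):

```matlab
[Pn, ps] = mapstd(P);        % rows of Pn have zero mean and unit variance
% ... train on Pn instead of P ...
% For new data Pnew, apply the SAME transform, do not re-standardize:
% Pnewn = mapstd('apply', Pnew, ps);
```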
Replace mean(var(T))/100 (= 0.0017) with mean(var(T'))/100 (= 0.0014).
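You can check the two numbers directly: var works down the columns, so var(T) gives 54 per-sample variances while var(T') gives the 6 per-class row variances, which is the quantity the MSE goal should be referenced to.

```matlab
goal_cols = mean(var(T))/100    % per-column variances averaged: ~0.0017
goal_rows = mean(var(T'))/100   % per-row variances averaged:    ~0.0014
```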
Hope this helps.
Greg