Artificial neural network - weights & biases - prediction of output manually

Dear sir, I have used NFtool in MATLAB R2013a. The network has 4 input neurons, 10 hidden neurons and 1 output neuron (4-10-1). After training, the weights and biases are as follows:
w1 = [ 0.750864871929357  -1.04662070128428   0.555215506062414  -2.41372663384;
      -1.33674312135999   -1.85497432421788   0.171644014211756  -1.00901376117434;
      -1.17697464915732   -1.08381929419222   2.01867242513843    0.385346102708722;
      -0.0982385098389994  1.43234967610556  -1.41891080319675   -2.09657813780530;
       1.08226078842391   -1.36558283863748   1.20315995998298    1.33386725200220;
      -1.44768438970138    1.42745137761629  -0.548723980920127   1.52251782685021;
      -0.230162677732705  -1.06104686232403   2.13730692111303    0.0887419171415393;
      -1.56156708761931    0.954115050802265  1.54060918006507   -0.37101963949;
       1.10848762448537   -0.547779828726692  0.623497558837630  -2.08211223923105;
      -0.796185938428902  -1.87122284630215  -0.667945823175087  -0.3641647426447];
w2=[-1.28202522701169,-0.542561923373967,1.19938712052237,0.790702378108044,0.369732822823546,0.135892694919764,0.604111142205213,-0.00267333272566883,-0.176734563355046,0.637073092209164];
b1=[-2.13336267394184;1.92619564521441;1.70251721423456;0.930601278940955;0.0521504054343906;-0.766964385399286;-1.16952707663656;-1.86958140128970;1.81604428796975;-3.22036974290436];
b2=[-0.165085306853076];
Based on these weights and biases, I predicted the output z with the following equation:
z = b2 + w2*tansig(b1 + w1*x');
For one of the input sets (200 150 30 10), this gives z = -0.0694, which is not equal to the actual value of 61.5.
But if I evaluate the same input with the sim function, the output is 61.53, which is close to the actual value of 61.5.
Why this discrepancy? I also tried the mapminmax function, but with no luck. Can anyone help me?
  1 Comment
Greg Heath on 17 Dec 2014 (edited 17 Dec 2014)
We need details about your code, not your weight values. How many examples are used for training, validation and testing? List your command-line training record:
tr = tr % No semicolon
In particular, what is tr.perf etc.?


Accepted Answer

Greg Heath on 18 Dec 2014
99.9% of the time, when a hand-coded equation does not agree with the net output, the input normalization and/or the output de-normalization has been applied incorrectly.
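Concretely: fitnet's default mapminmax processing scales each input row and the target to [-1, 1] before training, so the stored weights operate in normalized space. A minimal sketch of the corrected manual forward pass (assuming the default processFcns {'removeconstantrows','mapminmax'}, so the mapminmax settings sit at index 2 of processSettings; adjust the index if your processFcns differ):

```matlab
% x is a 4x1 raw input column vector; net is the trained fitnet
xs = net.inputs{1}.processSettings{2};   % mapminmax settings for the input
ts = net.outputs{2}.processSettings{2};  % mapminmax settings for the target

xn = mapminmax('apply', x, xs);          % normalize input to [-1, 1]
zn = b2 + w2*tansig(b1 + w1*xn);         % forward pass in normalized space
z  = mapminmax('reverse', zn, ts);       % de-normalize the output
```

With both the 'apply' and 'reverse' steps in place, z should match sim(net, x) to within rounding of the stored weight values.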
Hope this helps.
Thank you for formally accepting my answer
Greg
  4 Comments
Image Analyst on 20 Dec 2014
Raja's "Answer" moved here:
Dear sir, I have used NFtool in MATLAB R2013a. The network has 4 input neurons, 10 hidden neurons and 1 output neuron (4-10-1). There are 30 examples, split Training:Testing:Validation = 20:5:5. The data are as follows:
Exp No  Set    Input             Output
 1      Train  100  150  30  10  83.8
 2      Train  200  150  30  10  61.5
 3      Train  100  300  30  10  80
 4      Train  200  300  30  10  57.2
 5      Test   100  150  50  10  97.4
 6      Test   200  150  50  10  90.2
 7      Val    100  300  50  10  98.1
 8      Train  200  300  50  10  92.4
 9      Val    100  150  30  30  96.6
10      Train  200  150  30  30  78.8
11      Train  100  300  30  30  97.4
12      Train  200  300  30  30  82.8
13      Train  100  150  50  30  97.6
14      Train  200  150  50  30  91.6
15      Train  100  300  50  30  99.4
16      Val    200  300  50  30  95.8
17      Val     50  225  40  20  99.1
18      Test   250  225  40  20  90.1
19      Train  150   75  40  20  88.2
20      Train  150  375  40  20  96.8
21      Val    150  225  20  20  57.9
22      Train  150  225  60  20  96.3
23      Train  150  225  40   0  50.7
24      Test   150  225  40  40  98.2
25      Train  150  225  40  20  95.1
26      Train  150  225  40  20  96.4
27      Test   150  225  40  20  97.3
28      Train  150  225  40  20  97.1
29      Train  150  225  40  20  97.1
30      Train  150  225  40  20  96
After getting a satisfactory MSE and R2, I had the tool generate the "advanced script", which produced the following code:
% Solve an Input-Output Fitting problem with a Neural Network
% Script generated by NFTOOL
% Created Thu Dec 18 04:37:32 IST 2014
%
% This script assumes these variables are defined:
%
%   IP - input data.
%   OP - target data.
inputs = IP';
targets = OP';

% Create a Fitting Network
hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize);

% Choose Input and Output Pre/Post-Processing Functions
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};

% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand';  % Divide data randomly
net.divideMode = 'sample';     % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% For help on training function 'trainlm' type: help trainlm
% For a list of all training functions type: help nntrain
net.trainFcn = 'trainlm';  % Levenberg-Marquardt

% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse';  % Mean squared error

% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
    'plotregression', 'plotfit'};

% Train the Network
[net,tr] = train(net,inputs,targets);

% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)

% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)

% View the Network
view(net)

% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotfit(net,inputs,targets)
%figure, plotregression(targets,outputs)
%figure, ploterrhist(errors)
When I typed the following command to get the output for Trial No. 8, i.e. [200 300 50 10], I got 92.0084, which is close to the actual value of 92.4:
TrialNo8 = sim(Greg.net, [200 300 50 10]')
TrialNo8 = 92.0084
At the same time, I retrieved the weights and biases from the code above and wrote an m-file with them. When I evaluate the same input there, I get -2.2861. Why this discrepancy? Kindly help!
GrgeANNwt([200 300 50 10])
ans = -2.2861
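This reproduces Greg's point exactly: the m-file feeds raw inputs into weights that were trained in normalized space. The mapminmax mapping can also be written out by hand; as a sketch, the min/max values below are read off the 30-row table above, and are only valid if mapminmax was configured on exactly that data (with w1, w2, b1, b2 defined as in the question):

```matlab
xmin = [50; 75; 20; 0];    xmax = [250; 375; 60; 40];  % per-row input ranges
tmin = 50.7;               tmax = 99.4;                % target range

x  = [200; 300; 50; 10];
xn = 2*(x - xmin)./(xmax - xmin) - 1;     % mapminmax 'apply' by hand

zn = b2 + w2*tansig(b1 + w1*xn);          % forward pass in [-1, 1] space
z  = (zn + 1)*(tmax - tmin)/2 + tmin;     % mapminmax 'reverse' by hand
```

The -2.2861 from GrgeANNwt is a value in the normalized [-1, 1] output space (slightly outside it, since tansig outputs can combine beyond the training range); it was never de-normalized back to the target scale.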
Irfan Majid on 24 Aug 2019
Dear Mr Heath,
Although you posted this answer a long time ago, I was stuck for quite a while trying to get the right answers from stored weights and biases because I was not normalizing the inputs. Your answer solved it for me. Bless you.
Irfan

