Discrete regression plot of neural networks in MATLAB

Hi, I have 31 inputs and 11 outputs, with a sample size of 600. Every output has 3 levels (a high value, a medium value and a low value). I used NN fitting to predict the outputs. The regression diagram turns out like pic1. However, when I changed the output transfer function to a logistic function, it turns out like pic2. I wonder if the transfer function can help transform the discrete values into continuous ones? The number of layers, the number of neurons and the ratio of training data do not seem to matter, as I tried many combinations of them; except for the logistic function in the output layer shown in pic2, the others all look similar to pic1. I also tried pattern recognition, but my outputs are too many (11 * 3) and I cannot get a good confusion plot. Any suggestion for this problem? Should I go with fitting or pattern recognition? Thank you.
  2 Comments
Greg Heath
Greg Heath on 29 Nov 2014
When the output transfer function was purelin, what 3 numerical target values were associated with high, medium and low?
I am confused: you have more than 3 target values on your plots.
Which logistic output function did you use, tansig or logsig? And what 3 values?
More explanation is needed, especially the syntax of the target matrix.
Rain
Rain on 15 Dec 2014
  1. Sorry for the late reply, and thank you for your help.
  2. I have 11 outputs, and each of them has 3 numerical target values associated with high, medium and low. That is why you see more than 3 targets; there should be 3*11 = 33 targets.
  3. I tried both tansig and logsig for my hidden layers. I actually tried different combinations of activation functions (including purelin) for both the hidden and output layers, different numbers of neurons, different numbers of hidden layers (1 and 2), and different percentages of training, validation and testing data.
  4. Different outputs have different low, medium and high values (3 values each), and their ranges are 0~300.
  5. I have 31 inputs. One is gender (coded as 0 and 1); the other 30 inputs are scores of 1~5 on a Likert scale.
  6. This is my syntax (the number of hidden layers and the activation functions can be changed). My results are almost the same as the first picture. I am so confused about it:
inputs = input';    % 31 x 600: columns are samples
targets = output';  % 11 x 600: columns are samples
% Create a Fitting Network
hiddenLayerSize = [31,31];
net = fitnet(hiddenLayerSize);
net.layers{1}.transferFcn = 'logsig';
net.layers{2}.transferFcn = 'logsig';
net.layers{3}.transferFcn = 'purelin';
% net.layers{4}.transferFcn = 'purelin'
% Set up Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Train the Network
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(outputs,targets);
performance = perform(net,targets,outputs)
% View the Network
view(net)
% Uncomment the following lines to enable various plots.
% figure, plotperform(tr)
% figure, plottrainstate(tr)
% figure, plotfit(targets,outputs)
% figure, plotregression(targets,outputs)
% figure, ploterrhist(errors)


Accepted Answer

Greg Heath
Greg Heath on 18 Dec 2014
My understanding is that you have 600 examples with 31 continuous inputs and 11 discrete outputs. Each discrete output has 3 levels.
I suggest transforming each input to the continuous range [-1,1] and each output to the discrete values {-1,0,1}.
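For example, something like this untested sketch would do the transformation (mapminmax is the standard toolbox scaling function; lowVal, medVal and highVal are placeholder 11x1 vectors holding each output's known low/medium/high level):
% Scale each of the 31 input rows to [-1,1] (mapminmax operates row-wise)
[inputsS, psIn] = mapminmax(inputs, -1, 1);
% Recode each of the 11 target rows to the discrete values {-1,0,1}
targetsS = zeros(size(targets));
for k = 1:size(targets,1)
    targetsS(k, targets(k,:) == lowVal(k))  = -1;
    targetsS(k, targets(k,:) == medVal(k))  =  0;
    targetsS(k, targets(k,:) == highVal(k)) =  1;
end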
With the default 0.7/0.15/0.15 fitnet data division, the number of training examples and equations is
Ntrn = 600-2*round(0.15*600) = 420
Ntrneq = Ntrn*11 = 4620
With a 31-H-11 topology the number of weights is
Nw = (31+1)*H+(H+1)*11= 43*H+11
For a robust design the criterion Ntrneq >> Nw yields
H << 107
I would start by obtaining ~100 candidate nets with a double-loop design: 10 random weight initializations for each of the H = 5:5:50 candidate numbers of hidden nodes (see the sketch below).
tansig/purelin should be sufficient. Rounding the output should recover {-1,0,1}.
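A rough, untested sketch of that double-loop search, using inputsS and targetsS from the scaling sketch above and keeping the best of 10 random initializations for each candidate H:
rng(0)                               % reproducible weight initializations
Hvals    = 5:5:50;                   % candidate numbers of hidden nodes
Ntrials  = 10;                       % random initializations per H
bestPerf = Inf;  bestNet = [];
for H = Hvals
    for trial = 1:Ntrials
        net = fitnet(H);             % 31-H-11 topology, tansig/purelin by default
        net.divideParam.trainRatio = 0.70;
        net.divideParam.valRatio   = 0.15;
        net.divideParam.testRatio  = 0.15;
        [net, tr] = train(net, inputsS, targetsS);
        % rank candidates by validation-set MSE
        valPerf = perform(net, targetsS(:,tr.valInd), net(inputsS(:,tr.valInd)));
        if valPerf < bestPerf
            bestPerf = valPerf;  bestNet = net;
        end
    end
end
% Round the best net's outputs to recover the discrete levels {-1,0,1}
discreteOut = max(-1, min(1, round(bestNet(inputsS))));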
Hope this helps.
Thank you for formally accepting my answer
Greg
  2 Comments
Rain
Rain on 28 Dec 2014
Hi Greg, I did what you suggested. The training regression is really good; please refer to the picture.
However, the test R is quite low, around 0.3. I wonder why, and how I can deal with it? Also, do you think my regression plot is acceptable? Usually the dots are around the line, but mine form only three piles. Thank you.
salah mahdi
salah mahdi on 19 Jan 2016
Hi Greg Heath, many thanks for your effort.
Do you have any reference we can use for your suggestion ("I suggest transforming each input to the continuous range [-1,1] and each output to have the discrete values {-1,0,1}")?


More Answers (1)

Greg Heath
Greg Heath on 17 Dec 2014
Scale all 11 targets to the 3 discrete values -1, 0, 1.
Use purelin and round the outputs.
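For example (one untested line, assuming the targets were already recoded to {-1,0,1} and outputs holds the trained net's predictions):
outputsQ = max(-1, min(1, round(outputs)));   % snap each prediction to -1, 0 or 1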
Hope this helps.
Thank you for formally accepting my answer
Greg
  2 Comments
Rain
Rain on 17 Dec 2014
Edited: Rain on 18 Dec 2014
I tried your suggestion. I have attached the syntax for adding a customized transfer function to the MATLAB Neural Network Toolbox. I use this function
function add_trfcn(fname)
open_system('neural');
open_system('neural/Transfer Functions');
set_param('neural','Lock','off')
add_block('simulink/User-Defined Functions/Interpreted MATLAB Function', ...
    ['neural/Transfer' char(13) 'Functions/',fname], ...
    'matlabfcn',fname, 'name',fname)
and add my function using the following code
% Solve an Input-Output Fitting problem with a Neural Network
% Script generated by NFTOOL
%
% This script assumes these variables are defined:
% You should use your own data here, so substitute your variables
%add_trfcn('customFunction')
inputs = input';
targets = transformedOutput';
% Create a Fitting Network
hiddenLayerSize = [31,31];
net = fitnet(hiddenLayerSize);
net.layers{1}.transferFcn = 'logsig';
net.layers{2}.transferFcn = 'logsig';
net.layers{3}.transferFcn = 'customFunction';
% net.layers{4}.transferFcn = 'purelin'
% Set up Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Train the Network
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(outputs,targets);
performance = perform(net,targets,outputs)
% View the Network
view(net)
% Uncomment the following lines to enable various plots.
% figure, plotperform(tr)
% figure, plottrainstate(tr)
% figure, plotfit(targets,outputs)
% figure, plotregression(targets,outputs)
% figure, ploterrhist(errors)
where customFunction is a function defined as follows:
function y = customFunction(x)
[m, n] = size(x);
y = zeros(m,n);
for i = 1:m
    for j = 1:n
        dminus1 = abs(-1 - x(i,j));
        dzero   = abs(x(i,j));
        done    = abs(1 - x(i,j));
        if (min(dminus1, min(dzero,done)) == dminus1)
            y(i,j) = -1;
        elseif (min(dminus1, min(dzero,done)) == dzero)
            y(i,j) = 0;
        else
            y(i,j) = 1;
        end
    end
end
but I get the following error:
>> NonnormailizedNNs
Error using struct
Conversion to struct from double is not possible.
Error in network/subsasgn>getDefaultParam (line 2048)
param = struct(feval(fcn,'defaultParam'));
Error in network/subsasgn>setLayerTransferFcn (line 1224)
net.layers{i}.transferParam = getDefaultParam(transferFcn);
Error in network/subsasgn>network_subsasgn (line 208)
if isempty(err), [net,err] = setLayerTransferFcn(net,i,transferFcn); end
Error in network/subsasgn (line 13)
net = network_subsasgn(net,subscripts,v,netname);
Error in NonnormailizedNNs (line 16)
net.layers{3}.transferFcn = 'customFunction';
Probably I made a mistake in customFunction. Could you please help me with this one? Thank you very much.
Greg Heath
Greg Heath on 20 Jan 2016
Why don't you just use the classifier function patternnet, with 3-dimensional outputs taken from the columns of the 3-dimensional {0,1} unit matrix eye(3)?
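For a single 3-level output that could look something like this untested sketch, where classIdx is a placeholder 1x600 vector of class labels 1/2/3 (low/medium/high):
% One-hot targets: column k of eye(3) encodes class k
I3 = eye(3);
T  = I3(:, classIdx);                     % 3 x 600 matrix of {0,1} columns
net = patternnet(10);                     % 10 hidden nodes, chosen arbitrarily here
[net, tr] = train(net, inputs, T);
[~, predIdx] = max(net(inputs), [], 1);   % recover predicted class 1..3
With 11 such 3-level outputs you could either train 11 separate patternnets or stack the 11 one-hot blocks into a single 33-row target matrix (the 3*11 = 33 targets mentioned earlier in the thread).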

