Layer-structure prediction of fitrnet not yielding same answer as predict(Model,input)
Greetings,
I've created a single-layer regression neural network using the fitrnet command. The network has 3 inputs and 1 hidden layer with 2 nodes. The activation function is the rectified linear unit (ReLU).
In trying to understand how all the layers link, I followed the instructions in the documentation example Predict Using Layer Structure of Regression Neural Network Model. The result I get using that flow does not agree with the prediction made by the predict function, nor does that prediction agree with my hand-written math.
Input = [30.9 30.7 0.244]
Mdl.LayerWeights{1} = [0.9914 1.4559 -1.0186; 10.2493 -7.8115 -6.3606]
Mdl.LayerBiases{1} = [-1.6804 0.3631]
Mdl.LayerWeights{end} = [-1.1448 3.3748]
Mdl.LayerBiases{end} = [8.1000]
predict(Mdl,[30.9 30.7 0.244]) = 21.7515
firststep=(Mdl.LayerWeights{1})*Input'+Mdl.LayerBiases{1} = [73.4008; 75.7016]
relustep=max(firststep,0) = [73.4008; 75.7016]
finalstep=(Mdl.LayerWeights{end})*relustep+Mdl.LayerBiases{end} = 179.5503
I get the same value as finalstep when I do the math by hand. The predict response is the correct one when compared against my actual measured result.
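For reference, here is the hand calculation as a runnable snippet, using the weights and biases listed above (note that the model stores the biases as column vectors, so I have written them that way here):
Input = [30.9 30.7 0.244];
W1 = [0.9914 1.4559 -1.0186; 10.2493 -7.8115 -6.3606]; % Mdl.LayerWeights{1}
b1 = [-1.6804; 0.3631];                                % Mdl.LayerBiases{1} (column vector)
W2 = [-1.1448 3.3748];                                 % Mdl.LayerWeights{end}
b2 = 8.1000;                                           % Mdl.LayerBiases{end}
firststep = W1*Input' + b1      % hidden-layer pre-activation -> [73.4008; 75.7016]
relustep = max(firststep, 0)    % ReLU activation (both values already positive)
finalstep = W2*relustep + b2    % output layer -> 179.5503, not the 21.7515 from predict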
I'm at a loss at this point, and I'm not sure where to look in the workspace for what might have gone wrong. Are there any dot-notation properties I should inspect? Thank you in advance.
4 Comments
Sam Chak
on 17 Oct 2025 at 8:04
Hi @Kevin
I believe that @Anjaneyulu Bairi attempted to assist you by requesting that you provide the complete executable code, as shown in the example below, so that the prediction deviation issue can be thoroughly investigated by users in this forum. You can click the indent icon to toggle the coding field and paste the code, then click the Run icon to execute the code in the browser.
Example:
load patients
tbl = table(Diastolic, Height, Smoker, Weight, Systolic);
rng("default")
c = cvpartition(size(tbl, 1), "Holdout", 0.30);
trainingIndices = training(c);
testIndices = test(c);
tblTrain = tbl(trainingIndices, :);
tblTest = tbl(testIndices, :);
Mdl = fitrnet(tblTrain, "Systolic", "Standardize", true, "IterationLimit", 50);
predictedY = predict(Mdl, tblTest);
plot(tblTest.Systolic, predictedY, ".")
hold on
plot(tblTest.Systolic, tblTest.Systolic)
hold off
xlabel("True Systolic Blood Pressure Levels")
ylabel("Predicted Systolic Blood Pressure Levels")
Accepted Answer
Sam Chak
on 28 Oct 2025 at 9:11
Hi @Kevin
The reason for the discrepancy is that you have set "Standardize" to true, which standardizes the predictor data by centering and scaling each numeric predictor variable. If you leave "Standardize" at its default setting, both predictions will match each other.
T = readtable("datamatlabhelp.xlsx", VariableNamingRule="preserve")
%% Data
Data=xlsread('datamatlabhelp.xlsx');
input1=[Data(:,1)];
input2=[Data(:,2)];
input3=[Data(:,3)];
output=[Data(:,4)];
X=[input1 input2 input3];
Y=output;
%% Training of Neural Nets
rng("default");
c=cvpartition(length(Y),"Holdout",0.2);
trainingIdx = training(c);
XTrain = X(trainingIdx,:);
YTrain = Y(trainingIdx);
testIdx = test(c);
XTest = X(testIdx,:);
YTest = Y(testIdx);
Mdl = fitrnet(XTrain, YTrain, "Activations", "relu", "LayerSizes", 6) % "Standardize" left at its default
testMSE = loss(Mdl, XTest, YTest)
%% Test prediction
Input = [30.9 30.7 0.244];
firststep = Mdl.LayerWeights{1}*Input' + Mdl.LayerBiases{1};
relustep = max(firststep, 0);
finalstep = Mdl.LayerWeights{end}*relustep + Mdl.LayerBiases{end}
predictedY = predict(Mdl, Input)
% Test if the prediction by formula matches the one returned by the predict() object function
isequal(finalstep, predictedY)
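As a quick sanity check (to the best of my knowledge, the trained RegressionNeuralNetwork object stores the standardization statistics in its Mu and Sigma properties, which are empty when "Standardize" is left at its default), you can inspect the model to see whether the predictors are standardized before the layer computations:
% Assumption: Mdl.Mu and Mdl.Sigma hold the centering/scaling values and
% are empty when "Standardize" was left at its default of false
if isempty(Mdl.Mu) && isempty(Mdl.Sigma)
    disp("Predictors are not standardized; the layer formula uses the raw inputs.")
else
    disp("Predictors are standardized before the layer computations.")
end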
More Answers (1)
Sam Chak
on 28 Oct 2025 at 11:29
Hi @Kevin
If you want to use the neural network model that was trained with standardized inputs, you will need to apply the standardized input values in the formula instead of the original ones. The formula itself remains unchanged.
T = readtable("datamatlabhelp.xlsx", VariableNamingRule="preserve")
%% Data
Data = xlsread('datamatlabhelp.xlsx');
input1 = [Data(:,1)];
input2 = [Data(:,2)];
input3 = [Data(:,3)];
output = [Data(:,4)];
X = [input1 input2 input3];
Y = output;
%% Training of Neural Nets
rng("default");
c = cvpartition(length(Y), "Holdout", 0.2);
trainingIdx = training(c);
XTrain = X(trainingIdx,:);
YTrain = Y(trainingIdx);
testIdx = test(c);
XTest = X(testIdx,:);
YTest = Y(testIdx);
Mdl = fitrnet(XTrain, YTrain, "Standardize", true, "Activations", "relu", "LayerSizes", 6)
testMSE = loss(Mdl, XTest, YTest)
%% Test prediction
Input = [30.9, 30.7, 0.244]; % original input values
muX = mean(XTrain) % mean
sigmaX = std(XTrain) % standard deviation
InputStd = (Input - muX)./sigmaX % center and scale the input data
% Prediction using the formula by applying the standardized input
firststep = Mdl.LayerWeights{1}*InputStd' + Mdl.LayerBiases{1};
relustep = max(firststep, 0);
finalstep = Mdl.LayerWeights{end}*relustep + Mdl.LayerBiases{end}
% Prediction using the predict() object function
predictedY = predict(Mdl, Input)
% Test if the prediction by formula approximately matches the one returned by the predict() function
TF = isapprox(finalstep, predictedY, AbsoluteTolerance=1e-10)
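Alternatively, if I'm not mistaken, the trained model itself stores the centering and scaling values, so you can read them from the Mu and Sigma properties instead of recomputing them from XTrain:
% Assumption: for a model trained with "Standardize", true, Mdl.Mu and
% Mdl.Sigma match the mean and standard deviation computed above from XTrain
muX = Mdl.Mu;                       % predictor means stored in the model
sigmaX = Mdl.Sigma;                 % predictor standard deviations stored in the model
InputStd = (Input - muX)./sigmaX;   % same centering and scaling step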