- https://www.mathworks.com/help/deeplearning/ref/narxnet.html
- https://www.mathworks.com/help/deeplearning/ref/preparets.html
Machine Learning narx net optimization
I am trying to predict a time series from some data with a NARX model.
I fit the model with 8 inputs (I also tried varying that, but it had no impact on the result) and try to obtain a prediction model.
My training data are:
Input [180000x8]
Ziel [180000x6] (Ziel is German for target)
The goal is to predict the 6 targets.
An example of what I mean, with some random numbers (the comment column shows the row sum):
Ziel = [1.0 0.5 2.0 1.0 2.0 1.5;   % sum = 8.0
        2.0 0.5 1.0 2.0 1.0 0.5;   % sum = 7.0
        1.5 1.5 2.0 1.5 1.0 1.5;   % sum = 9.0
        1.0 1.0 1.5 2.0 2.5 0.2;   % sum = 8.2
        ... ]
The main goal is that the sum of the prediction is correct in each row, but I also need the ratios within each row; that is why I fit the model this way.
I tried varying the delays and the hidden units. I also tried normalization and regularization (without normalization and regularization the training time increases enormously).
The "best" result of the code below looks as follows (red is the prediction, sum(y2_ns); cyan is the target, sum(Ziel)):

This is my code (I am using MATLAB R2014a):
for j = 3:100
    % Convert matrices to neural-network time-series cell format
    X = tonndata(Input, false, false);
    T = tonndata(Ziel, false, false);
    trainFcn = 'trainlm';                       % Levenberg-Marquardt
    inputDelays = 1:8;
    feedbackDelays = 1:8;
    hiddenLayerSize = j;
    net = narxnet(inputDelays, feedbackDelays, hiddenLayerSize, 'open', trainFcn);
    net.inputs{1}.processFcns = {'removeconstantrows', 'mapminmax'};
    net.inputs{2}.processFcns = {'removeconstantrows', 'mapminmax'};
    [x, xi, ai, t] = preparets(net, X, {}, T);
    % Split the data by index along the time dimension
    net.divideFcn = 'divideind';
    net.divideMode = 'time';
    Train_parameter = X_03_01_2008_date;
    Valid_parameter = X_03_01_2009_date;
    net.divideParam.trainInd = 1:Train_parameter;
    net.divideParam.valInd = (Train_parameter+1):Valid_parameter;
    net.divideParam.testInd = (Valid_parameter+1):185440;
    net.performFcn = 'mse';
    net.performParam.regularization = 0.5;      % or without regularization
    net.performParam.normalization = 'percent'; % 'none' or 'standard'
    net.layers{1}.transferFcn = 'tansig';
    net.layers{2}.transferFcn = 'purelin';
    net.plotFcns = {'plotperform', 'plottrainstate', 'plotresponse', ...
        'ploterrcorr', 'plotinerrcorr'};
    [net, tr] = train(net, x, t, xi, ai);
    [y, xfo, afo] = net(x, xi, ai);
    %% Closed Loop Network
    % Convert to closed loop, carrying over the final open-loop states
    [netc, xic, aic] = closeloop(net, xfo, afo);
    [y2, xfc, afc] = netc(X, xic, aic);
    view(netc)
    y2_ns = cell2mat(y2);
    figure, plotregression(Ziel, y2_ns');
    figure(j+100)
    plot(sum(Ziel')', 'c')
    hold on
    plot(sum(y2_ns), 'r')
    saveas(figure(j+100), sprintf('s100%d.png', j))
    SaveName = sprintf('y2_n_2004_2010%d', j);
    save(SaveName, 'y2_ns');
    SaveName = sprintf('net%d', j);
    save(SaveName, 'net');
    net = init(net);                            % reinitialize for the next j
end
Answers (1)
sanidhyak
on 4 Jun 2025
I understand that you are trying to train a "NARX" model using input data of shape "[180000x8]" and target data "[180000x6]" to predict multivariate time series values. You want to ensure that the sum of each predicted row closely matches the sum of the actual row, while also preserving the ratios among the six target variables within each row.
The standard "NARX" training using "trainlm" and the default "MSE" loss might not effectively optimize both sum and ratio constraints simultaneously. Also, training performance and prediction accuracy may degrade due to the high dimensionality and sequence length, especially without suitable normalization.
To help resolve the issue, consider the following improvements:
1. Normalize the targets row-wise before training: this preserves the ratios among the target values. Since Ziel is a plain [180000x6] matrix (not a cell array), transpose it rather than calling cell2mat; and since R2014a has no implicit expansion, use bsxfun for the element-wise division.
T_mat = Ziel';                                    % 6 x N, one timestep per column
row_sums = sum(T_mat, 1);                         % 1 x N per-timestep sums
T_normalized = bsxfun(@rdivide, T_mat, row_sums); % ratios; each column sums to 1
T = mat2cell(T_normalized, size(T_normalized, 1), ones(1, size(T_normalized, 2)));
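After training on row-normalized targets, the closed-loop predictions are ratios and must be mapped back to absolute scale before their sums can be compared with the targets. A minimal sketch, assuming y2_ns (6 x N, as in the original loop) holds the normalized predictions and row_sums the true per-timestep sums; note that on genuinely unseen data the sums are unknown and would themselves have to be predicted (for example as an additional target):

```matlab
% Map normalized (ratio) predictions back to absolute scale.
% preparets drops the first max(delay) timesteps, so align the
% prediction columns with the tail of row_sums.
offset = numel(row_sums) - size(y2_ns, 2);      % timesteps lost to the delays
y2_rescaled = bsxfun(@times, y2_ns, row_sums(offset+1:end));
```

bsxfun is used instead of implicit expansion so the snippet also runs on R2014a.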
2. Use Bayesian regularization ("trainbr") instead of Levenberg-Marquardt ("trainlm"): this improves generalization and handles noisy data better.
trainFcn = 'trainbr';
3. Reduce the delay ranges for stability and efficiency: instead of "1:8", you can use the following
inputDelays = 1:3;
feedbackDelays = 1:3;
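Putting suggestions 1-3 together, here is a minimal end-to-end sketch on toy-sized random data (the array sizes are shrunk purely for illustration; substitute the real Input and Ziel matrices):

```matlab
% Toy-sized sketch: row-wise target normalization + trainbr + short delays.
Input = rand(500, 8);                        % stand-in for the real 180000x8 input
Ziel  = rand(500, 6);                        % stand-in for the real 180000x6 target

X = tonndata(Input, false, false);           % 1x500 cell array of 8x1 columns
T_mat = Ziel';                               % 6x500, one timestep per column
row_sums = sum(T_mat, 1);                    % 1x500 per-timestep sums
T_norm = bsxfun(@rdivide, T_mat, row_sums);  % ratios; each column sums to 1
T = mat2cell(T_norm, 6, ones(1, 500));       % back to nndata cell format

net = narxnet(1:3, 1:3, 10, 'open', 'trainbr');
[x, xi, ai, t] = preparets(net, X, {}, T);
[net, tr] = train(net, x, t, xi, ai);        % trainbr: Bayesian regularization
```

The data division and transfer-function settings from your original loop can be applied to this net unchanged before calling train.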
4. Evaluate the model with sum and ratio metrics: this gives a clearer measure of performance. Here y2_rescaled denotes the predictions mapped back to absolute scale; vecnorm is not available in R2014a, so the column norms are computed explicitly.
sum_error = mean(abs(sum(y2_rescaled, 1) - sum(T_mat, 1)));
norm_y = sqrt(sum(y2_rescaled.^2, 1));
norm_t = sqrt(sum(T_mat.^2, 1));
cos_sim = sum(y2_rescaled .* T_mat, 1) ./ (norm_y .* norm_t);
ratio_error = 1 - mean(cos_sim);
These changes help your NARX network learn both the correct magnitude (via sum recovery) and the internal structure (via ratio preservation). Integrate them into your existing loop and test various hidden layer sizes.
For further reference, see the narxnet and preparets documentation linked at the top of this thread.
I hope this helps!