neural network, narxnet, multi-step prediction

Dear all,
I have a question and hope to hear some advice from you.
Given a trained narxnet model, I would like to use it to predict the output corresponding to a new input, knowing ONLY the initial condition of the output. This differs from what I have seen so far, where one is given the past values of both input and output (I searched already; I hope I did not miss anything important).
My question is: is it correct to do as in the following? Please look at the part where I define T2 and make the prediction with closeloop. The outcome is depicted in the attached pdf and can also be obtained by running the code; you will see an imperfection during the first ten steps, where the prediction by narxnet differs from the actual output (a small check of that transient is sketched after the code). If it is not correct, can you please give me a hint about what to improve? Any advice is welcome!
The code is as follows (thanks to Greg for commenting; I hope it is clearer now):
clear all; clc;
[X, T]= simplenarx_dataset;
net=narxnet();
% set all weights and biases equal to zero
net = setwb(net,zeros(1,net.numWeightElements));
% Prepare the Data for Training and Simulation
[x,xi,ai,t] = preparets(net,X,{},T);
% Train the Network
[net,tr] = train(net,x,t,xi,ai);
% Test the Network
y = net(x,xi,ai);
performance = perform(net,t,y)
view(net)
% Closed-loop network: use this network to do multi-step prediction
netc = closeloop(net);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%QUESTION IS IN THE FOLLOWING PART
% PEOPLE USUALLY DO THIS, HOWEVER, T IS NOT KNOWN A PRIORI
% [xc,xic,aic,tc] = preparets(netc,X,{},T);
% IS IT CORRECT TO DO AS FOLLOWS???
% multi-step prediction knowing input and initial condition of output. The
% input is taken as the one used for training the network.
% The output T2{1} = T{1}, T2{i>1}=0;
T2=T;
for i=2:length(T)
T2{i}=0;
end
[xc,xic,aic,tc] = preparets(netc,X,{},T2);
yc = netc(xc,xic,aic);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%PLOT FIGURE
%%compare actual output vs. multi-step prediction knowing input and initial condition of output
figure();
plot(cell2mat(t),'-r','linewidth',2);
hold on;
plot(cell2mat(yc),'-.b','linewidth',1.5);
legend('actual output','multi-step prediction knowing input and initial condition of output');
xlabel('time step')
ylabel('output');
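To quantify that initial mismatch, here is a minimal sketch (assuming t and yc from the script above; the tolerance of 1e-2 is an arbitrary choice):
errAbs = abs(cell2mat(t) - cell2mat(yc));   % absolute closed-loop prediction error per step
tol = 1e-2;                                 % hypothetical tolerance, adjust as needed
kSettle = find(errAbs < tol, 1, 'first');   % first step at which the error drops below tol
disp(kSettle)                               % empty if the error never gets below tol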
  3 Comments
chu on 2 Dec 2014
Thanks Greg, I improved my question. Hope that it's better now.
Greg Heath on 20 Sep 2015 (edited 20 Sep 2015)
1. The RNG is not initialized, so you get different answers each time (a fix for points 1 and 2 is sketched below).
2. divideFcn is 'dividerand'. Therefore the training, validation, and test data are all nonuniformly spread along the whole timespan. This tends to knock the wind out of the sails of your input/IC demo.
3. What happens if only train OR only val OR only test inputs are used in the demo?
4. Have you tried this on tougher datasets like MAGLEV?
5. Regardless, interesting idea.
Hope this helps.
Greg
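A minimal sketch of the fixes for points 1 and 2, assuming X and T from simplenarx_dataset as in the question (the seed value 0 is an arbitrary choice):
rng(0)                              % fix the random number generator so runs are repeatable
net = narxnet;
net.divideFcn = 'divideblock';      % contiguous train/val/test blocks instead of 'dividerand'
[x, xi, ai, t] = preparets(net, X, {}, T);
[net, tr] = train(net, x, t, xi, ai);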


Accepted Answer

Greg Heath on 20 Sep 2015
SURPRISE!!! ... IT TURNS OUT THAT GOOD CL PREDICTION IS OBTAINABLE WITH JUST THE INPUT AND ZERO INITIAL CONDITIONS! THE ONLY DIFFERENCE IS THE TIME IT TAKES FOR THE ERROR TO BECOME NEGLIGIBLE.
close all, clear all, clc
[ X, T ] = simplenarx_dataset;
neto = narxnet;
[ Xo, Xoi, Aoi, To ] = preparets( neto, X, {},T );
to = cell2mat(To);
varto = var(to,1) % 0.099154
[ neto, tro, Yo, Eo, Xof, Aof ] = train( neto, Xo, To, Xoi, Aoi );
view(neto)
NMSEo = mse(Eo)/varto % 2.2159e-08
netc = closeloop(neto);
T2 = T; T2(1:end) = {0};
[ Xc, Xci, Aci, Tc ] = preparets( netc, X, {}, T2 );
whos Xc Xci Aci Tc
% Aci 2x2 624 cell
% Tc 1x98 11760 cell
% Xc 1x98 11760 cell
% Xci 1x2 240 cell
[ Yc, Xcf, Acf ] = netc( Xc, Xci, Aci );
yc = cell2mat(Yc);
nerrc = (to-yc)/sqrt(varto);
figure;
subplot(211), hold on;
plot( to, 'b', 'linewidth', 3);
plot( yc, 'r--', 'linewidth',2);
legend( 'TARGET', 'ZERO-IC PREDICTIONS' )
xlabel( 'TIME' )
ylabel( 'OUTPUT' );
title(' ZERO INITIAL CONDITION PREDICTIONS')
subplot(212)
plot(nerrc, 'k' ,'linewidth',3);
legend( ' ZERO-IC PREDICTION ERROR ' )
xlabel( ' TIME ' )
ylabel('NORMALIZED ERROR');
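For comparison with the zero-IC run above, a minimal sketch that keeps the first two true target values as feedback initial conditions instead of zeros (this assumes the default 1:2 feedback delays, so preparets takes T{1:2} as the delay states, and it reuses the workspace variables from the script above):
T3 = T;  T3(3:end) = {0};                       % keep T{1:2} as true initial conditions, zero the rest
[ Xc2, Xci2, Aci2 ] = preparets( netc, X, {}, T3 );
Yc2 = netc( Xc2, Xci2, Aci2 );
nerrc2 = ( to - cell2mat(Yc2) ) / sqrt(varto);  % normalized error with true initial conditions
max(abs(nerrc)), max(abs(nerrc2))               % worst-case normalized error: zero ICs vs true ICs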
Hope this helps.
Thank you for formally accepting my answer
Greg
  1 Comment
moncef soualhi on 2 Dec 2019
Hi,
When using T2 = T; T2(1:end) = {0};, the obtained results are not well fitted; the amplitudes are too small compared with the test targets.
Is there a way to explain why the network uses the testY targets to predict? It seems to make no sense.
Thanks
