Understanding Simple NARX Example
JMP Phillips on 13 Mar 2015
I am doing the NARX neural network using the simplenarx_dataset.
Supposedly it is to predict y(t) based on y(t-1), x(t-1), etc.,
but after training open loop I find that the output y(t) runs two time steps ahead of the target data, so I think it is predicting y(t+2). See the figure below: blue - net output (y), red - target data (T).
The code is:
load('simplenarx_dataset');
X = simplenarxInputs;
T = simplenarxTargets;
trainFcn = 'trainlm';  % Levenberg-Marquardt
% Create a Nonlinear Autoregressive Network with External Input
inputDelays = 1:2;
feedbackDelays = 1:2;
hiddenLayerSize = 10;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize,'open',trainFcn);
% Choose Input and Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
% Customize input parameters at: net.inputs{i}.processParam
% Customize output parameters at: net.outputs{i}.processParam
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[x,xi,ai,t] = preparets(net,X,{},T);
% Setup Division of Data for Training, Validation, Testing
% The function DIVIDERAND randomly assigns target values to training,
% validation and test sets during training.
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand';  % Divide data randomly
% The property DIVIDEMODE set to 'value' means that every individual
% target value is assigned to the training, validation or test set.
% For a list of data division modes type: help nntype_data_division_mode
net.divideMode = 'value';  % Divide up every value
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
% Customize performance parameters at: net.performParam
net.performFcn = 'mse';  % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
% Customize plot parameters at: net.plotParam
net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
  'ploterrcorr', 'plotinerrcorr'};
% Train the Network
[net,tr] = train(net,x,t,xi,ai);
% Test the Network
y = net(x,xi,ai);
e = gsubtract(t,y);
performance = perform(net,t,y)
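The plot command that produced the figure is not shown in the post. A hypothetical reconstruction (my assumption, not the original code) that would produce the described two-step offset is plotting the shifted output y against the unshifted targets T:

% Hypothetical reconstruction (not in the original post) of the misaligned plot:
% y has N-2 timesteps (preparets removed two), T still has all N, so plotting
% both from index 1 makes y appear to lead the target by two steps
figure, hold on
plot( cell2mat(y), 'b' )
plot( cell2mat(T), 'r' )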

Accepted Answer
Greg Heath on 13 Mar 2015
        1. The script obtained from nntool can be simplified by deleting all redundant statements that explicitly assign default values.
2. It is useful to use uppercase for cells and lowercase for doubles.
3. It is also useful to use the subscripts 'o' and 'c' for open and closed loop, respectively.
The code you have posted calculates correctly. However, I see no plot command, and that is where your mistake occurs. Remember that the outputs of preparets are shifted from the original variables: with delays of 1:2, the first two timesteps are used to fill the delay states, so the shifted series start at t = 3.
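To make the shift concrete, a minimal check (my illustration, not part of the original answer):

 [ X, T ]         = simplenarx_dataset;
 net              = narxnet( 1:2, 1:2, 10 );
 [ x, xi, ai, t ] = preparets( net, X, {}, T );
 isequal( t{1}, T{3} )  % true: the first shifted target is the original T{3}
 numel(t)               % N-2 timesteps remain after filling the delay states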
Taking my advice into account yields
% I am doing the narx neural network using the simplenarx_dataset.
% Supposedly it is to predict y(t) based on y(t-1),x(t-1)... etc
No.
Although the feedback delays must be positive (FD > 0), the input delays need only be NONNEGATIVE (ID >= 0).
Therefore, in general, it estimates y(t) based on x(t), x(t-1), y(t-1)... etc
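For example, an input delay vector that includes zero lets the network use the current input x(t) (an illustrative sketch, not from the original answer):

 net = narxnet( 0:1, 1:2, 10 );  % ID = 0:1 uses x(t) and x(t-1); FD = 1:2 uses y(t-1) and y(t-2)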
% but after training open loop I find that output y(t) is two time steps
% before the target data... so I think it is predicting y(t+2). see below
% figure: blue - net output (y) red - target data (T)
It's all relative, i.e., it depends on whether YOU define t = 0 at the input or at the output.
Changing notation and using implied defaults, your code can be simplified to
 Lower case for doubles
 Upper case for cells 
 Subscript "o" for "o"pen loop 
 Subscript "c" for "c"losed loop
 close all, clear all, clc
 [ X, T ]              = simplenarx_dataset;
 N                     = length(T)
 neto                  = narxnet;
 [ Xo, Xoi, Aoi, To ]  = preparets( neto, X, {}, T );
 to = cell2mat( To );
 MSE00o                = mean(var(to',1)) % Normalization Reference
 rng('default')                         % Added for reproducibility
 [ neto, tro, Yo, Eo, Xof, Aof ] = train( neto, Xo, To, Xoi, Aoi );
 % [ Yo Xof Aof ] = net(Xo,Xoi,Aoi); Eo  = gsubtract(To,Yo);
 NMSEo = mse(Eo)/MSE00o 
 R2o   = 1 - NMSEo           % Rsquared (see Wikipedia )
 yo = cell2mat(Yo);
 figure(1), hold on
 plot( 3:N, to, 'LineWidth', 2 )       % targets start at t = 3 after the shift
 plot( 3:N, yo, 'ro', 'LineWidth', 2 ) % outputs align with the shifted targets
 legend( ' TARGET ', ' OUTPUT ' )
 title( ' NARXNET EXAMPLE ' )
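Greg's conventions reserve the subscript 'c' for closed loop. As a follow-up sketch continuing from the variables above (my assumption of the usual next step, not part of the original answer), the trained open-loop net can be converted with CLOSELOOP for multi-step prediction:

 netc                 = closeloop( neto );          % convert open-loop net to closed loop
 [ Xc, Xci, Aci, Tc ] = preparets( netc, X, {}, T );
 Yc                   = netc( Xc, Xci, Aci );       % multi-step (recursive) predictions
 Ec                   = gsubtract( Tc, Yc );
 NMSEc                = mse(Ec)/MSE00o              % closed-loop normalized MSE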
Hope this helps.
Thank you for formally accepting my answer.
Greg