Time delay neural network
timedelaynet(inputDelays,hiddenSizes,trainFcn) takes a row vector of increasing zero or positive input delays (inputDelays), a row vector of one or more hidden layer sizes (hiddenSizes), and a training function name (trainFcn), and returns a time delay neural network.
Time delay networks are similar to feedforward networks, except that the input weight has a tap delay line associated with it. This allows the network to have a finite dynamic response to time series input data. This network is also similar to the distributed delay neural network (distdelaynet), which has delays on the layer weights in addition to the input weight.
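As a quick way to see the tap delay line (a minimal sketch, assuming the standard network object layout, in which the input weight delays are stored in net.inputWeights{1,1}.delays):

% Construct a time delay network with input delays 1:2 and one hidden
% layer of 10 neurons, then inspect the tap delay line on the input weight.
net = timedelaynet(1:2,10);
net.inputWeights{1,1}.delays   % displays [1 2]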
Partition the training set. Use Xnew to do prediction in closed loop mode later.
[X,T] = simpleseries_dataset;
Xnew = X(81:100);
X = X(1:80);
T = T(1:80);
Train a time delay network, and simulate it on the first 80 observations.
net = timedelaynet(1:2,10);
[Xs,Xi,Ai,Ts] = preparets(net,X,T);
net = train(net,Xs,Ts,Xi,Ai);
view(net)
Calculate the network performance.
[Y,Xf,Af] = net(Xs,Xi,Ai);
perf = perform(net,Ts,Y);
Run the prediction for 20 timesteps ahead in closed loop mode.
[netc,Xic,Aic] = closeloop(net,Xf,Af);
view(netc)
y2 = netc(Xnew,Xic,Aic);
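As an optional follow-up (not part of the original example), you can plot the open-loop output next to the closed-loop prediction; this sketch assumes the cell-array outputs convert with cell2mat:

% Plot the open-loop output on the training window, then append the
% 20-step closed-loop prediction after it on the time axis.
plot(cell2mat(Y))
hold on
plot(numel(Y)+(1:numel(y2)),cell2mat(y2))
hold off
legend('Open-loop output','Closed-loop prediction')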
inputDelays — Input delays
[1:2] (default) | row vector
Zero or positive input delays, specified as an increasing row vector.
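For instance (an illustrative call, not from the original page), a delay vector that starts at zero feeds the current input to the network alongside the delayed copies:

% Delays 0:4 pass the current input plus the previous four timesteps.
net = timedelaynet(0:4,10);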
hiddenSizes — Hidden sizes
10 (default) | row vector
Sizes of the hidden layers, specified as a row vector of one or more elements.
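For instance (again illustrative), a two-element row vector creates two hidden layers:

% Two hidden layers: 10 neurons in the first, 5 in the second.
net = timedelaynet(1:2,[10 5]);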
trainFcn — Training function name
Training function name, specified as one of the following.
'trainscg' (Scaled Conjugate Gradient)
'traincgb' (Conjugate Gradient with Powell/Beale Restarts)
'traincgf' (Fletcher-Powell Conjugate Gradient)
'traincgp' (Polak-Ribiére Conjugate Gradient)
'trainoss' (One Step Secant)
'traingdx' (Variable Learning Rate Gradient Descent)
'traingdm' (Gradient Descent with Momentum)
Example: You can specify the variable learning rate gradient descent algorithm as the training function as follows: 'traingdx'
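In a full constructor call (a sketch consistent with the syntax above):

% Time delay network trained with variable learning rate gradient descent.
net = timedelaynet(1:2,10,'traingdx');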
For more information on the training functions, see Train and Apply Multilayer Shallow Neural Networks and Choose a Multilayer Neural Network Training Function.