## VAR Model Forecasting, Simulation, and Analysis

### VAR Model Forecasting

When a model's parameters are fully specified (known or estimated), you can examine the model's predictions. For information on creating VAR models, see Vector Autoregression (VAR) Model Creation. For information on estimating models, see VAR Model Estimation.

The main forecasting functions are forecast, simulate, and filter. These functions base their forecasts on a fully specified model object and initial data, and differ in how they treat the innovations process:

• forecast assumes zero-valued innovations. Therefore, forecast yields a deterministic forecast, conditional or otherwise.

• simulate assumes the multivariate innovations are jointly Gaussian distributed with covariance matrix Σ. simulate yields pseudorandom, Monte Carlo sample paths.

• filter requires paths of the innovations process. filter yields sample paths that depend deterministically on the specified innovations paths.

forecast is faster and requires less memory than generating many sample paths using simulate or filter. However, forecast is not as flexible as simulate and filter. For example, suppose you transform some time series before fitting a model, and want to undo the transformation when examining forecasts. The error bounds given by transforms of forecast error bounds are not valid bounds. In contrast, the error bounds given by the statistics of transformed simulations are valid.
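A toy example makes the pitfall concrete. Suppose the model works on the log scale and you back-transform with exp: the exponential of the Gaussian point forecast is not the mean of the exponentiated simulations. The NumPy sketch below is illustrative only (the values of `mu` and `sigma` are hypothetical, and this is not toolbox code):

```python
import numpy as np

# Hypothetical log-scale forecast mean and forecast-error standard deviation
rng = np.random.default_rng(0)
mu, sigma = 0.1, 0.5
paths = rng.normal(mu, sigma, size=100_000)  # simulated log-scale forecast paths

naive_point = np.exp(mu)            # exp of the deterministic forecast
valid_point = np.exp(paths).mean()  # mean of the transformed simulations
# valid_point converges to exp(mu + sigma**2 / 2), which exceeds exp(mu)
```

The same reasoning applies to error bounds: compute statistics after transforming the simulated paths, rather than transforming statistics of the untransformed forecast.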

#### How forecast and simulate Work

For unconditional forecasting, forecast generates two quantities:

• A deterministic forecast time series based on zero-valued innovations

• A time series of forecast mean square error matrices based on Σ, the innovations covariance matrix
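The deterministic part of an unconditional forecast is just the VAR(p) recursion iterated forward with all future innovations set to zero. The following NumPy sketch illustrates the idea for a model with no trend or exogenous terms; `var_forecast` and its arguments are hypothetical names, not toolbox code:

```python
import numpy as np

def var_forecast(Phi, c, presample, horizon):
    """Deterministic VAR(p) forecast: iterate y_t = c + sum_i Phi[i] @ y_{t-i}
    with all future innovations set to zero. Phi is a list of (k x k) lag
    coefficient matrices; presample holds the p most recent observations,
    oldest first."""
    p = len(Phi)
    history = [np.asarray(y, dtype=float) for y in presample[-p:]]
    forecasts = []
    for _ in range(horizon):
        y = c + sum(Phi[i] @ history[-1 - i] for i in range(p))
        history.append(y)
        forecasts.append(y)
    return np.array(forecasts)
```

For a stable model, these forecasts decay toward the unconditional mean as the horizon grows.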

For conditional forecasting:

• forecast requires an array of future response data that contains a mix of missing (NaN) and known values. forecast generates forecasts for the missing values conditional on the known values.

• The forecasts generated by forecast are also deterministic, but the mean square error matrices are based on Σ and the known response values in the forecast horizon.

• forecast uses the Kalman filter to generate forecasts. Specifically:

1. forecast represents the VAR model as a state-space model (ssm model object) without observation error.

2. forecast filters the forecast data through the state-space model. That is, at period t in the forecast horizon, any unknown response is

$$\hat{y}_t = \hat{\Phi}_1 \hat{y}_{t-1} + \dots + \hat{\Phi}_p \hat{y}_{t-p} + \hat{c} + \hat{\delta} t + \hat{\beta} x_t,$$

where $\hat{y}_s$, $s < t$, is the filtered estimate of $y$ from period $s$ in the forecast horizon. forecast uses presample values for periods before the forecast horizon.

For more details, see filter and [1], pp. 612 and 615.

For either type of forecast, to initialize the VAR(p) model in the forecast horizon, forecast requires p presample observations. Optionally, you can specify more than one path of presample data. If you specify multiple paths, forecast returns a three-dimensional array of forecasted responses, in which each page corresponds to a path of presample values.

For unconditional simulation, simulate:

1. Generates random paths of multivariate Gaussian innovations with a mean of zero and a covariance of Σ.

2. Filters the random paths of innovations through the model.
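These two steps can be sketched together in NumPy: draw Gaussian innovations with covariance Σ, then push them through the autoregression. As before, `var_simulate` and its signature are hypothetical names for illustration, not the toolbox API:

```python
import numpy as np

def var_simulate(Phi, c, Sigma, presample, horizon, npaths, seed=0):
    """Monte Carlo simulation of a VAR(p): draw innovations ~ N(0, Sigma)
    and filter them through the autoregression. Returns an array of shape
    (horizon, k, npaths), one page per simulated path."""
    rng = np.random.default_rng(seed)
    p, k = len(Phi), len(c)
    Y = np.empty((horizon, k, npaths))
    for j in range(npaths):
        history = [np.asarray(y, dtype=float) for y in presample[-p:]]
        eps = rng.multivariate_normal(np.zeros(k), Sigma, size=horizon)
        for t in range(horizon):
            y = c + sum(Phi[i] @ history[-1 - i] for i in range(p)) + eps[t]
            history.append(y)
            Y[t, :, j] = y
    return Y
```

With Σ = 0, every simulated path collapses to the deterministic zero-innovations forecast, which connects simulation back to forecast.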

For conditional simulation:

• simulate, like forecast, requires an array of future response data that contains a mix of missing and known values, and generates values for the missing responses.

• simulate performs conditional simulation using this process. At each time t in the forecast horizon:

1. simulate infers (inverse filters) the innovations (E(t,:)) from the known future responses.

2. For missing future innovations, simulate:

1. Draws Z1, a vector of random standard Gaussian disturbances, conditional on the known elements of E(t,:).

2. Scales Z1 by the lower triangular Cholesky factor of the conditional covariance matrix. That is, Z2 = L*Z1, where L = chol(Covariance,'lower') and Covariance is the covariance of the conditional Gaussian distribution.

3. Imputes Z2 in place of the corresponding missing values in E(t,:).

3. For the missing values in the future response data, simulate filters the corresponding random innovations through the VAR model Mdl.

For either type of simulation:

• simulate does not require presample observations. For details on the default values of the presample data, see 'Y0'.

• To carry out inference, generate thousands of response paths, and then estimate sample statistics from the generated paths at each time in the forecast horizon. For example, suppose Y is a three-dimensional array of forecasted paths. The Monte Carlo point and interval estimates of the forecast at time t in the forecast horizon are

```matlab
MCPointEst = mean(Y(t,:,:),3);
MCPointInterval = quantile(Y(t,:,:),[0.025 0.975],3);
```

That is, the Monte Carlo point estimate is the mean across pages and the Monte Carlo interval estimate is composed of the 2.5th and the 97.5th percentiles computed across paths. Observe that Monte Carlo estimates are subject to Monte Carlo error, and so estimates differ each time you run the analysis under the same conditions, but using a different random number seed.
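In NumPy terms, the same computations reduce over the last axis of the (numperiods × k × npaths) array; the stand-in data below is hypothetical and the snippet is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
horizon, k, npaths = 10, 2, 5000
Y = rng.normal(size=(horizon, k, npaths))  # stand-in for simulated forecast paths

t = 4
mc_point = Y[t].mean(axis=-1)                              # mean across paths
mc_interval = np.quantile(Y[t], [0.025, 0.975], axis=-1)   # 95% MC interval
```

For standard normal stand-in data, the point estimate hovers near 0 and the interval near ±1.96, up to Monte Carlo error.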

### Data Scaling

If you scaled any time series before fitting a model, you can unscale the resulting time series to understand its predictions more easily.

• If you scaled a series with log, transform predictions of the corresponding model with exp.

• If you scaled a series with diff(log) or, equivalently, price2ret, transform predictions of the corresponding model with cumsum(exp), or, equivalently, ret2price. cumsum is the inverse of diff; it calculates cumulative sums. As in integration, you must choose an appropriate additive constant for the cumulative sum. For example, take the log of the final entry in the corresponding data series, and use it as the first term in the series before applying cumsum.
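The diff(log) round trip can be sketched in NumPy (illustrative, with made-up prices; price2ret and ret2price are the toolbox equivalents of these two lines):

```python
import numpy as np

prices = np.array([100.0, 104.0, 102.0, 108.0])   # hypothetical price series
returns = np.diff(np.log(prices))                  # the diff(log) transform

# Invert on historical data: seed the cumulative sum with log of the first price
recovered = np.exp(np.log(prices[0]) + np.cumsum(returns))   # equals prices[1:]

# Invert on forecasts: seed with the log of the final observed price
future_returns = np.array([0.01, -0.02])           # e.g., forecasted log returns
future_prices = np.exp(np.log(prices[-1]) + np.cumsum(future_returns))
```

The choice of additive constant is the only free parameter: for forecasts, the log of the last in-sample price anchors the reconstructed level series.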

### Calculating Impulse Responses

You can examine the impulse responses of models with armairf. An impulse response is the deterministic response of a time series model to an innovations process that has the value of one standard deviation in one component at the initial time, and zeros in all other components and times. The main components of the impulse response function are the dynamic multipliers, that is, the coefficients of the VMA representation of the VAR model. For more details, see Impulse Response Function.

Given a fully specified varm model, you must supply the autoregression coefficients to armairf. By default, armairf sends a unit shock through the system, which results in the forecast error impulse response. You can optionally supply the innovations covariance matrix and choose whether to generate generalized or orthogonalized impulse responses. Generalized impulse responses amount to filtering a shock of one standard error of each innovation through the VAR model. Orthogonalized impulse responses scale the dynamic multipliers by the lower triangular Cholesky factor of the innovations covariance. For more details, see [2].

For an example, see Generate VAR Model Impulse Responses.
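For a VAR(1), the dynamic multipliers are simply powers of the coefficient matrix, so the unit-shock and orthogonalized responses are easy to compute by hand. The NumPy sketch below illustrates the scaling by the lower Cholesky factor; `var1_irf` is a hypothetical helper, not the armairf implementation:

```python
import numpy as np

def var1_irf(Phi, Sigma, numobs):
    """Impulse responses of a VAR(1). The dynamic multipliers are
    Psi_i = Phi**i; orthogonalized responses scale each multiplier by the
    lower Cholesky factor of the innovations covariance Sigma."""
    k = Phi.shape[0]
    L = np.linalg.cholesky(Sigma)
    Psi = np.eye(k)
    plain, orth = [], []
    for _ in range(numobs):
        plain.append(Psi.copy())   # forecast error (unit-shock) IRF
        orth.append(Psi @ L)       # orthogonalized IRF
        Psi = Phi @ Psi
    return np.array(plain), np.array(orth)
```

When Σ is the identity, the Cholesky factor is the identity too, and the orthogonalized responses coincide with the unit-shock responses.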

## References

[1] Lütkepohl, H. New Introduction to Multiple Time Series Analysis. Berlin: Springer, 2005.

[2] Pesaran, H. H., and Y. Shin. “Generalized Impulse Response Analysis in Linear Multivariate Models.” Economics Letters. Vol. 58, 1998, pp. 17–29.