[TS] time series — Introduction to time-series commands
Description
The Time-Series Reference Manual organizes the commands alphabetically, which makes it easy
to find individual command entries if you know the name of the command. This overview organizes
and presents the commands conceptually, that is, according to the similarities in the functions that
they perform.
The commands listed under the heading Data management tools and time-series operators help
you prepare your data for further analysis. The commands listed under the heading Univariate time
series are grouped together because they are either estimators or filters designed for univariate time
series or pre-estimation or post-estimation commands that are conceptually related to one or more
univariate time-series estimators. The commands listed under the heading Multivariate time series are
similarly grouped together because they are either estimators designed for use with multivariate time
series or pre-estimation or post-estimation commands conceptually related to one or more multivariate
time-series estimators. Within these three broad categories, similar commands have been grouped
together.
Univariate time series
Estimators
arima Autoregressive integrated moving-average models
arch Autoregressive conditional heteroskedasticity (ARCH) family of
estimators
newey Regression with Newey–West standard errors
prais Prais–Winsten regression and Cochrane–Orcutt regression
Diagnostic tools
corrgram Tabulate and graph autocorrelations
xcorr Cross-correlogram for bivariate time series
cumsp Cumulative spectral distribution
pergram Periodogram
dfgls DF-GLS unit-root test
dfuller Augmented Dickey–Fuller unit-root test
pperron Phillips–Perron unit-root test
dwstat Durbin–Watson d statistic
durbina Durbin’s alternative test for serial correlation
bgodfrey Breusch–Godfrey test for higher-order serial correlation
archlm Engle’s LM test for the presence of autoregressive conditional
heteroskedasticity
wntestb Bartlett’s periodogram-based test for white noise
wntestq Portmanteau (Q) test for white noise
Multivariate time series
Estimators
var Vector autoregression models
svar Structural vector autoregression models
varbasic Fit a simple VAR and graph impulse–response functions
vec Vector error-correction models
Diagnostic tools
vargranger Perform pairwise Granger causality tests after var or svar
varlmar Obtain LM statistics for residual autocorrelation after var or svar
varnorm Test for normally distributed disturbances after var or svar
varsoc Obtain lag-order selection statistics for VARs and VECMs
varstable Check the stability condition of VAR or SVAR estimates
varwle Obtain Wald lag-exclusion statistics after var or svar
veclmar Obtain LM statistics for residual autocorrelation after vec
vecnorm Test for normally distributed disturbances after vec
vecrank Estimate the cointegrating rank using Johansen’s framework
vecstable Check the stability condition of VECM estimates
Remarks
Univariate time series
Estimators
The four univariate time-series estimators currently available in Stata are arima, arch, newey,
and prais. The latter two, prais and newey, are really just extensions to ordinary linear regression.
When you fit a linear regression on time-series data via ordinary least squares, if the disturbances
are autocorrelated, the parameter estimates are usually consistent, but the estimated standard errors
tend to be biased downward. A number of estimators have been developed to deal with this problem.
One strategy is to use OLS for estimating the regression parameters and use a different estimator
for the variances, one that is consistent in the presence of autocorrelated disturbances, such as the
Newey–West estimator that is implemented in newey. An alternative strategy is to attempt to model
the dynamics of the disturbances. The estimators found in prais, arima, and arch are based on
such a strategy.
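As a minimal sketch of the first strategy (the variable names y, x, and t are hypothetical; substitute your own), a Newey–West fit might look like this:

```stata
* Declare the time variable so time-series operators work
tsset t

* OLS point estimates with Newey-West standard errors,
* allowing autocorrelation of the disturbances up to 4 lags
newey y x, lag(4)
```

The lag() option, which is required, sets the maximum lag considered in the variance calculation; it is typically chosen with reference to the sample size and the persistence of the disturbances.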
prais implements two such estimators: the Prais–Winsten and the Cochrane–Orcutt GLS estimators.
These estimators are generalized least-squares estimators, but they are fairly restrictive in that they
permit only first-order autocorrelation in the disturbances. While they have certain pedagogical and
historical value, they are somewhat obsolete. Faster computers with more memory have made it
possible to implement full-information maximum-likelihood (FIML) estimators, such as Stata's
arima command. These estimators permit much greater flexibility when modeling the disturbances
and are more efficient.
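Both estimators are invoked through one command (again with hypothetical variables y and x):

```stata
* Prais-Winsten GLS estimates, allowing AR(1) disturbances
prais y x

* Cochrane-Orcutt variant, which drops the first observation
prais y x, corc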
arima provides the means to fit linear models with autoregressive moving-average (ARMA)
disturbances, or in the absence of linear predictors, autoregressive integrated moving-average (ARIMA)
models. This means that, whether you think your data are best represented as a distributed-lag model, a
transfer-function model, or a stochastic difference equation, or you simply wish to apply a Box–Jenkins
type filter to your data, the model can be fit using arima. arch, a conditional maximum likelihood
estimator, has similar modeling capabilities for the mean of the time series but can also model
autoregressive conditional heteroskedasticity in the disturbances with a wide variety of specifications
for the variance equation.
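A few representative specifications, with hypothetical variables y and x:

```stata
* Linear regression of y on x with ARMA(1,1) disturbances
arima y x, ar(1) ma(1)

* Pure ARIMA(2,1,1) model for y (no linear predictors)
arima y, arima(2,1,1)

* Regression of y on x with GARCH(1,1) conditional variance
arch y x, arch(1) garch(1)
```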
Diagnostic tools
Stata’s time-series commands also include a number of pre-estimation and post-estimation diagnostic
commands. corrgram estimates the autocorrelation function and partial autocorrelation function of
a univariate time series, as well as Q statistics. These functions and statistics are often used to
determine the appropriate model specification before fitting ARIMA models. corrgram can also be
used with wntestb and wntestq to examine the residuals after fitting a model for evidence of model
misspecification. Stata’s time-series commands also include the commands pergram and cumsp,
which provide the log-standardized periodogram and the cumulative sample spectral distribution,
respectively, for time-series analysts who prefer to estimate in the frequency domain rather than the
time domain.
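A sketch of this workflow, before and after fitting a model (variable name y and the residual name res are hypothetical):

```stata
* Autocorrelations, partial autocorrelations, and Q statistics
corrgram y, lags(24)

* After fitting a model, test the residuals for white noise
arima y, arima(1,0,1)
predict double res, residuals
wntestq res
wntestb res

* Frequency-domain summaries
pergram y
cumsp y
```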
xcorr estimates the cross-correlogram for bivariate time series and can similarly be used both for
pre-estimation and post-estimation. For example, the cross-correlogram can be used before fitting a
transfer-function model to produce initial estimates of the impulse–response function. This estimate
can then be used to determine the optimal lag length of the input series to include in the model
specification. It can also be used as a post-estimation tool after fitting a transfer function. The cross-
correlogram between the residual from a transfer-function model and the pre-whitened input series
of the model can be examined for evidence of model misspecification.
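For example, with a hypothetical input series x and output series y:

```stata
* Cross-correlogram between input x and output y
xcorr x y, lags(10)

* Tabular rather than graphical output
xcorr x y, lags(10) table
```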
When fitting ARMA or ARIMA models, the dependent variable being modeled must be covariance-
stationary (ARMA models), or the order of integration must be known (ARIMA models). Stata has three
commands that can test for the presence of a unit root in a time-series variable: dfuller performs
the augmented Dickey–Fuller test, pperron performs the Phillips–Perron test, and dfgls performs
a modified Dickey–Fuller test.
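For a hypothetical series y, the three tests are run as follows:

```stata
* Augmented Dickey-Fuller test with 4 lags and a trend term
dfuller y, lags(4) trend

* Phillips-Perron test
pperron y

* DF-GLS test, reporting results for up to 8 lags
dfgls y, maxlag(8)
```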
The remaining diagnostic tools for univariate time series are for use after fitting a linear model via
OLS with Stata’s regress command. They are documented collectively in [TS] regression diagnostics.
They include dwstat, durbina, bgodfrey, and archlm. dwstat computes the Durbin–Watson d
statistic to test for the presence of first-order autocorrelation in the OLS residuals. durbina likewise
tests for the presence of autocorrelation in the residuals but is more general and easier to use
than the Durbin–Watson test. With durbina, you can test for higher orders of autocorrelation;
the assumption that the covariates in the model are strictly exogenous is relaxed; and there is
no need to consult tables to compute rejection regions, as you must with
the Durbin–Watson test. bgodfrey computes the Breusch–Godfrey test for autocorrelation in the
residuals, and while the computations are different, the test in bgodfrey is asymptotically equivalent
to the test in durbina. Finally, archlm performs Engle’s LM test for the presence of autoregressive
conditional heteroskedasticity.
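A sketch of these residual diagnostics after OLS (the variables y and x are hypothetical; see the individual command entries for the full option syntax):

```stata
* Fit the regression by OLS
regress y x

* Durbin-Watson d statistic for first-order autocorrelation
dwstat

* Durbin's alternative test and the Breusch-Godfrey test,
* here allowing autocorrelation up to order 3
durbina, lags(3)
bgodfrey, lags(3)

* Engle's LM test for ARCH effects in the residuals
archlm, lags(1)
```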
Multivariate time series
Estimators
Stata provides commands for fitting the most widely applied multivariate time-series models. var
and svar fit vector autoregressions and structural vector autoregressions to stationary data. vec fits
cointegrating vector error-correction models.
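For example, with two hypothetical series y1 and y2:

```stata
* Two-lag VAR
var y1 y2, lags(1/2)

* Quick VAR plus impulse-response graphs in one step
varbasic y1 y2, lags(1/2)

* VECM with one cointegrating relation and two lags
vec y1 y2, rank(1) lags(2)
```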
Diagnostic tools
Before fitting a multivariate time-series model, you must specify the number of lags to include.
varsoc produces statistics for determining the order of a VAR, SVAR, or VECM.
Several post-estimation commands perform the most common specification analysis on a previously
fitted VAR or SVAR. You can use varlmar to check for serial correlation in the residuals, varnorm
to test the null hypothesis that the disturbances come from a multivariate normal distribution, and
varstable to see if the fitted VAR or SVAR is stable. Two common types of inference about VAR
models are whether one variable Granger-causes another and whether a set of lags can be excluded
from the model. vargranger reports Wald tests of Granger causation, and varwle reports Wald lag
exclusion tests.
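A typical sequence, using the hypothetical series y1 and y2:

```stata
* Lag-order selection statistics before fitting
varsoc y1 y2, maxlag(8)

* Fit the VAR, then run the specification checks
var y1 y2, lags(1/2)
varlmar        // residual autocorrelation
varnorm        // normality of the disturbances
varstable      // stability (eigenvalue) condition
vargranger     // Granger causality tests
varwle         // Wald lag-exclusion tests
```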
Similarly, several post-estimation commands perform the most common specification analysis on a
previously fitted VECM. You can use veclmar to check for serial correlation in the residuals, vecnorm
to test the null hypothesis that the disturbances come from a multivariate normal distribution, and
vecstable to analyze the stability of the previously fitted VECM.
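The corresponding VECM workflow, again with hypothetical series y1 and y2:

```stata
* Choose the cointegrating rank, then fit and check the VECM
vecrank y1 y2, lags(2)
vec y1 y2, rank(1) lags(2)
veclmar
vecnorm
vecstable
```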
VARs and VECMs are frequently fit to produce baseline forecasts. fcast produces dynamic forecasts
from previously fitted VARs and VECMs.
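For instance, after fitting a VAR in the hypothetical series y1 and y2:

```stata
var y1 y2, lags(1/2)

* Eight-step-ahead dynamic forecasts, stored with prefix f_
fcast compute f_, step(8)
fcast graph f_y1 f_y2
```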
Many researchers fit VARs, SVARs, and VECMs because they want to analyze how unexpected
shocks affect the dynamic paths of the variables. Stata has a suite of irf commands for estimating
IRFs and interpreting, presenting, and managing these estimates; see [TS] irf.
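A brief sketch (the VAR specification, the result-set name order1, and the file name myirfs are hypothetical):

```stata
var y1 y2, lags(1/2)

* Create and save IRF estimates, then graph the
* orthogonalized impulse-response function
irf create order1, set(myirfs) step(10)
irf graph oirf, impulse(y1) response(y2)
```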
References
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2d ed. New York: Springer.
Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101–115.
Also See
Complementary: [U] 1.3 What’s new
Background: [R] intro