L4_Modeling_Cycles
Semester 1 AY 2024/2025
The Cycle Component
E(Y_{t+h} | Ω_t) = T_t + S_t + C_t

- Trend
- Seasonal
- Cycle
• In this lecture, we discuss how we think about the last (and most
  complicated) component: the cycle.
• "Cycle" refers to whatever persistent dynamics remain after accounting
  for trend and seasonality.
The Time Series Processes
Mean and Variance Stationarity
• A time series Y_t is mean stationary if

  E(Y_t) = µ, ∀t.

  - Counter-example: a time series with a linear trend is not mean
    stationary since its mean depends on time.
• A time series Y_t is variance stationary if

  Var(Y_t) = σ², ∀t.

  - Counter-example: a series whose variance trends (increases) with time
    is not variance stationary.
• Thus, we assume C_t is both mean and variance stationary.
Autocovariance
• The autocovariance at displacement k is

  γ(k) = Cov(Y_t, Y_{t−k}) = E[(Y_t − µ)(Y_{t−k} − µ)].
Autocorrelation
• The autocorrelation at displacement k scales the autocovariance by the
  variance:

  ρ(k) = γ(k)/γ(0),

  so that ρ(0) = 1 and |ρ(k)| ≤ 1.
Covariance Stationarity
Covariance Stationarity: Summary
If Y_t is covariance stationary, then
• E(Y_t) = µ for all t;
• Var(Y_t) = γ(0) < ∞ for all t;
• Cov(Y_t, Y_{t−k}) = γ(k) depends only on the displacement k, not on t.
Cov. Stationary Process Example: US SA Unemp. Rate 2/4
Figure 2: Monthly US Unemployment Rate vs. Its 4 Lags
Cov. Stationary Process Example: US SA Unemp. Rate 3/4
Autocorrelations
Cov. Stationary Process Example: US SA Unemp. Rate 4/4
Autocorrelation Plot (Correlogram)
What is "Strong" Stationarity Then?
• Strong (strict) stationarity requires the entire joint distribution of
  (Y_t, Y_{t+1}, . . . , Y_{t+k}) to be invariant to shifts in t, for any k.
• Covariance stationarity restricts only the first two moments, so it is a
  weaker requirement.
White Noise
• A (zero-mean) white noise process has zero autocorrelations:
  ρ(k) = 0 for k > 0.
  - For example,

    Y_t = ε_t,  ε_t ∼ (0, σ²)

    where ε_t is uncorrelated over time, i.e., Cov(ε_t, ε_s) = 0 for any
    t ≠ s. We can also write: Y_t ∼ WN(0, σ²).
• Since a WN process is serially uncorrelated, it is linearly
  unforecastable.
• Gaussian white noise is an important special case:

  ε_t ∼ N(0, σ²).
Gaussian White Noise Simulation
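• A minimal sketch of how such a simulation can be produced in Stata (the
  seed, sample size, and variable names are illustrative, not from the
  slides):

  * Simulate T = 200 draws of Gaussian white noise
  clear
  set obs 200
  set seed 12345                  // illustrative seed
  generate t = _n
  tsset t
  generate y = rnormal(0, 1)      // Y_t ~ iid N(0, 1)
  tsline y                        // series fluctuates around zero
  ac y, lags(20)                  // sample ACF should lie inside the bands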
WN Process Example: S&P Stock Return
Ergodicity 1/4
• We think of the time series expectation as an ensemble
average:
  E(Y_t) = plim_{I→∞} (1/I) Σ_{i=1}^I Y_t^{(i)},

  where Y_t^{(i)} is the observation at date t from sequence i.
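• A minimal sketch of the ensemble idea in Stata (purely illustrative
  numbers: I = 1,000 sequences, true mean 2):

  * Cross-sequence mean at a fixed date t approximates E(Y_t)
  clear
  set obs 1000
  set seed 99
  generate y_t = rnormal(2, 1)   // draws of Y_t across sequences; E(Y_t) = 2
  summarize y_t                  // sample mean should be close to 2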
Ergodicity 2/4
• In practice we observe only one realization of the series. Ergodicity is
  the property that the time average (1/T) Σ_{t=1}^T Y_t converges to the
  ensemble average E(Y_t) as T → ∞, so one long realization is enough for
  estimation.
Ergodicity 3/4
Ergodicity 4/4
Unemployment Insurance Claims: SA vs. NSA
Autocorrelation with Geometric Decay
US SA Unemployment Rate
Negative Autocorrelation
Change in SA Unemployment Insurance Claims
Autocorrelation with Slow Decay
S&P 500 Absolute Returns Example
Estimation: Mean and Autocovariance
• Sample mean:

  Ȳ = (1/T) Σ_{t=1}^T Y_t

• Sample autocovariance at displacement k:

  γ̂(k) = (1/T) Σ_{t=k+1}^T (Y_t − Ȳ)(Y_{t−k} − Ȳ)
Estimation: Autocorrelations
  ρ̂(k) = γ̂(k)/γ̂(0)
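• As a rough check (assuming a tsset series named y), the lag-1 sample
  autocorrelation is close to the simple correlation between y and its
  first lag:

  corrgram y, lags(1)   // Stata's sample autocorrelation rho-hat(1)
  correlate y L.y       // approximately the same for large T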
Confidence Bands for Autocorrelations 1/2
• If Y_t is white noise, then for large T the sample autocorrelations are
  approximately ρ̂(k) ∼ N(0, 1/T).
• A (pointwise) 95% interval is therefore roughly ±2/√T: the Bartlett
  bands.
Confidence Bands for Autocorrelations 2/2
• If the sample autocorrelations all fall within [−2/√T, 2/√T],
  then Y_t is likely white noise.
• Otherwise, examine the Bartlett bands as an approximation.
  - It is important to note that the interval is pointwise, i.e.,
    constructed for each individual ρ̂(k) estimate separately (not a
    joint test).
• Stata reports Bartlett bands as a shaded region. The interpretation is:
  if an estimate falls outside the shaded region, it is significantly
  different from 0.
Joint Tests for Autocorrelations 1/2
• Often we are interested in knowing whether all autocorrelations up to
  some displacement m are jointly zero.
• Recall that ρ̂(k) ∼ N(0, 1/T). Thus, T ρ̂²(k) ∼ χ²(1).
• Also, it can be shown that the autocorrelations at various displacements
  are approximately independent of one another.
• Box–Pierce Q-statistic:

  Q_BP = T Σ_{i=1}^m ρ̂²(i) ∼ χ²(m)

• Ljung–Box Q-statistic:

  Q_LB = T(T + 2) Σ_{i=1}^m ρ̂²(i)/(T − i) ∼ χ²(m)

• These are the so-called portmanteau tests.
• The distributions above hold under H₀: Y_t is white noise.
• Stata reports Q_LB when the corrgram command is executed.
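• A minimal sketch of how these statistics can be obtained in Stata
  (assuming a tsset series named y; the lag choice m = 12 is illustrative):

  corrgram y, lags(12)   // rho-hat(k) plus the Ljung-Box Q and p-values
  wntestq y, lags(12)    // portmanteau (Ljung-Box Q) test for white noise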
Joint Tests for Autocorrelations 2/2
Lags and the Lag Operator Notation
• Recall that we call Y_{t−1} the first lag, Y_{t−2} the second lag, etc.
• The lag operator L is a useful way to manipulate lags.
• It is defined by the relation: LY_t = Y_{t−1}.
• Taking it to a power means iterated application:

  L^k Y_t = Y_{t−k}, e.g., L²Y_t = L(LY_t) = LY_{t−1} = Y_{t−2}.
Remarks 1: Lag Operator in Stata
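• The slide's details did not survive extraction; a minimal sketch of the
  lag-operator syntax (assuming the data are tsset and a variable named y
  exists):

  tsset t                     // declare the data as a time series first
  generate y_lag1 = L.y       // first lag, Y_{t-1}
  generate y_lag2 = L2.y      // second lag, Y_{t-2}
  regress y L.y               // lags can be used directly inside commands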
Wold’s Theorem
Theorem 1
Let Y_t be any zero-mean covariance stationary process. Then it can be
written as

  Y_t = B(L)ε_t = Σ_{i=0}^∞ b_i ε_{t−i},  ε_t ∼ WN(0, σ²),

where b₀ = 1 and Σ_{i=0}^∞ b_i² < ∞.
Wold’s Theorem: Innovations
• The ε_t in the Wold representation are the innovations: the 1-step-ahead
  forecast errors ε_t = Y_t − E(Y_t | Ω_{t−1}), i.e., the part of Y_t that
  is not (linearly) forecastable from its own past.
Wold Representation: Moments
• Unconditional moments (easy to derive):

  E(Y_t) = 0

  Var(Y_t) = E(Y_t²) = σ² Σ_{i=0}^∞ b_i²

• Conditional moments (assuming ε_t independent):

  E(Y_t | Ω_{t−1}) = Σ_{i=1}^∞ b_i ε_{t−i};

  Var(Y_t | Ω_{t−1}) = E([Y_t − E(Y_t | Ω_{t−1})]² | Ω_{t−1})
                     = E(ε_t² | Ω_{t−1}) = E(ε_t²) = σ²,

  where Ω_{t−1} = {ε_{t−1}, ε_{t−2}, . . .}
• Conditional mean moves over time in response to the
information set. This is particularly important for forecasting
(why?).
Modeling Cycles
Moving Average (MA) Processes
• The first-order moving average, MA(1), process is

  Y_t = ε_t + θε_{t−1} = (1 + θL)ε_t,  ε_t ∼ WN(0, σ²)
MA(1) Examples
Mean and Variance of MA(1)
• Unconditional mean:

  E(Y_t) = E(ε_t) + θE(ε_{t−1}) = 0

• Unconditional variance:

  Var(Y_t) = γ(0) = Var(ε_t) + θ²Var(ε_{t−1}) = σ²(1 + θ²)
Conditional Mean and Variance of MA(1)
• The conditional mean:

  E(Y_t | Ω_{t−1}) = E(ε_t + θε_{t−1} | Ω_{t−1})
                   = E(ε_t | Ω_{t−1}) + θE(ε_{t−1} | Ω_{t−1})
                   = θε_{t−1}

• The conditional variance:

  Var(Y_t | Ω_{t−1}) = Var(ε_t + θε_{t−1} | Ω_{t−1})
                     = Var(ε_t | Ω_{t−1}) + Var(θε_{t−1} | Ω_{t−1})
                     = σ²

• θε_{t−1} is the best forecast of Y_t under squared loss. The optimal
  forecast error is then ε_t.
• The conditional variance, the innovation variance, and the 1-step
  forecast variance are the same.
Autocovariance of MA(1)
• First autocovariance:

  γ(1) = E(Y_t Y_{t−1})
       = E((ε_t + θε_{t−1})(ε_{t−1} + θε_{t−2}))
       = E(ε_t ε_{t−1}) + θE(ε²_{t−1}) + θE(ε_t ε_{t−2}) + θ²E(ε_{t−1} ε_{t−2})
       = θσ²

• Autocovariances for k > 1:

  γ(k) = E(Y_t Y_{t−k})
       = E((ε_t + θε_{t−1})(ε_{t−k} + θε_{t−k−1}))
       = E(ε_t ε_{t−k}) + θE(ε_{t−1} ε_{t−k}) + θE(ε_t ε_{t−k−1}) + θ²E(ε_{t−1} ε_{t−k−1})
       = 0

• The autocovariance function is zero for all k > 1.
Autocorrelation of MA(1)
• First autocorrelation:

  ρ(1) = γ(1)/γ(0) = θσ² / (σ²(1 + θ²)) = θ/(1 + θ²)

• Since γ(k) = 0 for k > 1, the autocorrelation function cuts off:
  ρ(k) = 0 for all k > 1.
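• A minimal sketch that illustrates the cut-off: simulate an MA(1) with
  θ = 0.8, whose ACF should show a single spike of about
  0.8/(1 + 0.8²) ≈ 0.49 at lag 1 (names and seed are illustrative):

  clear
  set obs 500
  set seed 7
  generate t = _n
  tsset t
  generate e = rnormal()       // white noise shocks
  generate y = e + 0.8*L.e     // MA(1): Y_t = e_t + 0.8 e_{t-1}
  ac y, lags(10)               // single significant spike at lag 1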
Stationarity and Invertibility of MA(1)
• An MA(1) process is covariance stationary for any value of θ: the mean,
  variance, and autocovariances above do not depend on t.
• It is invertible, i.e., can be rewritten as an autoregression in past
  values of Y_t, if and only if |θ| < 1.
Inversion of MA(1) 1/3
Inversion of MA(1) 2/3
  (1 + θL)^{−1} Y_t = ε_t
Inversion of MA(1) 3/3
  (1 − bL)^{−1} = lim_{j→∞} (1 + bL + b²L² + b³L³ + · · · + b^j L^j),  |b| < 1

• Here we have:

  (1 + θL)^{−1} = (1 − (−θL))^{−1} = 1 − θL + θ²L² − θ³L³ + · · ·

• Applying this to Y_t gives the AR(∞) representation of the MA(1):

  ε_t = Y_t − θY_{t−1} + θ²Y_{t−2} − θ³Y_{t−3} + · · ·
Invertible MA Processes
• More generally, an MA process is invertible if it can be rewritten as an
  autoregression in past values of Y_t. For an MA(q), this requires all
  roots of Θ(L) to lie outside the unit circle.
MA(q) Processes
  Y_t = (1 + θ₁L + θ₂L² + · · · + θ_q L^q)ε_t = Θ(L)ε_t
MA(q) Process vs. Wold Representation
• Wold:

  Y_t = B(L)ε_t = Σ_{i=0}^∞ b_i ε_{t−i},  b₀ = 1,  ε_t ∼ WN(0, σ²)

• MA(q):

  Y_t = Θ(L)ε_t = Σ_{i=0}^q θ_i ε_{t−i},  θ₀ = 1,  ε_t ∼ WN(0, σ²)
Example: Quarterly Consumption Growth
MA(2) Model Estimation
• In Stata, we can use the arima command to estimate the MA(2) by
  nonlinear optimization; a sketch of the command follows.
  - Another syntax: arima consgr, ma(1/2). Note that arima consgr, ma(2)
    will suppress the first MA term.
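• A minimal sketch, assuming the quarterly consumption growth series is
  named consgr as in the slides and the data are tsset:

  arima consgr, arima(0,0,2)   // MA(2): p = 0, d = 0, q = 2
  arima consgr, ma(1/2)        // equivalent: MA terms at lags 1 and 2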
MA(2) Estimation: Nonlinear Least Squares
MA(2) Model Estimation
------------------------------------------------------------------------------
| OPG
consgr | Coefficient std. err. z P>|z| [95% conf. interval]
-------------+----------------------------------------------------------------
consgr |
_cons | 3.322972 .319276 10.41 0.000 2.697202 3.948741
-------------+----------------------------------------------------------------
ARMA |
ma |
L1. | -.0886977 .0172324 -5.15 0.000 -.1224725 -.0549229
L2. | .176492 .0345969 5.10 0.000 .1086833 .2443007
-------------+----------------------------------------------------------------
/sigma | 4.304923 .0539136 79.85 0.000 4.199254 4.410591
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.
Autoregressive (AR) Processes
  Y_t = φY_{t−1} + ε_t,  ε_t ∼ WN(0, σ²)

  (1 − φL)Y_t = ε_t
AR Process Examples
Inversion of AR(1)
  Y_t = φY_{t−1} + ε_t
      = ε_t + φ(φY_{t−2} + ε_{t−1})
      = ε_t + φε_{t−1} + φ²ε_{t−2} + · · · = Σ_{i=0}^∞ φ^i ε_{t−i}
Mean and Variance of AR(1)
• From the MA(∞) representation above (for |φ| < 1):

  E(Y_t) = Σ_{i=0}^∞ φ^i E(ε_{t−i}) = 0

  Var(Y_t) = σ² Σ_{i=0}^∞ φ^{2i} = σ²/(1 − φ²)
Alternative Variance Computation 1/2
• Apply the variance operator to both sides:

  Var(Y_t) = Var(φY_{t−1} + ε_t)

  which, since Cov(Y_{t−1}, ε_t) = 0 and Var(Y_{t−1}) = Var(Y_t) by
  stationarity, implies

  Var(Y_t) = φ²Var(Y_t) + σ²

  Var(Y_t) = σ²/(1 − φ²)

• If φ = 1, then Var(Y_t) is infinite. This is inconsistent with
  covariance stationarity.
Alternative Variance Computation 2/2
Remarks 2: Detour: Random Walk / Unit Root
• An AR(1) with φ = 1 is a random walk (a unit-root process):

  Y_t = Y_{t−1} + ε_t

• Conditional mean: E(Y_{t+h} | Ω_t) = Y_t, so the best forecast at any
  horizon is the current value.
• Conditional variance: Var(Y_{t+h} | Ω_t) = hσ², which grows without
  bound in the horizon h. A simulation is sketched below.
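• A minimal sketch (illustrative names and seed): simulating a random walk
  makes the non-stationarity visible, since the level wanders with no
  tendency to revert:

  clear
  set obs 300
  set seed 1
  generate t = _n
  tsset t
  generate y = sum(rnormal())   // running sum of WN shocks = random walk
  tsline y                      // wanders; variance grows with t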
Autocovariance of AR(1) 1/2
• Take the original equation:

  Y_t = φY_{t−1} + ε_t

• Multiply both sides by Y_{t−k} (k ≥ 1) and take expectations; since
  E(ε_t Y_{t−k}) = 0, this yields

  γ(k) = φγ(k − 1)

• This is called the Yule–Walker equation.
• We can recursively work out the autocovariances: we just need to know
  γ(0).
Autocovariance of AR(1) 2/2
  γ(1) = φγ(0) = φσ²/(1 − φ²),

  γ(2) = φγ(1) = φ²σ²/(1 − φ²),

  . . .

  γ(k) = φγ(k − 1) = φ^k σ²/(1 − φ²).

• Dividing by γ(0) gives the autocorrelations:

  ρ(k) = φ^k,  k = 0, 1, 2, . . .
Autocorrelation of AR(1)
  ρ(k) = φ^k,  k = 0, 1, 2, . . .
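• A minimal sketch (illustrative parameter and seed): simulating an AR(1)
  with φ = 0.9 shows the geometric decay ρ(k) = 0.9^k in the sample ACF:

  clear
  set obs 500
  set seed 42
  generate t = _n
  tsset t
  generate y = rnormal() in 1
  replace y = 0.9*L.y + rnormal() in 2/L   // replace runs row by row,
                                           // so L.y is already filled in
  ac y, lags(20)                           // ACF decays roughly like 0.9^k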
Example: AR(1) for Unemployment Rate
------------------------------------------------------------------------------
UNRATE | Coefficient Std. err. t P>|t| [95% conf. interval]
-------------+----------------------------------------------------------------
UNRATE |
L1. | .9694779 .0080172 120.93 0.000 .9537437 .985212
|
_cons | .1747403 .0476544 3.67 0.000 .0812158 .2682648
------------------------------------------------------------------------------
• Note estimation by OLS (more on this later) and the very high
  persistence of the unemployment rate. The command is sketched below.
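• A minimal sketch of how this regression can be reproduced, assuming the
  monthly unemployment series is tsset and named UNRATE as in the output:

  regress UNRATE L.UNRATE   // OLS of the rate on its first lag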
Example: AR(1) for Unemployment Rate
AR(p) Processes
• The AR(p) process generalizes AR(1) to p lags:

  Y_t = φ₁Y_{t−1} + φ₂Y_{t−2} + · · · + φ_p Y_{t−p} + ε_t,  ε_t ∼ WN(0, σ²)

  or, in lag-operator notation, Φ(L)Y_t = ε_t with
  Φ(L) = 1 − φ₁L − · · · − φ_p L^p.
AR(1)/(p) Processes vs. Wold Representation
• Wold:

  Y_t = B(L)ε_t = Σ_{i=0}^∞ b_i ε_{t−i},  b₀ = 1,  ε_t ∼ WN(0, σ²)

• AR(1):

  Y_t = (1 − φL)^{−1} ε_t = Σ_{i=0}^∞ φ^i ε_{t−i},  ε_t ∼ WN(0, σ²)

• Need |φ| < 1 for stationarity and |θ| < 1 for invertibility.
ARMA(p,q) Processes
• ARMA(p,q) generalization:

  Φ(L)Y_t = Θ(L)ε_t

  which implies

  Y_t = [Θ(L)/Φ(L)] ε_t

• Need all roots of the AR polynomial outside the unit circle for
  stationarity, and all roots of the MA polynomial outside the unit
  circle for invertibility. An estimation sketch follows.
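• A minimal sketch of fitting an ARMA(1,1) in Stata (the series name y is
  illustrative; data must be tsset):

  arima y, arima(1,0,1)   // AR(1) and MA(1) terms, no differencing
  estat ic                // AIC/BIC for comparing candidate (p,q) choices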
Summary: ARMA Models
Summary
Before you leave...
Appendix
Useful Stata Commands
• Lags (data must be tsset):
  L.var: first lag
  Lm.var: m-th lag
  L(1/m).var: lags 1 through m
• Plot autocorrelations and confidence bands for k lags (if we do not
  specify, the default is 40):

  ac varname, lags(k)