Fundamentals of Econometrics

Previous Two Weeks


Week-13


Previous Lecture
▪ Revision and Link of Basic Concepts
✓ Covariance & Correlation and their measurements
✓ Autocovariance & Autocorrelation and their measurements
▪ Causes of Autocorrelation

Background


Background
▪ In statistics, univariate summaries refer to the mean, median, mode, standard deviation, mean deviation, variance, etc.
▪ Bivariate summaries are required when there are two paired observations (e.g., vehicle prices and their mileage per gallon) and interest goes beyond "univariate summaries"
▪ …such as how the two variables are linked, i.e., if one variable increases or decreases, what happens to the other variable?
▪ A statistic that indicates how the two variables "co-vary" is called "covariance" and is given as:
$\mathrm{Cov}_{X,Y} = \dfrac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})$

Background (cont.…)
▪ E.g., if vehicle prices (in thousands of US$) and mileage (miles per gallon) are used and we calculate a covariance equal to −20 thousand dollar-miles per gallon
▪ …it indicates that the two variables are negatively linked
▪ …however, it is difficult to interpret −20 thousand dollar-miles per gallon (a quantitative interpretation difficulty)
▪ …furthermore, a change in scale does influence the value of the covariance
▪ These problems of covariance are solved by making covariance a unitless measure.


Background (cont.…)
▪ As the formula of covariance shows, it is calculated from the products of the mean deviations of the two variables
▪ To make it unitless, covariance is divided by the product of the standard deviations (S_X, S_Y) of the same two variables; the units of numerator and denominator cancel out, and we get a dimensionless number/ratio
▪ …that is called the correlation coefficient, represented by "r" and given as:

$r_{X,Y} = \dfrac{\mathrm{Cov}_{X,Y}}{S_X S_Y} = \dfrac{\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})}{\sqrt{\sum(X_i-\bar{X})^2 \sum(Y_i-\bar{Y})^2}}$

✓ The correlation coefficient ranges between −1 and +1 regardless of the unit of measurement
✓ When the "r" value is +1, it is called "perfect positive correlation"
✓ When the "r" value is −1, it is called "perfect negative correlation"
✓ When the "r" value is 0, it is a case of no correlation
✓ Remember, correlation only measures the linear co-movement between two variables; it does not show causation between the variables
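To make the two formulas above concrete, here is a minimal Python sketch (the price/mileage numbers are invented for illustration; they are not the lecture's data):

    import numpy as np

    # Hypothetical paired data: vehicle prices (in $000s) and mileage (mpg).
    price = np.array([30.0, 25.0, 40.0, 22.0, 35.0, 28.0])
    mpg = np.array([20.0, 28.0, 15.0, 32.0, 18.0, 25.0])

    n = len(price)
    # Sample covariance: sum of products of mean deviations, divided by n - 1.
    cov = np.sum((price - price.mean()) * (mpg - mpg.mean())) / (n - 1)
    # Correlation: covariance divided by the product of the standard deviations.
    r = cov / (price.std(ddof=1) * mpg.std(ddof=1))

    print(cov)  # negative: prices and mileage move in opposite directions
    print(r)    # unitless, always between -1 and +1
    print(np.corrcoef(price, mpg)[0, 1])  # cross-check against numpy's built-in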

Autocovariance and Autocorrelation


▪ Autocovariance/autocorrelation is the same as covariance/correlation, except that instead of two different variables it measures the co-movement within a single time series
▪ We will focus on autocorrelation, as it is the better measure compared to autocovariance, just as correlation is compared to covariance
▪ Autocorrelation is the correlation between a variable lagged one or more time periods and itself
▪ E.g., consider the relationship of a time series Yt (observations at time t) with the series Yt−1 (observations lagged one period); this is called autocorrelation of order 1.
▪ Similarly, it could be Yt and Yt−2, or Yt and Yt−3, and so on, i.e., autocorrelation of order 2, order 3, or higher


Measurement of Autocorrelation
▪ For a time series (Yt), autocorrelation can be calculated as follows:

$\rho_k = \dfrac{\sum_{t=k+1}^{n}(Y_t-\bar{Y})(Y_{t-k}-\bar{Y})}{\sum_{t=1}^{n}(Y_t-\bar{Y})^2}$
▪ Like the correlation coefficient, the ρk value ranges between −1 and +1, for perfect negative and perfect positive autocorrelation/serial correlation*.
▪ Positive (negative) autocorrelation is the case when the errors/series values in one time period are positively (negatively) correlated with the same errors/series values in another time period.
▪ Time series data patterns (randomness, seasonality, trends, etc.) can be studied using autocorrelation coefficients
▪ The patterns can be identified using the autocorrelation coefficient at different time lags
*Although there is a slight difference between autocorrelation and serial correlation, for this course we will treat them as the same and use the terms for correlation between successive values of the error terms.

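A minimal Python sketch of the ρk formula above (the upward-trending series is invented, chosen so that positive autocorrelation is visible):

    import numpy as np

    def autocorr(y, k):
        # rho_k: sum over t = k+1..n of (Y_t - Ybar)(Y_{t-k} - Ybar),
        # divided by the total sum of squares of Y around its mean.
        y = np.asarray(y, dtype=float)
        ybar = y.mean()
        num = np.sum((y[k:] - ybar) * (y[:-k] - ybar))
        den = np.sum((y - ybar) ** 2)
        return num / den

    y = [2.0, 2.5, 3.1, 3.0, 3.8, 4.2, 4.1, 4.9, 5.3, 5.8]
    print(autocorr(y, 1))  # close to +1: strong positive lag-1 autocorrelation
    print(autocorr(y, 2))  # order-2 autocorrelation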

Lag-Values of a Variable

• You can calculate lag values manually, as shown
• In software such as EViews, you have an option to generate lag values directly
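Outside EViews, lag values are just as easy to generate; a pandas sketch (the series values are made up):

    import pandas as pd

    df = pd.DataFrame({"y": [10.0, 12.0, 11.5, 13.0, 14.2]})
    df["y_lag1"] = df["y"].shift(1)  # Y(t-1); the first value becomes NaN
    df["y_lag2"] = df["y"].shift(2)  # Y(t-2)
    print(df)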

Announcements
▪ Your Project is due at 17:00 hours on Wednesday, December 28, 2022

▪ Quiz 3 WILL be on Friday December 16, 2022, during class.


▪ Course: EViews till Heteroscedasticity


CHAPTER 6
Regression Diagnostic III: Autocorrelation


Classical Linear Regression Model: Assumptions (Old Slide)
▪ A-1: Model is linear in the parameters.
▪ A-2: Regressors (Xs) are fixed or non-stochastic.
▪ A-3: Given X, the expected value of the error term is zero, or E(ui|X) = 0.
▪ A-4: Homoscedastic, or constant, variance of ui, or var(ui|X) = σ².
▪ A-5: No autocorrelation, or cov(ui, uj|X) = 0, i ≠ j.
▪ A-6: No multicollinearity, or no perfect linear relationships among
the X variables.
▪ A-7: No specification bias.


Autocorrelation and Classical Linear Regression Model (CLRM)
▪ One of the assumptions of the classical linear regression model (CLRM) is that the covariance between ui, the error term for observation i, and uj, the error term for observation j, is zero, i.e., cov(ui, uj|X) = 0, i ≠ j

▪ If this assumption of CLRM/OLS is violated, we call it an issue of autocorrelation


Causes of Autocorrelation


Causes of Autocorrelation (Summary)

1. Inertia
2. Specification Bias (Excluded variable/incorrect functional form)
3. Cobweb Phenomenon
4. Lags
5. Manipulation of Data
6. Nonstationarity


Causes of Autocorrelation: Inertia


▪ A salient feature of most economic time series is inertia, or sluggishness.
▪ E.g., time series such as GNP, price indexes, production, employment, and unemployment exhibit (business) cycles.
▪ …when the economy starts recovering from a recession
▪ …most of these series start moving upward
▪ In this upswing, the value of a series at one point in time is greater than its previous value.
▪ Thus there is a "momentum" built into them, and it continues until something happens (e.g., an increase in interest rates or taxes, or both) to slow them down.
▪ Therefore, in regressions involving time series data, successive observations are likely to be interdependent


Causes of Autocorrelation: Specification Bias

Excluded Variable Case
▪ True model: $Y_t = B_1 + B_2 X_{2t} + B_3 X_{3t} + B_4 X_{4t} + u_t$
▪ Estimated model: $Y_t = B_1 + B_2 X_{2t} + B_3 X_{3t} + u_t$
▪ Estimating the second equation implies that the omitted term X4t is absorbed into the error term, i.e., $u_t = B_4 X_{4t} + v_t$
…and thus will result in autocorrelation

Incorrect Functional Form
▪ True model: $Y_t = B_1 + B_2 X_{2t} + B_3 X_{2t}^2 + u_t$
▪ Estimated model: $Y_t = B_1 + B_2 X_{2t} + u_t$
▪ Estimating the second equation implies that the omitted quadratic term is absorbed into the error term, i.e., $u_t = B_3 X_{2t}^2 + v_t$
…and thus will result in autocorrelation


Causes of Autocorrelation: Cobweb Phenomenon

[Figure: cobweb diagram with supply curve S, demand curve D, prices P1, P2, P3 and quantities Q1, Q2, Q3 spiraling toward the equilibrium.]

• Time 0: in a market in disequilibrium, output Q1 is demanded at the high price P1
• Time period 1: farmers expect higher prices, so they increase output to Q2; but at this level of output, the price falls to P2 when they try to sell their product, due to excess supply
• As this process repeats itself, i.e., between periods of low supply with high prices and then high supply with low prices, price and quantity trace out a spiral
• The economy converges to the equilibrium where supply and demand intersect
• There are divergent models too; however, these models themselves are not the focus of today's lecture or this course

Causes of Autocorrelation: Cobweb Phenomenon

▪ Consider a farmer: at the beginning of this year's planting of crops, farmers are influenced by the price prevailing last year (this phenomenon is called the cobweb phenomenon)
▪ In agricultural markets, supply reacts to price with a lag of one time period, because supply decisions take time to implement
▪ This is also one of the reasons for autocorrelation

[Figure: cobweb diagram around the equilibrium price Pe. Suppose there were high prices due to low supply: 1. low supply causes a rise in price; 2. the rise in price causes high supply; 3. high supply causes a fall in prices; 4. low prices cause a fall in supply.]


Causes of Autocorrelation: Lags


➢ Consider the following model
➢ $Consumption_t = B_1 + B_2\,Consumption_{t-1} + u_t$
➢ The above equation is known as an autoregression, because one of the explanatory variables is the lagged value of the dependent variable.
➢ If you neglect the lagged term, the resulting error term will reflect a systematic pattern due to the influence of lagged consumption on current consumption.


Causes of Autocorrelation: Manipulation of Data


▪ Different kinds of data manipulation may cause autocorrelation, for example:
✓ In empirical analysis, the raw data are often manipulated, e.g., converting monthly to quarterly, or quarterly to yearly, data
✓ Another source of manipulation is interpolation or extrapolation of data.


Causes of Autocorrelation: Nonstationarity


▪ A time series is stationary if its characteristics (e.g. mean, variance
and covariance) are time invariant; that is, they do not change over
time.
▪ When dealing with time series data, we should check whether the
given time series is stationary.
▪ A nonstationary time series may have the problem of autocorrelation


Consequences of Autocorrelation



Consequences
If autocorrelation exists, several consequences follow:
✓ The OLS estimators are still unbiased and consistent.
✓ They are still normally distributed in large samples.
✓ They are no longer efficient, meaning that they are no longer BLUE.
✓ In most cases the standard errors are underestimated.
✓ Thus, the hypothesis-testing procedure becomes suspect, since the estimated standard errors may not be reliable, even asymptotically (i.e., in large samples).
✓ You may end up with a spurious regression if the dependent and independent variables both grow with time


Spurious Regression
▪ If the dependent and independent variables move together over time, you may end up with a spurious regression
▪ The major problem is that strong autocorrelation may make two unrelated variables appear strongly correlated
▪ As a result, we can get an excellent fit of the model although practically there is no link between the two variables
▪ Such regressions are called spurious regressions
▪ The next few slides include some interesting spurious correlations; if you run a regression on them, it will give you an excellent fit, yet the variables are not related at all

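The point is easy to reproduce by simulation. The sketch below (not from the lecture) regresses one random walk on another, independently generated one; OLS reports an apparently strong relationship even though the two series are unrelated by construction:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(42)
    n = 200
    y = np.cumsum(rng.normal(size=n))  # random walk 1
    x = np.cumsum(rng.normal(size=n))  # random walk 2, independent of y

    res = sm.OLS(y, sm.add_constant(x)).fit()
    print(res.rsquared)              # typically "high" despite no true link
    print(res.tvalues)               # deceptively "significant" t-statistics
    print(durbin_watson(res.resid))  # far below 2: strong positive autocorrelation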

Spurious Regression: Examples

[Charts of spurious correlations shown in the original slides; images not reproduced in this text version.]


We are going to run a regression model before proceeding further, to serve as a case study

Practical Tasks 13.1

Autocorrelation using EViews
▪ Download the data file Table_ Autocorrelation from your LMS
▪ The data cover the USA for the period 1947–2000 on the following variables:
▪ Consum: real consumption expenditure (dependent)
▪ DPI: real disposable personal income
▪ W: real wealth
▪ R: real interest rate
"Real" means all variables are adjusted for price changes/inflation.
Run the following regression model:
▪ $\ln C_t = B_1 + B_2 \ln DPI_t + B_3 \ln W_t + B_4 R_t + u_t$
▪ Save the results for discussion
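For readers without EViews, here is a statsmodels sketch of the same regression; the CSV file name and the column names (CONSUM, DPI, W, R) are assumptions, since the course data ships as an EViews/LMS table:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Assumed CSV export of the LMS table, with columns CONSUM, DPI, W, R.
    df = pd.read_csv("table_autocorrelation.csv")

    # ln C_t = B1 + B2 ln DPI_t + B3 ln W_t + B4 R_t + u_t
    res = smf.ols("np.log(CONSUM) ~ np.log(DPI) + np.log(W) + R", data=df).fit()
    print(res.summary())  # the summary also reports the Durbin-Watson statistic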

Practical Tasks 13.1

Autocorrelation using EViews: Results
▪ The coefficients have the expected signs.
▪ Assuming that all assumptions of OLS hold, it is an excellent fit, with "highly" statistically significant coefficients, a high R², and a significant F-statistic
▪ Interpretation (?)
▪ But it is… TIME SERIES data
▪ …CHECK for autocorrelation!!!!
▪ …in the presence of autocorrelation, the estimated standard errors, and subsequently the estimated t-values, cannot be trusted
▪ So we will check for the presence of autocorrelation before trusting our results

Detection of Autocorrelation



Detection of Autocorrelation & Application in EViews
1. Graphical method and its application
2. Runs test
3. Durbin-Watson test and its application
4. Breusch-Godfrey (BG) test and its application

▪ The graphical method is an ad hoc method
▪ The runs test is not a standard test and is not included in many software packages
▪ The last two are proper tests
▪ We are going to discuss all of these, and apply the last two in EViews as well


Graphical method
➢ Run a regression model and…
1. Plot the values of the residuals, ut, chronologically
2. A plot of the residuals (ut) in period t against their lagged values (ut−1) can also be helpful

▪ If either or both of graphs (1) and (2) show some pattern, it is an indication of autocorrelation (both plots are sketched in code below)
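Both plots take a few lines of matplotlib once the model is fitted; a sketch reusing the fitted res object from the earlier statsmodels regression sketch:

    import matplotlib.pyplot as plt

    resid = res.resid.reset_index(drop=True)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(resid)                     # (1) residuals in time order
    ax1.set_title("Residuals over time")
    ax2.scatter(resid.shift(1), resid)  # (2) residuals against their first lag
    ax2.set_title("u_t vs. u_{t-1}")
    plt.show()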

Graphical method: Application


Practical Tasks 13.2

Graphical Approach to Detect Autocorrelation
▪ Use the data from Table_ Autocorrelation from your LMS
Run the following regression model:
▪ $\ln C_t = B_1 + B_2 \ln DPI_t + B_3 \ln W_t + B_4 R_t + u_t$
▪ Then, to check autocorrelation using the graphical method, we have to generate the following two graphs:
1. Residuals over time
2. Residuals in time period t against residuals in time period t−1

The procedures for both graphs are shown on the next slide.


Practical Tasks 13.2

Graphical Approach to Detect Autocorrelation

Graph 1: Residuals over time
▪ Run the model: $\ln C_t = B_1 + B_2 \ln DPI_t + B_3 \ln W_t + B_4 R_t + u_t$
1. Generate the residual series and save it as Residual
2. For the residual plot, plot Residual (open the series Residual, click View > Graph, and press OK with all default options)
3. Freeze the graph for discussion

Graph 2: Residuals in time period t against residuals in time period t−1
▪ Run the same model
1. Generate the residual series and save it as Residual
2. Create Residual_Lag1 via Object > Generate and enter the following equation: Residual_Lag1 = Residual(-1)
3. Open Residual and Residual_Lag1 as a group
4. View > Graph, select "Scatter" in the specific box; also, under the details, in the fit lines box select Regression Line
5. Freeze the graph for discussion

Practical Tasks 13.2

Graphical Approach to Detect Autocorrelation
[Figures: residuals over time; residuals in time period t against residuals in time period t−1.]


Practical Tasks 13.2

Graphical Approach to Detect Autocorrelation

▪ The residual plot over time shows a pattern that may suggest correlation among the error terms over time
▪ However, the plot of residuals at time t (ut) against the residuals at lag 1 (ut−1), together with the fitted regression line, clearly suggests that the residuals are positively correlated over time

The Runs Test: Theory

▪ The runs test is based on the sequence of the estimated (forecasted) error terms
▪ In the absence of autocorrelation, the error terms follow a random sequence
▪ A long positive sequence followed by negative sequences, or vice versa, can be an indication of autocorrelation
▪ The runs test is based on continuous "sequences" of the same sign, called "runs"
▪ Consider 40 observations for which we run a model and forecast the error terms. Indicating the residuals with signs (+ or −): (−−−−−−−−−)(+++++++++++++++++++++)(−−−−−−−−−−)
▪ There are 3 runs: a run of 9 negative, a run of 21 positive, and again a run of 10 negative error terms
▪ The relevant question: are these 3 runs too many (or too few) for these 40 observations of the data, if we expect the sequence to be random?
▪ Too many runs indicate that the residuals change sign frequently, thus indicating negative serial correlation, whereas too few runs may suggest positive autocorrelation
▪ A 95% confidence interval can be constructed for it as follows:

$\Pr\big[E(R) - 1.96\,\sigma_R \le R \le E(R) + 1.96\,\sigma_R\big] = 0.95$, where $E(R) = \dfrac{2N_1 N_2}{N} + 1$ and $\sigma_R^2 = \dfrac{2N_1 N_2 (2N_1 N_2 - N)}{N^2 (N-1)}$

▪ Here N is the total number of observations, N1 the number of positive residuals, N2 the number of negative residuals, and R the number of runs
▪ If the R value is within this interval, we trust that the error terms are random (i.e., no problem of autocorrelation); a numeric sketch follows below
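A Python sketch of the runs test, checked against the slide's own 40-observation example (9 negatives, 21 positives, 10 negatives):

    import numpy as np

    def runs_test(resid):
        signs = np.sign(resid)
        signs = signs[signs != 0]                # drop exact zeros
        R = 1 + np.sum(signs[1:] != signs[:-1])  # a new run starts at each sign change
        n1, n2 = np.sum(signs > 0), np.sum(signs < 0)
        n = n1 + n2
        e_r = 2 * n1 * n2 / n + 1
        var_r = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))
        half = 1.96 * np.sqrt(var_r)
        return R, (e_r - half, e_r + half)

    resid = np.concatenate([-np.ones(9), np.ones(21), -np.ones(10)])
    R, interval = runs_test(resid)
    print(R, interval)  # R = 3 lies below the 95% interval: too few runs,
                        # i.e., evidence of positive autocorrelation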

Durbin-Watson d Test: Theory


▪ Consider a model: $Y_t = B_1 + B_2 X_t + u_t$, where ut is the error term at time t,

with $u_t = \rho u_{t-1} + v_t$ (what does this mean?)

▪ The parameter rho (ρ) measures the autocorrelation between the adjacent error terms

▪ The second equation shows that the error term in one period is directly affected by the previous period's error; we call this serial/autocorrelation of order 1, or an autoregressive scheme of order 1, represented as AR(1)


Durbin-Watson d Test: Theory


▪ Consider $u_t = \rho u_{t-1} + v_t$ (autocorrelation of order 1)
▪ In the Durbin-Watson d test we check the following hypotheses:
▪ H0: ρ = 0 (no autocorrelation/serial correlation)
against
▪ H1: ρ ≠ 0 (there is autocorrelation/serial correlation)

▪ If the data (regression line) have the problem of autocorrelation, the residuals will reflect that pattern, and we can check for it using the d statistic as follows:


Durbin-Watson d Test: Theory


▪ The DW d statistic is given as:

$D = \dfrac{\sum_{t=2}^{n}(e_t - e_{t-1})^2}{\sum_{t=1}^{n} e_t^2}$

▪ where et are the forecasted/predicted values of ut, as the ut are not observable
▪ With a little mathematical work (that we do not need), it can be shown that the DW statistic and the autocorrelation coefficient (ρ) are related by the following formula: DW = 2(1 − ρ)
▪ But the range of ρ is −1 < ρ < +1

▪ Thus, the range of DW is: 0 < DW < 4

✓ When ρ = 0 → DW = 2(1 − ρ) = 2. So DW = 2 (or close to 2) means "zero" autocorrelation
✓ When ρ = +1 → DW = 2(1 − ρ) = 0. So DW = 0 (or close to 0) means positive autocorrelation
✓ When ρ = −1 → DW = 2(1 − ρ) = 4. So DW = 4 (or close to 4) means negative autocorrelation

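The d statistic itself is a one-liner. A sketch with simulated AR(1) residuals (ρ = 0.8 is an arbitrary choice) that confirms DW ≈ 2(1 − ρ):

    import numpy as np

    def durbin_watson_d(e):
        # d = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2
        e = np.asarray(e, dtype=float)
        return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

    rng = np.random.default_rng(0)
    e = np.zeros(500)
    for t in range(1, 500):
        e[t] = 0.8 * e[t - 1] + rng.normal()  # AR(1) errors with rho = 0.8

    print(durbin_watson_d(e))  # roughly 2 * (1 - 0.8) = 0.4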

Durbin-Watson d Test: Critical Regions

[Diagram: the d statistic on a line from 0 to 4. From 0 to dL: reject H0, evidence of positive autocorrelation. From dL to dU: zone of indecision. From dU to 4 − dU: do not reject H0 or H0*, or both. From 4 − dU to 4 − dL: zone of indecision. From 4 − dL to 4: reject H0*, evidence of negative autocorrelation. Legend — H0: no positive autocorrelation; H0*: no negative autocorrelation.]

➢ There are separate values of dL and dU, the lower and upper bounds, used as thresholds to reject/accept H0
➢ Durbin and Watson prepared tables that give the lower and upper limits of the d statistic for a selected number of observations (up to 200) and number of regressors (up to 10), for the 5% and 1% levels of significance.


Durbin-Watson (D) Test: Tabulated Values


Durbin-Watson (D) Test: Assumptions

The assumptions of the DW test are:
1. The regression model includes an intercept term.
2. The regressors are fixed in repeated sampling.
3. The error term follows the first-order autoregressive, AR(1), scheme:
$u_t = \rho u_{t-1} + v_t$ (ρ (rho) is the coefficient of autocorrelation)
4. The error term is normally distributed.
5. The regressors do not include the lagged value(s) of the dependent variable, Yt.


Durbin-Watson d Test: Procedure

Procedure
1. Run the OLS regression and obtain the residuals.
2. Compute d from the formula.
3. For the given sample size and given number of explanatory variables, find the critical dL and dU values.
4. Now follow the decision rules.

Rule of thumb (decision rules)
✓ For d = 2 (or close to it), there is no evidence of first-order (positive or negative) autocorrelation.
✓ For d = 0 (or close to it), there is evidence of positive autocorrelation.
✓ For d = 4 (or close to it), there is evidence of negative autocorrelation.


Durbin-Watson (D) Test: Decision Rules

▪ The decision rules, as per the graph (a helper function implementing them is sketched below):
✓ 1. If d < dL, there probably is evidence of positive autocorrelation.
✓ 2. If d > dU, there probably is no evidence of positive autocorrelation.
✓ 3. If dL < d < dU, no definite conclusion about positive autocorrelation.
✓ 4. If dU < d < 4 − dU, there probably is no evidence of positive or negative autocorrelation.
✓ 5. If 4 − dU < d < 4 − dL, no definite conclusion about negative autocorrelation.
✓ 6. If 4 − dL < d < 4, there probably is evidence of negative autocorrelation.
▪ The d value always lies between 0 and 4
▪ The closer it is to zero, the greater the evidence of positive autocorrelation; the closer it is to 4, the greater the evidence of negative autocorrelation. If d is about 2, there is no evidence of positive or negative first-order autocorrelation.

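The six rules translate directly into a small helper function; a sketch (the critical values passed in are the ones quoted in Practical Task 13.3 below):

    def dw_decision(d, dL, dU):
        # Maps a Durbin-Watson d value onto the six decision rules above.
        if d < dL:
            return "evidence of positive autocorrelation"
        if dL <= d <= dU or (4 - dU) <= d <= (4 - dL):
            return "zone of indecision"
        if d > 4 - dL:
            return "evidence of negative autocorrelation"
        return "no evidence of positive or negative autocorrelation"

    print(dw_decision(1.28, dL=1.452, dU=1.681))  # -> positive autocorrelation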

Durbin-Watson d Test: Application


Practical Tasks 13.3

Durbin-Watson d Test
▪ The theory of the DW d statistic may sound complex and lengthy
▪ …HOWEVER, its calculation in the EViews software is straightforward and easy
▪ When you run a model, the DW statistic is provided as standard output along with the regression results in EViews, as shown for our current model in the table
▪ In the current case, the critical values for the d statistic are dL = 1.452 and dU = 1.681 at 5%
▪ As 1.28 is lower than dL, it indicates positive autocorrelation.


Breusch-Godfrey (BG) LM Test

This test allows for:
1. Lagged values of the dependent variable to be included as regressors
2. Higher-order autoregressive schemes, such as AR(2), AR(3), etc.
3. Moving-average terms of the error term, such as ut−1, ut−2, etc.
▪ In this test, the error term of the main equation follows the AR(p) autoregressive structure:
$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \dots + \rho_p u_{t-p} + v_t$
▪ The null hypothesis of no autocorrelation is:
$H_0:\ \rho_1 = \rho_2 = \dots = \rho_p = 0$
…i.e., there is no autocorrelation of any order


Breusch-Godfrey (BG) LM Test (cont.)

The BG test involves the following steps (a statsmodels sketch follows below):
1. Run the regression model and estimate the residuals (et).
2. Regress et on all regressors (explanatory variables) of the model, including the p autoregressive terms. Obtain the R² value from this regression.
3. If the sample size is large, Breusch and Godfrey have shown that (n − p)R² ~ χ²p
✓ That is, in large samples, (n − p) times R² follows the chi-square distribution with p degrees of freedom (p is the number of autoregressive terms in the model of step 2).
4. Rejection of the null hypothesis implies evidence of autocorrelation.
5. An alternative way is to use an F-test based on the restricted model (the original regression of step 1) and the unrestricted model (the regression run in step 2), with (p, n − k − p) degrees of freedom.
6. EViews does provide both the χ² and F-test values.
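statsmodels ships this procedure as a built-in; a sketch applying it to the fitted res object from the earlier consumption-regression sketch (nlags is p, the AR order under test):

    from statsmodels.stats.diagnostic import acorr_breusch_godfrey

    lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(res, nlags=1)
    print(lm_stat, lm_pvalue)  # the chi-square (LM) version
    print(f_stat, f_pvalue)    # the F version; small p-values reject H0: no autocorrelation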

Breusch-Godfrey (BG) LM Test (cont.)

Two important considerations for applying the BG test:
1. The variance of the errors ut and their lagged values (ut−1, ut−2, etc.) must be homoscedastic; if that is not the case, we will have to use a heteroscedasticity-corrected variance, such as White's robust standard errors.
2. A practical problem in the application of the BG test is the choice of the number of lagged error terms (p):
✓ The value of p may depend on the type of time series.
✓ For monthly data, we may include 11 lagged error terms.
✓ For quarterly data, we may include three lagged error terms.
✓ For annual data, we may include one lagged error term.
✓ In general, we can choose the optimum lag length by trial and error, keeping the lags statistically significant in the regression model while minimizing the AIC/SIC criteria.


F-Statistic for Two Models

▪ The F-statistic for the two models (restricted and unrestricted) is given as:

$F = \dfrac{(RSS_R - RSS_{UR})/p}{RSS_{UR}/(n-k-p)} \sim F_{p,\,(n-k-p)}$

▪ The restricted model is the "original model"
▪ The unrestricted model is the one in which the errors are regressed on the regressors plus the lagged error terms
▪ k refers to the number of estimated parameters in the restricted model


Breusch-Godfrey (BG) LM Test: Application


Practical Tasks 13.4

Breusch-Godfrey (BG) LM Test
▪ Use the data from Table_ Autocorrelation from your LMS
Run the following regression model:
▪ $\ln C_t = B_1 + B_2 \ln DPI_t + B_3 \ln W_t + B_4 R_t + u_t$
Then, to check autocorrelation using the Breusch-Godfrey LM test, you may proceed as follows:
1. In the results window: View > Residual Diagnostics > Serial Correlation LM Test
2. "Select Lag" … 1 here (why?)
3. Press OK

The results are shown on the next slide.


Practical Tasks 13.4

Breusch-Godfrey (BG) LM Test
▪ Both the chi-square and F-statistics reject H0, suggesting the presence of autocorrelation
▪ The statistical significance of the first lag indicates that the residuals are correlated with the first lag only
▪ You may try using two or three lags and observe how it influences the results
▪ You decide the best among these based on the statistical significance of each lag value and the lowest AIC/SIC/HQC criteria


Remedial Measures for Autocorrelation


35
12/17/2022

Thank You

