
Econometric Analysis MT Official Problem Set Solution 2

This document summarizes key points from 3 questions in a regression analysis problem set: 1) It shows that the sample variance estimator for the error term in a linear regression model is asymptotically unbiased using properties of projection matrices. 2) It proves the expectation of an indicator variable equals the probability of the event it indicates. 3) It explains that ordinary least squares (OLS) estimation for simultaneous equations models is subject to simultaneity bias, making the OLS estimator inconsistent. Good for learning bounding techniques.


EC484 CLASSES: WEEK 3

KAMILA NOWAKOWICZ

Question 5, Problem Set 1

This question gives you more practice with regression analysis in matrix form. It uses the
common trick with the trace operator.
The conditions in this question are the following: $y = Z\beta + u$, where $u = (u_1, \dots, u_n)'$ and the $u_i$ are iid $(0, \sigma^2)$, so that $u \sim (0, \sigma^2 I)$. The regressors $Z$ are deterministic and $\hat{M} = \frac{Z'Z}{n} \to M > 0$.

We want to show that $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - z_i'\hat\beta)^2$ is asymptotically unbiased.

$\hat\theta_n$ is asymptotically unbiased for $\theta$ if

$$\lim_{n \to \infty} E(\hat\theta_n) = \theta$$

Notice that showing asymptotic unbiasedness does not involve any probabilistic argument.

Solution.
Step 1: Write $\hat\sigma^2$ in matrix form.
Define $\hat u_i = y_i - z_i'\hat\beta$ and $\hat u = y - Z\hat\beta = (\hat u_1, \dots, \hat u_n)'$. Then

$$\hat u = y - Z(Z'Z)^{-1}Z'y = \underbrace{(I - Z(Z'Z)^{-1}Z')}_{\equiv M_Z}\, y = M_Z(Z\beta + u) = M_Z u$$

Matrix $M_Z$ is known as the “residual maker”; it is a projection matrix projecting onto the space orthogonal to the columns of $Z$. It has a number of useful properties, including:
• $M_Z$ is symmetric ($M_Z' = M_Z$):

$$M_Z' = (I - Z(Z'Z)^{-1}Z')' = I' - Z((Z'Z)')^{-1}Z' = I - Z(Z'Z)^{-1}Z' = M_Z$$

[Footnote: These solutions are adapted from solutions by Chen Qiu, which were based on Prof Hidalgo's notes and solutions for EC484. Their aim is to fill some gaps between notes and exercises. Prof Hidalgo's notes should always be the final reference. If you spot any mistakes in this file please contact me: K.Nowakowicz@lse.ac.uk.]
• $M_Z$ is idempotent ($M_Z M_Z = M_Z$):

$$M_Z M_Z = (I - Z(Z'Z)^{-1}Z')(I - Z(Z'Z)^{-1}Z') = I - 2Z(Z'Z)^{-1}Z' + Z(Z'Z)^{-1}\underbrace{Z'Z(Z'Z)^{-1}}_{=I}Z' = I - Z(Z'Z)^{-1}Z' = M_Z$$

• $M_Z$ is orthogonal to $Z$ ($M_Z Z = 0$):

$$M_Z Z = (I - Z(Z'Z)^{-1}Z')Z = Z - Z(Z'Z)^{-1}Z'Z = Z - Z = 0$$

We used the last property to show MZ y = MZ u above. We can use the other two
properties to show:

$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - z_i'\hat\beta)^2 = \frac{1}{n}\sum_{i=1}^{n}\hat u_i^2 = \frac{1}{n}\hat u'\hat u = \frac{1}{n}(M_Z u)'(M_Z u) = \frac{1}{n}u'M_Z'M_Z u = \frac{1}{n}u'M_Z u$$

Notice that for $\hat\beta$ (and hence $\hat\sigma^2$) to exist, $(Z'Z)^{-1}$ must be well-defined for every $n$ (a sufficient condition is that $Z'Z$ is positive definite).
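
As a quick numerical sanity check (an addition to these notes, not part of the official solution; the regressor values below are arbitrary), the three properties of $M_Z$ and the identity $M_Z y = M_Z u$ are easy to verify in Python:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 200, 3
    Z = rng.normal(size=(n, k))                        # stand-in for deterministic regressors
    M = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)  # residual maker M_Z

    print(np.allclose(M, M.T))    # symmetric
    print(np.allclose(M @ M, M))  # idempotent
    print(np.allclose(M @ Z, 0))  # orthogonal to Z

    beta = np.array([1.0, -2.0, 0.5])
    u = rng.normal(size=n)
    y = Z @ beta + u
    print(np.allclose(M @ y, M @ u))  # M_Z y = M_Z u

All four checks print True.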

Step 2: Calculate $E(\hat\sigma^2)$.

$$
\begin{aligned}
E(\hat\sigma^2) &= E\left(\frac{1}{n}u'M_Z u\right) \\
&= \frac{1}{n}E\left(\operatorname{tr}(u'M_Z u)\right) && u'M_Z u \text{ is scalar} \\
&= \frac{1}{n}E\left(\operatorname{tr}(M_Z uu')\right) && \operatorname{tr}(AB) = \operatorname{tr}(BA) \\
&= \frac{1}{n}\operatorname{tr}\left(E(M_Z uu')\right) && \text{linearity of trace and expectation} \\
&= \frac{1}{n}\operatorname{tr}\left(M_Z E(uu')\right) && M_Z \text{ is deterministic} \\
&= \frac{\sigma^2}{n}\operatorname{tr}(M_Z) && E(uu') = \sigma^2 I \\
&= \frac{\sigma^2}{n}\operatorname{tr}\left(I - Z(Z'Z)^{-1}Z'\right) && \text{definition of } M_Z \\
&= \frac{\sigma^2}{n}\left[\operatorname{tr}(I) - \operatorname{tr}\left(Z(Z'Z)^{-1}Z'\right)\right] && \operatorname{tr}(A+B) = \operatorname{tr}(A) + \operatorname{tr}(B) \\
&= \frac{\sigma^2}{n}\left[n - \operatorname{tr}\left((Z'Z)^{-1}Z'Z\right)\right] && \operatorname{tr}(I_n) = n \text{ and } \operatorname{tr}(AB) = \operatorname{tr}(BA) \\
&= \frac{\sigma^2}{n}\left(n - \operatorname{tr}(I_k)\right) \\
&= \sigma^2\,\frac{n-k}{n} && \operatorname{tr}(I_k) = k
\end{aligned}
$$

Step 3: Find its limit:

$$E(\hat\sigma^2) = \sigma^2\,\frac{n-k}{n} \to \sigma^2$$

as $n \to \infty$, since $k$ is fixed.
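
A minimal Monte Carlo sketch of this result (my addition; the design matrix, $\beta$ and $\sigma^2$ are invented for illustration): for each $n$, the average of $\hat\sigma^2$ across replications should sit close to $\sigma^2(n-k)/n$, which approaches $\sigma^2 = 2$ as $n$ grows.

    import numpy as np

    rng = np.random.default_rng(1)
    k, sigma2 = 4, 2.0
    beta = np.ones(k)
    for n in (10, 50, 500):
        Z = rng.normal(size=(n, k))  # drawn once, then treated as deterministic
        draws = []
        for _ in range(2000):
            u = rng.normal(scale=np.sqrt(sigma2), size=n)
            y = Z @ beta + u
            beta_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
            draws.append(np.mean((y - Z @ beta_hat) ** 2))
        print(n, np.mean(draws), sigma2 * (n - k) / n)

The second and third columns agree, and both tend to 2.0: the finite-sample bias $-\sigma^2 k/n$ vanishes as $n \to \infty$.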

Question 6, Problem Set 1

The conclusion of this question is simple, but extremely useful throughout the course.

Solution. Since $1(X \in A)$ only takes two values:

$$1(X \in A) = \begin{cases} 1 & \text{if } X \in A, \\ 0 & \text{otherwise,} \end{cases}$$

it is a binary variable whose expectation can be calculated in the following way:

$$E\{1(X \in A)\} = 1 \cdot P(X \in A) + 0 \cdot P(X \notin A) = P(X \in A).$$

Alternatively:

$$E\{1(X \in A)\} = \int 1(x \in A)\, dF_X(x) = \int_A dF_X(x) = P(X \in A),$$

where $F_X(x)$ is the cumulative distribution function of $X$.
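
This identity is also easy to see by simulation. A throwaway example (my own, not from the problem set; here $X$ is standard normal and $A = (1, \infty)$, so $P(X \in A) \approx 0.1587$):

    import math
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(1_000_000)
    print((x > 1).mean())                     # sample mean of the indicator 1(X > 1)
    print(0.5 * math.erfc(1 / math.sqrt(2)))  # P(X > 1) for a standard normal

The sample mean of the indicator matches the probability up to simulation noise.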

Question 1, Problem Set 2

This question shows that the usual OLS estimation of a simultaneous equations model is subject to simultaneity bias, rendering the OLS estimator inconsistent. Although this might be a simple undergraduate concept, proving it using only the primitive conditions provided in this question is non-trivial. It is a good exercise to get you started with all the bounding techniques!

Solution. The OLS formula for $\hat\beta$ is:

$$
\begin{aligned}
\hat\beta &= \frac{\sum_{i=1}^{n}(Y_i - \bar Y)(C_i - \bar C)}{\sum_{i=1}^{n}(Y_i - \bar Y)^2} \\
&= \beta + \frac{\sum_{i=1}^{n}(Y_i - \bar Y)(u_i - \bar u)}{\sum_{i=1}^{n}(Y_i - \bar Y)^2} && \text{by plugging in the first equation} \\
&= \beta + \frac{\frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)u_i}{\frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)^2} && \text{by } \tfrac{1}{n}\textstyle\sum_{i=1}^{n}\bar u(Y_i - \bar Y) = \bar u\left(\tfrac{1}{n}\sum_{i=1}^{n} Y_i - \bar Y\right) = \bar u(\bar Y - \bar Y) = 0 \\
&= \beta + \frac{A}{B}
\end{aligned}
$$

where $A = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)u_i$ and $B = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)^2$. By Slutsky's Theorem it suffices to find the probability limits of $A$ and $B$.
Notice that in the question we are not given any assumptions on $Y$, just on $I$ and $u$. Hence we should rewrite $Y_i - \bar Y$ in terms of $I$ and $u$. By plugging the first equation into the second we get the reduced form for $Y$:

$$Y_i = C_i + I_i = \alpha + \beta Y_i + u_i + I_i \;\Rightarrow\; Y_i = \frac{\alpha + u_i + I_i}{1 - \beta}$$

and $\bar Y = \frac{\alpha + \bar u + \bar I}{1 - \beta}$, so that

$$Y_i - \bar Y = \frac{1}{1-\beta}\left[(I_i - \bar I) + (u_i - \bar u)\right]$$

Since $\beta < 1$ the expressions are well-defined.


From the reduced form it is easy to see that $Y_i$ and $u_i$ will usually be correlated, violating the minimal condition that guarantees consistency of OLS estimators. This is why we expect to see inconsistency in this question.
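
To make this explicit (a one-line check added here, not part of the original solution): since the $I_i$ are non-stochastic, the reduced form gives

$$\operatorname{Cov}(Y_i, u_i) = \operatorname{Cov}\left(\frac{\alpha + u_i + I_i}{1-\beta},\, u_i\right) = \frac{\operatorname{Var}(u_i)}{1-\beta} \neq 0,$$

which matches the nonzero probability limit of $A$ found below.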

Step 1. Find the probability limit of A.

$$
\begin{aligned}
A &= \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)u_i \\
&= \frac{1}{1-\beta}\left(\frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)u_i + \frac{1}{n}\sum_{i=1}^{n}(u_i - \bar u)u_i\right) \\
&= \frac{1}{1-\beta}(D + E),
\end{aligned}
$$

where $D = \frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)u_i$ and $E = \frac{1}{n}\sum_{i=1}^{n}(u_i - \bar u)u_i$. We would like to find probability limits for $D$ and $E$. The question does not give us assumptions we can use to show convergence in probability directly; instead we are provided with moment conditions (expectations). A logical thing to do in this situation is to use the moment conditions to show convergence in $r$th mean for some convenient value of $r$, which implies convergence in probability to the same limit. We have the following options:

• Conditions for convergence in second mean:
$\hat\theta$ converges to $\theta$ in second mean if $E[(\hat\theta - \theta)^2] \to 0$ (this can usually be calculated directly).
• Conditions for convergence in first mean:
$\hat\theta$ converges to $\theta$ in first mean if $E|\hat\theta - \theta| \to 0$. This is usually difficult to calculate, except in special cases. But from Theorem 11 we know that $\frac{1}{n}\sum_{i=1}^{n} x_i$ converges to $0$ in first mean if:
(1) the $x_i$ are independent,
(2) $E(x_i) = 0$ for each $i$,
(3) the $x_i$ are uniformly integrable (U.I.).

We use the first approach to deal with $D$ and the second to find the limit of $E$.

(1) Find the probability limit of $D$.

Notice that $I_i$ is non-stochastic, so $E(D) = \frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)E(u_i) = 0$ by assumption. We can show that $D \xrightarrow{p} E(D) = 0$ by convergence in second mean.

$$
\begin{aligned}
E(D^2) &= E\left[\left(\frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)u_i\right)^2\right] \\
&= E\left[\frac{1}{n^2}\left(\sum_{i=1}^{n}(I_i - \bar I)^2 u_i^2 + \sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq i}}^{n}(I_i - \bar I)(I_j - \bar I)u_i u_j\right)\right] \\
&= \frac{1}{n^2}\sum_{i=1}^{n}(I_i - \bar I)^2\, E(u_i^2) && u_i \text{ indep., mean } 0 \\
&\le \frac{1}{n^2}\sum_{i=1}^{n}(I_i - \bar I)^2 \left(E|u_i|^{2+\delta}\right)^{\frac{2}{2+\delta}} && \text{Jensen's inequality} \\
&\le \frac{c^{\frac{2}{2+\delta}}}{n^2}\sum_{i=1}^{n}(I_i - \bar I)^2 && \text{by assumption } E|u_i|^{2+\delta} \le c \\
&= \frac{c^{\frac{2}{2+\delta}}}{n}\underbrace{\frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)^2}_{\to\, \sigma_I^2} \\
&\to 0 \text{ as } n \text{ goes to infinity}
\end{aligned}
$$

Hence, $D \xrightarrow{p} 0$.
(2) Find the probability limit of $E$.

$$E = \frac{1}{n}\sum_{i=1}^{n}(u_i - \bar u)u_i = \frac{1}{n}\sum_{i=1}^{n} u_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} u_i\right)^2 = F - G$$

where $F = \frac{1}{n}\sum_{i=1}^{n} u_i^2$ and $G = \left(\frac{1}{n}\sum_{i=1}^{n} u_i\right)^2$.

(i) By assumption, $F = \frac{1}{n}\sum_{i=1}^{n} u_i^2 \xrightarrow{p} \sigma_u^2$;
(ii) Since the $u_i$ are independent, mean zero and U.I. (this follows from $E|u_i|^{2+\delta} < \infty$), by Theorem 11

$$\frac{1}{n}\sum_{i=1}^{n} u_i \xrightarrow{\text{1st}} 0$$

so also

$$\frac{1}{n}\sum_{i=1}^{n} u_i \xrightarrow{p} 0.$$

Thus $G = \left(\frac{1}{n}\sum_{i=1}^{n} u_i\right)^2 \xrightarrow{p} 0$ as well, by Slutsky's Theorem.
(iii) To sum up, $E \xrightarrow{p} \sigma_u^2$.
(3) To conclude this step, we find $A \xrightarrow{p} \frac{\sigma_u^2}{1-\beta}$. The fact that this limit is not zero is what causes the inconsistency. Now all that remains is to show that the limit of the denominator is positive and finite.
Step 2. Find the probability limit of $B = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)^2$.

Again, we first plug in the reduced form for $Y_i$, and it follows that:

$$
\begin{aligned}
B &= \frac{1}{(1-\beta)^2}\,\frac{1}{n}\sum_{i=1}^{n}\left[(u_i - \bar u) + (I_i - \bar I)\right]^2 \\
&= \frac{1}{(1-\beta)^2}\,\frac{1}{n}\sum_{i=1}^{n}\left[(u_i - \bar u)^2 + 2(u_i - \bar u)(I_i - \bar I) + (I_i - \bar I)^2\right] \\
&= \frac{1}{(1-\beta)^2}\left\{\left[\frac{1}{n}\sum_{i=1}^{n}(u_i - \bar u)u_i\right] + 2\left[\frac{1}{n}\sum_{i=1}^{n} u_i(I_i - \bar I)\right] + \left[\frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)^2\right]\right\} \\
&= \frac{1}{(1-\beta)^2}\left\{E + 2D + \frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)^2\right\} \\
&\xrightarrow{p} \frac{1}{(1-\beta)^2}\left(\sigma_u^2 + 0 + \sigma_I^2\right).
\end{aligned}
$$

The last step recycles some of the work we have done before and uses the assumption given in the question regarding the limit of $\frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)^2$.

Step 3. Combining all results, we find

$$
\hat\beta \xrightarrow{p} \beta + \frac{\dfrac{\sigma_u^2}{1-\beta}}{\dfrac{1}{(1-\beta)^2}\left(\sigma_u^2 + \sigma_I^2\right)} = \beta + \frac{(1-\beta)\,\sigma_u^2}{\sigma_u^2 + \sigma_I^2} \neq \beta,
$$

which proves that the OLS estimator is inconsistent.
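
A simulation sketch of this inconsistency (my addition; all parameter values are invented, and $I$ is drawn once so it can be treated as a fixed sequence): the OLS slope settles near $\beta + (1-\beta)\sigma_u^2/(\sigma_u^2 + \sigma_I^2)$, not near $\beta$.

    import numpy as np

    rng = np.random.default_rng(3)
    alpha, beta, sigma_u = 1.0, 0.6, 1.0
    n = 200_000
    I = rng.uniform(0.0, 4.0, size=n)  # fixed investment sequence; its variance plays sigma_I^2
    u = rng.normal(scale=sigma_u, size=n)
    Y = (alpha + u + I) / (1 - beta)   # reduced form
    C = alpha + beta * Y + u           # first equation

    beta_hat = ((Y - Y.mean()) * (C - C.mean())).sum() / ((Y - Y.mean()) ** 2).sum()
    plim = beta + (1 - beta) * sigma_u**2 / (sigma_u**2 + I.var())
    print(beta_hat, plim)              # both ≈ 0.77, well away from beta = 0.6

No matter how large $n$ gets, $\hat\beta$ stays near the probability limit derived above rather than the true $\beta$.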
