Econometric Analysis MT Official Problem Set Solution 2
KAMILA NOWAKOWICZ
This question gives you more practice with regression analysis in matrix form. It uses a common trick involving the trace operator.
The conditions in this question are the following: $y = Z\beta + u$, where $u = (u_1, \cdots, u_n)'$ and the $u_i$ are iid $(0, \sigma^2)$, so that $u \sim (0, \sigma^2 I)$. The regressors $Z$ are deterministic and $\hat{M} = \frac{Z'Z}{n} \to M > 0$.
We want to show that $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - z_i'\hat\beta)^2$ is asymptotically unbiased, i.e. that it satisfies
$$\lim_{n\to\infty} E(\hat\theta_n) = \theta$$
with $\hat\theta_n = \hat\sigma^2$ and $\theta = \sigma^2$.
Notice that showing asymptotic unbiasedness does not involve any probabilistic argument.
Solution.
Step 1: Write $\hat\sigma^2$ in matrix form.
Define $\hat u_i = y_i - z_i'\hat\beta$ and $\hat u = y - Z\hat\beta = (\hat u_1, \cdots, \hat u_n)'$. Then
$$\hat u = y - Z(Z'Z)^{-1}Z'y = \underbrace{\big(I - Z(Z'Z)^{-1}Z'\big)}_{\equiv M_Z}\, y = M_Z(Z\beta + u) = M_Z u.$$
[Footnote: These solutions are adapted from solutions by Chen Qiu, which were based on Prof Hidalgo's notes and solutions for EC484. Their aim is to fill some gaps between the notes and exercises. Prof Hidalgo's notes should always be the final reference. If you spot any mistakes in this file, please contact me: K.Nowakowicz@lse.ac.uk.]
The matrix $M_Z$ has three useful properties:
• $M_Z$ is symmetric ($M_Z' = M_Z$):
$$M_Z' = \big(I - Z(Z'Z)^{-1}Z'\big)' = I - Z\big((Z'Z)'\big)^{-1}Z' = I - Z(Z'Z)^{-1}Z' = M_Z$$
• $M_Z$ is idempotent ($M_Z M_Z = M_Z$):
$$M_Z M_Z = \big(I - Z(Z'Z)^{-1}Z'\big)\big(I - Z(Z'Z)^{-1}Z'\big) = I - 2Z(Z'Z)^{-1}Z' + Z(Z'Z)^{-1}Z'Z(Z'Z)^{-1}Z' = I - Z(Z'Z)^{-1}Z' = M_Z$$
• $M_Z$ annihilates $Z$ ($M_Z Z = 0$):
$$M_Z Z = \big(I - Z(Z'Z)^{-1}Z'\big)Z = Z - Z(Z'Z)^{-1}Z'Z = Z - Z = 0$$
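As a quick numerical sanity check of these three properties (not part of the formal argument; the design matrix below is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
Z = rng.normal(size=(n, k))           # arbitrary full-column-rank design matrix (illustrative)

M_Z = np.eye(n) - Z @ np.linalg.inv(Z.T @ Z) @ Z.T

print(np.allclose(M_Z, M_Z.T))        # symmetry:     M_Z' = M_Z
print(np.allclose(M_Z @ M_Z, M_Z))    # idempotency:  M_Z M_Z = M_Z
print(np.allclose(M_Z @ Z, 0))        # annihilation: M_Z Z = 0
```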
We used the last property to show $M_Z y = M_Z u$ above. We can use the other two properties to show:
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - z_i'\hat\beta)^2 = \frac{1}{n}\sum_{i=1}^{n}\hat u_i^2 = \frac{1}{n}\hat u'\hat u = \frac{1}{n}(M_Z u)'(M_Z u) = \frac{1}{n}u'M_Z'M_Z u = \frac{1}{n}u'M_Z u.$$
Step 2: Take expectations using the trace trick. Since $u'M_Z u$ is a scalar it equals its own trace, and the trace is invariant to cyclical permutations, so (using that $Z$, and hence $M_Z$, is deterministic)
$$E(\hat\sigma^2) = \frac{1}{n}E(u'M_Z u) = \frac{1}{n}E\{\mathrm{tr}(u'M_Z u)\} = \frac{1}{n}E\{\mathrm{tr}(M_Z u u')\} = \frac{1}{n}\mathrm{tr}\{M_Z E(u u')\} = \frac{\sigma^2}{n}\mathrm{tr}(M_Z),$$
where
$$\mathrm{tr}(M_Z) = \mathrm{tr}(I_n) - \mathrm{tr}\{Z(Z'Z)^{-1}Z'\} = n - \mathrm{tr}\{(Z'Z)^{-1}Z'Z\} = n - k.$$
Therefore
$$E(\hat\sigma^2) = \sigma^2\,\frac{n-k}{n} \to \sigma^2$$
as $n \to \infty$ since $k$ is fixed, so $\hat\sigma^2$ is asymptotically unbiased.
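A small Monte Carlo sketch of this result (the values of $n$, $k$, $\sigma^2$ and the normal error distribution are illustrative choices, not assumptions of the question): the simulated mean of $\hat\sigma^2$ should track $\sigma^2(n-k)/n$ and approach $\sigma^2$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
k, sigma2, reps = 3, 2.0, 10_000          # illustrative values

for n in (10, 50, 500):
    Z = rng.normal(size=(n, k))           # one fixed design per n (deterministic regressors)
    M = np.eye(n) - Z @ np.linalg.inv(Z.T @ Z) @ Z.T
    # Since u_hat = M_Z u regardless of beta, sigma2_hat = u' M_Z u / n can be
    # simulated directly from the errors.
    U = rng.normal(scale=np.sqrt(sigma2), size=(reps, n))
    sig2_hat = ((U @ M) * U).sum(axis=1) / n
    print(n, sig2_hat.mean().round(4), round(sigma2 * (n - k) / n, 4))
```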
The conclusion of this question is simple, but extremely useful throughout the course.
$$E\{\mathbf{1}(X \in A)\} = 1 \cdot P(X \in A) + 0 \cdot P(X \notin A) = P(X \in A).$$
Alternatively:
$$E\{\mathbf{1}(X \in A)\} = \int \mathbf{1}(x \in A)\, dF_X(x) = \int_A dF_X(x) = P(X \in A).$$
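An illustrative check by simulation, taking $X \sim N(0,1)$ and $A = (0, 1]$ (both chosen only for this example): the sample mean of the indicator approximates $P(X \in A)$.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)           # X ~ N(0, 1), illustrative choice
indicator = (0.0 < x) & (x <= 1.0)           # 1(X in A) with A = (0, 1]

print(indicator.mean())                      # sample analogue of E{1(X in A)}
print(0.5 * math.erf(1.0 / math.sqrt(2.0)))  # P(X in A) = Phi(1) - Phi(0) ~ 0.3413
```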
This question shows that the usual OLS estimation of simultaneous equations is generally subject to simultaneity bias, rendering the OLS estimator inconsistent. Although this might be a simple undergraduate concept, proving it using only the primitive conditions provided in this question is non-trivial. It is a good exercise to get you started with all the bounding techniques!
Applying OLS to the first equation, $C_i = \alpha + \beta Y_i + u_i$, the slope estimator can be written as
$$\hat\beta = \beta + \frac{\frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)u_i}{\frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)^2} \qquad \Big(\text{using } \tfrac{1}{n}\textstyle\sum_{i=1}^{n}\bar u(Y_i - \bar Y) = \bar u\,\tfrac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y) = \bar u(\bar Y - \bar Y) = 0\Big)$$
$$= \beta + \frac{A}{B},$$
where $A = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)u_i$ and $B = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)^2$. By Slutsky's Theorem it suffices to find the probability limits of $A$ and $B$.
Notice that in the question we are not given any assumptions on $Y$, just on $I$ and $u$. Hence we should rewrite $Y_i - \bar Y$ in terms of $I$ and $u$. By plugging the first equation into the second we get the reduced form for $Y$:
$$Y_i = C_i + I_i = \alpha + \beta Y_i + u_i + I_i \quad\Rightarrow\quad Y_i = \frac{\alpha + u_i + I_i}{1 - \beta},$$
and $\bar Y = \frac{\alpha + \bar u + \bar I}{1 - \beta}$, so that
$$Y_i - \bar Y = \frac{1}{1-\beta}\big[(I_i - \bar I) + (u_i - \bar u)\big].$$
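As a small numerical check (the values of $\alpha$, $\beta$ and the distributions of $I_i$ and $u_i$ below are arbitrary illustrative choices), the reduced form and the expression for $Y_i - \bar Y$ are consistent with the two structural equations:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 1.0, 0.6                     # illustrative parameter values
n = 1_000
I = rng.uniform(1.0, 2.0, size=n)          # illustrative series standing in for the fixed I_i
u = rng.normal(size=n)

Y = (alpha + u + I) / (1 - beta)           # reduced form for Y_i
C = alpha + beta * Y + u                   # first equation

print(np.allclose(Y, C + I))               # second equation Y_i = C_i + I_i holds
lhs = Y - Y.mean()
rhs = ((I - I.mean()) + (u - u.mean())) / (1 - beta)
print(np.allclose(lhs, rhs))               # Y_i - Ybar = [(I_i - Ibar) + (u_i - ubar)] / (1 - beta)
```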
Step 1. Find the probability limit of $A$.
$$A = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)u_i = \frac{1}{1-\beta}\left(\frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)u_i + \frac{1}{n}\sum_{i=1}^{n}(u_i - \bar u)u_i\right) = \frac{1}{1-\beta}(D + E),$$
where $D = \frac{1}{n}\sum_{i=1}^{n}(I_i - \bar I)u_i$ and $E = \frac{1}{n}\sum_{i=1}^{n}(u_i - \bar u)u_i$. We would like to find probability limits for $D$ and $E$.
The question does not give us assumptions we can use to show convergence in probability directly; instead, we are provided with moment conditions (expectations). A logical thing to do in this situation is to use the moment conditions to show convergence in $r$th mean for some convenient value of $r$, which implies convergence in probability to the same limit. We have the following options:
We use the first approach to deal with D and the second to find the limit of E.
(1) Find the probability limit of $D$. Using the first approach, the relevant moment of $D$ can be shown to tend to $0$ as $n$ goes to infinity. Hence, $D \xrightarrow{p} 0$.
(2) Find the probability limit of E.
$$E = \frac{1}{n}\sum_{i=1}^{n}(u_i - \bar u)u_i = \frac{1}{n}\sum_{i=1}^{n}u_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n}u_i\right)^2 = F - G,$$
where $F = \frac{1}{n}\sum_{i=1}^{n}u_i^2$ and $G = \left(\frac{1}{n}\sum_{i=1}^{n}u_i\right)^2$.
(i). By assumption, $F = \frac{1}{n}\sum_{i=1}^{n}u_i^2 \xrightarrow{p} \sigma_u^2$;
(ii). Since the $u_i$ are independent, mean zero and U.I. (which follows from $E(|u_i|^{2+\delta}) < \infty$), by Theorem 11
$$\frac{1}{n}\sum_{i=1}^{n}u_i \xrightarrow{\text{1st}} 0,$$
so also
$$\frac{1}{n}\sum_{i=1}^{n}u_i \xrightarrow{p} 0.$$
Thus $G = \left(\frac{1}{n}\sum_{i=1}^{n}u_i\right)^2 \xrightarrow{p} 0$ as well, by Slutsky's Theorem.
(iii). To sum up, $E \xrightarrow{p} \sigma_u^2$.
(3) To conclude this step, we find $A \xrightarrow{p} \frac{\sigma_u^2}{1-\beta}$. The fact that this limit is not zero is what causes inconsistency. Now all that remains is to show that the limit of the denominator is positive and finite.
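As a quick numerical illustration of this point (parameter values and distributions are illustrative choices, not taken from the question), $A$ settles near $\sigma_u^2/(1-\beta)$ rather than near zero as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, sigma_u = 1.0, 0.6, 1.5       # illustrative values
for n in (100, 10_000, 1_000_000):
    I = rng.uniform(1.0, 2.0, size=n)      # stand-in for the fixed I_i sequence
    u = rng.normal(scale=sigma_u, size=n)
    Y = (alpha + u + I) / (1 - beta)       # reduced form
    A = np.mean((Y - Y.mean()) * u)
    print(n, round(A, 4), round(sigma_u**2 / (1 - beta), 4))
```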
Step 2. Find the probability limit of $B = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar Y)^2$.
Again, we first plug in the reduced form for $Y_i$, and it follows that:
$$\begin{aligned}
B &= \frac{1}{(1-\beta)^2}\,\frac{1}{n}\sum_{i=1}^{n}\big(u_i - \bar u + I_i - \bar I\big)^2 \\
&= \frac{1}{(1-\beta)^2}\,\frac{1}{n}\sum_{i=1}^{n}\Big[(u_i - \bar u)^2 + 2(u_i - \bar u)\big(I_i - \bar I\big) + \big(I_i - \bar I\big)^2\Big] \\
&= \frac{1}{(1-\beta)^2}\left\{\left[\frac{1}{n}\sum_{i=1}^{n}(u_i - \bar u)u_i\right] + 2\left[\frac{1}{n}\sum_{i=1}^{n}u_i\big(I_i - \bar I\big)\right] + \left[\frac{1}{n}\sum_{i=1}^{n}\big(I_i - \bar I\big)^2\right]\right\} \\
&= \frac{1}{(1-\beta)^2}\left\{E + 2D + \frac{1}{n}\sum_{i=1}^{n}\big(I_i - \bar I\big)^2\right\} \\
&\xrightarrow{p}\ \frac{1}{(1-\beta)^2}\big(\sigma_u^2 + 0 + \sigma_I^2\big).
\end{aligned}$$
The last step recycles some of the work we have done before and uses the assumption given in the question regarding the limit of $\frac{1}{n}\sum_{i=1}^{n}\big(I_i - \bar I\big)^2$.
Combining Steps 1 and 2, Slutsky's Theorem gives
$$\hat\beta \xrightarrow{p} \beta + \frac{\sigma_u^2/(1-\beta)}{\big(\sigma_u^2 + \sigma_I^2\big)/(1-\beta)^2} = \beta + \frac{(1-\beta)\,\sigma_u^2}{\sigma_u^2 + \sigma_I^2} \neq \beta,$$
so the OLS estimator of $\beta$ is inconsistent.
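A simulation sketch of this limit (parameter values and the distributions of $u_i$ and $I_i$ are illustrative choices; the question only restricts their moments):

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, beta, sigma_u, sigma_I = 1.0, 0.6, 1.5, 1.0    # illustrative values
n = 1_000_000
I = rng.normal(loc=5.0, scale=sigma_I, size=n)        # stand-in for fixed I_i with (1/n) sum (I_i - Ibar)^2 ~ sigma_I^2
u = rng.normal(scale=sigma_u, size=n)

Y = (alpha + u + I) / (1 - beta)                      # reduced form
C = alpha + beta * Y + u                              # first equation

beta_hat = np.cov(Y, C, bias=True)[0, 1] / np.var(Y)  # OLS slope from regressing C on Y
print(beta_hat)                                       # close to the probability limit below, not to beta
print(beta + (1 - beta) * sigma_u**2 / (sigma_u**2 + sigma_I**2))
```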