
University of Illinois Fall 2012

ECE 313: Problem Set 8: Problems and Solutions


Moments of jointly distributed random variables, minimum mean square
error estimation
Due: Wednesday December 5 at 4 p.m.
Reading: 313 Course Notes Sections 4.8-4.9

1. [Covariance I]
Consider random variables X and Y on the same probability space.

(a) If Var(X + 2Y) = 40 and Var(X − 2Y) = 20, what is Cov(X, Y)?


Solution:

Var(X + 2Y) = Cov(X + 2Y, X + 2Y) = Var(X) + 4Var(Y) + 4Cov(X, Y) = 40

Similarly, Var(X − 2Y) = Cov(X − 2Y, X − 2Y) = Var(X) + 4Var(Y) − 4Cov(X, Y) = 20.

Taking the difference of the two equations describing Var(X + 2Y) and Var(X − 2Y)
yields 8Cov(X, Y) = 20, so Cov(X, Y) = 2.5.
(b) In part (a), determine ρ_{X,Y} if Var(X) = 2·Var(Y).
Solution: Adding the two equations describing Var(X + 2Y) and Var(X − 2Y), we get

2Var(X) + 8Var(Y) = 60.

Substituting Var(X) = 2Var(Y) gives 12Var(Y) = 60. Hence, Var(Y) = 5, Var(X) = 10, and

ρ_{X,Y} = Cov(X, Y) / √(Var(X)Var(Y)) = 2.5/√50 ≈ 0.3536.
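As a quick numerical check (an editor's Python sketch, restating only the given data), the arithmetic of parts (a) and (b) can be reproduced directly:

# Problem 1: solve for Cov(X, Y) and rho from the two variance equations.
var_sum, var_diff = 40, 20          # Var(X + 2Y) and Var(X - 2Y)

cov_xy = (var_sum - var_diff) / 8   # difference of equations: 8 Cov = 20
var_y = (var_sum + var_diff) / 12   # sum, with Var(X) = 2 Var(Y): 12 Var(Y) = 60
var_x = 2 * var_y

rho = cov_xy / (var_x * var_y) ** 0.5
print(cov_xy, var_x, var_y, round(rho, 4))   # 2.5 10.0 5.0 0.3536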

2. [Covariance II]
Suppose X and Y are random variables on some probability space.

(a) If Var(X + 2Y) = Var(X − 2Y), are X and Y uncorrelated?


Solution: Expanding each side of Var(X + 2Y) = Var(X − 2Y) yields
Var(X) + 4Cov(X, Y) + 4Var(Y) = Var(X) − 4Cov(X, Y) + 4Var(Y), implying that
Cov(X, Y) = 0. Hence, X and Y are uncorrelated.
(b) If Var(X) = Var(Y), are X and Y uncorrelated?
Solution: No. The condition Var(X) = Var(Y) does not imply that Cov(X, Y) = 0. For example, if Y = X with Var(X) > 0, then Var(X) = Var(Y) but Cov(X, Y) = Var(X) ≠ 0.
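Both answers are easy to see numerically. The sketch below (assuming NumPy; Y = X is the equal-variance counterexample for part (b)) shows that when Cov(X, Y) ≠ 0 the two variances in part (a) differ:

import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 500_000)
y = x.copy()                    # Var(Y) = Var(X), but Cov(X, Y) = Var(X) != 0

print(np.var(x + 2*y), np.var(x - 2*y))   # ≈ 9 and 1: unequal, so correlated
print(np.cov(x, y)[0, 1])                 # ≈ 1, not 0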

3. [Covariance III]
Rewrite the expressions below in terms of Var(X), Var(Y), Var(Z), and Cov(X, Y).

(a) Cov(3X + 2, 5Y − 1)
Solution: Cov(3X + 2, 5Y − 1) = Cov(3X, 5Y) = 15Cov(X, Y).
(b) Cov(2X + 1, X + 5Y − 1)
Solution:

Cov(2X + 1, X + 5Y − 1) = Cov(2X, X + 5Y) = Cov(2X, X) + Cov(2X, 5Y)
= 2Cov(X, X) + 10Cov(X, Y) = 2Var(X) + 10Cov(X, Y)

(c) Cov(2X + 3Z, Y + 2Z), where Z is uncorrelated with both X and Y.

Solution:

Cov(2X + 3Z, Y + 2Z) = Cov(2X, Y) + Cov(2X, 2Z) + Cov(3Z, Y) + Cov(3Z, 2Z)
= 2Cov(X, Y) + 4Cov(X, Z) + 3Cov(Z, Y) + 6Cov(Z, Z)
= 2Cov(X, Y) + 6Var(Z),

since Cov(X, Z) = Cov(Z, Y) = 0.
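These bilinearity rules can be sanity-checked by simulation. In the sketch below (assuming NumPy; the means, variances, and correlation are arbitrary test values), Z is drawn independently of X and Y, hence uncorrelated with both, and the sample covariance is compared against 2Cov(X, Y) + 6Var(Z):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X and Y correlated with Cov(X, Y) = 0.7; Z independent with Var(Z) = 4.
cov_xy = np.array([[2.0, 0.7],
                   [0.7, 1.5]])
x, y = rng.multivariate_normal([0.0, 0.0], cov_xy, size=n).T
z = rng.normal(0.0, 2.0, size=n)

lhs = np.cov(2*x + 3*z, y + 2*z)[0, 1]   # sample Cov(2X + 3Z, Y + 2Z)
rhs = 2 * 0.7 + 6 * 4.0                  # 2 Cov(X, Y) + 6 Var(Z) = 25.4
print(lhs, rhs)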

4. [Covariance IV]
Random variables X1 and X2 represent two observations of a signal corrupted by noise. They
have the same mean µ and variance σ². The signal-to-noise ratio (SNR) of the observation
X1 or X2 is defined as the ratio SNR_X = µ²/σ². A system designer chooses the averaging
strategy, whereby she constructs a new random variable S = (X1 + X2)/2.

(a) Show that the SNR of S is twice that of the individual observations, if X1 and X2 are
uncorrelated.
Solution: In general, for S = (X1 + X2)/2,

E[S] = µ_S = E[(X1 + X2)/2] = µ

σ_S² = Var(X1 + X2)/4 = (2σ² + 2Cov(X1, X2))/4 = (σ² + Cov(X1, X2))/2

SNR_S = µ_S²/σ_S² = 2µ²/(σ² + Cov(X1, X2))

Thus, if X1 and X2 are uncorrelated, SNR_S = 2µ²/σ² = 2·SNR_X. Hence, averaging improves the SNR by a factor equal to the number of observations being averaged, if the observations are uncorrelated.
(b) The system designer notices that the averaging strategy is giving SNR_S = 1.5·SNR_X.
She correctly assumes that the observations X1 and X2 are correlated. Determine the
value of the correlation coefficient ρ_{X1,X2}.
Solution: Since Cov(X1, X2) = σ²ρ_{X1,X2}, the formula above for SNR_S is equivalent to

SNR_S = 2µ²/(σ²(1 + ρ_{X1,X2})).

Setting SNR_S equal to 1.5µ²/σ² yields ρ_{X1,X2} = 1/3.
(c) Under what condition on ρ_{X1,X2} can the averaging strategy result in an SNR_S that is
arbitrarily high?
Solution: SNR_S → ∞ as ρ_{X1,X2} → −1.
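The formula for SNR_S is easy to confirm by simulation. The sketch below (assuming NumPy; µ = 3 and σ = 1 are arbitrary test values) uses ρ = 1/3 from part (b) and should report a ratio near 1.5:

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, rho, n = 3.0, 1.0, 1/3, 1_000_000

# Two observations with common mean and variance, correlation rho.
cov = sigma**2 * np.array([[1.0, rho],
                           [rho, 1.0]])
x1, x2 = rng.multivariate_normal([mu, mu], cov, size=n).T
s = (x1 + x2) / 2

snr_x = mu**2 / sigma**2
snr_s = s.mean()**2 / s.var()
print(snr_s / snr_x)                       # ≈ 2 / (1 + rho) = 1.5
print(2 * mu**2 / (sigma**2 * (1 + rho)))  # theoretical SNR_S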
5. [Linear minimum MSE estimation from uncorrelated observations]
Suppose Y is estimated by a linear estimator, L(X1, X2) = a + bX1 + cX2, where X1 and
X2 have mean zero and are uncorrelated with each other.
(a) Determine a, b and c to minimize the MSE, E[(Y − (a + bX1 + cX2))²]. Express your
answer in terms of E[Y], the variances of X1 and X2, and the covariances Cov(Y, X1)
and Cov(Y, X2).
Solution: The MSE can be written as E[((Y − bX1 − cX2) − a)²], which is the same as
the MSE for estimation of Y − bX1 − cX2 by the constant a. The optimal choice of a is
E[Y − bX1 − cX2] = E[Y]. Substituting a = E[Y], the MSE satisfies

MSE = Var(Y − bX1 − cX2)
    = Cov(Y − bX1 − cX2, Y − bX1 − cX2)
    = Cov(Y, Y) + b²Cov(X1, X1) − 2bCov(Y, X1) + c²Cov(X2, X2) − 2cCov(Y, X2)
    = Var(Y) + b²Var(X1) − 2bCov(Y, X1) + c²Var(X2) − 2cCov(Y, X2),    (1)

where the cross term 2bc·Cov(X1, X2) vanishes because X1 and X2 are uncorrelated.
The MSE is quadratic in b and c, and the minimizers are easily found to be
b = Cov(Y, X1)/Var(X1) and c = Cov(Y, X2)/Var(X2). Thus,

L(X1, X2) = E[Y] + (Cov(Y, X1)/Var(X1))·X1 + (Cov(Y, X2)/Var(X2))·X2.
(b) Express the MSE for the estimator found in part (a) in terms of the variances of X1,
X2, and Y and the covariances Cov(Y, X1) and Cov(Y, X2).
Solution: Substituting the values of b and c found above into (1) yields

MSE = Var(Y) − Cov(Y, X1)²/Var(X1) − Cov(Y, X2)²/Var(X2).
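The closed-form coefficients and MSE can be checked against a simulated model. In the sketch below (assuming NumPy; the test model Y = 2X1 − X2 + 5 + noise is an arbitrary choice, not from the problem), the sample-based a, b, c and both MSE expressions should agree:

import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

x1 = rng.normal(0.0, 1.0, n)     # mean zero, Var(X1) = 1
x2 = rng.normal(0.0, 2.0, n)     # mean zero, Var(X2) = 4, independent of X1
y = 2*x1 - x2 + 5.0 + rng.normal(0.0, 1.0, n)

a = y.mean()                               # a = E[Y] ≈ 5
b = np.cov(y, x1)[0, 1] / x1.var()         # b = Cov(Y, X1)/Var(X1) ≈ 2
c = np.cov(y, x2)[0, 1] / x2.var()         # c = Cov(Y, X2)/Var(X2) ≈ -1

mse_direct = np.mean((y - (a + b*x1 + c*x2))**2)
mse_formula = (y.var() - np.cov(y, x1)[0, 1]**2 / x1.var()
                       - np.cov(y, x2)[0, 1]**2 / x2.var())
print(b, c, mse_direct, mse_formula)       # both MSEs ≈ 1 (the noise variance)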

6. [An estimation problem]


Suppose X and Y have the following joint pdf:

fX,Y(u, v) = 8uv/15⁴ for u ≥ 0, v ≥ 0, u² + v² ≤ 15², and fX,Y(u, v) = 0 else.

(a) Find the constant estimator, δ*, of Y with the smallest mean square error (MSE), and
find the MSE.
Solution: We know δ* = E[Y], and the resulting MSE is Var(Y). We could directly
compute the first and second moments of Y, but it is about the same amount of work if fY
is found first, so we find fY. The support of fY is [0, 15]. For 0 ≤ v ≤ 15,

fY(v) = ∫_0^√(225−v²) (8uv/15⁴) du = (4u²v/15⁴) |_{u=0}^{√(225−v²)} = (4v/225)(1 − v²/225).

Thus,

δ* = E[Y] = ∫_0^15 (4v²/225)(1 − v²/225) dv = 8,

and

E[Y²] = ∫_0^15 (4v³/225)(1 − v²/225) dv = 75,

so MSE(using δ*) = Var(Y) = 75 − 8² = 11.
(b) Find the unconstrained estimator, g*(X), of Y based on observing X, with the smallest
MSE, and find the MSE.
Solution: We know g*(u) = E[Y | X = u]. To compute g* we thus need to find fY|X(v|u).
By symmetry, X and Y have the same distribution, so

fX(u) = fY(u) = (4u/225)(1 − u²/225) for 0 ≤ u ≤ 15, and fX(u) = 0 else.

Thus, fY|X(v|u) is well defined for 0 ≤ u ≤ 15. For such u,

fY|X(v|u) = fX,Y(u, v)/fX(u) = 2v/(225 − u²) for 0 ≤ v ≤ √(225 − u²), and 0 else.

That is, for u fixed, the conditional pdf of Y has a triangular shape over the interval
[0, √(225 − u²)]. Thus, for 0 ≤ u ≤ 15,

g*(u) = ∫_0^√(225−u²) 2v²/(225 − u²) dv = (2/3)√(225 − u²).

To compute the MSE for g* we find

E[g*(X)²] = ∫_0^15 g*(u)² fX(u) du = ∫_0^15 (4(225 − u²)/9)(4u/225)(1 − u²/225) du = 200/3.

Therefore, MSE(using g*) = E[Y²] − E[g*(X)²] = 25/3 = 8.333…

(c) Find the linear estimator, L*(X), of Y based on observing X, with the smallest MSE,
and find the MSE. (Hint: You may use the fact E[XY] = 75π/4 ≈ 58.904, which can be
derived using integration in polar coordinates.)
Solution: Using the hint, Cov(X, Y) = E[XY] − E[X]E[Y] = 75π/4 − 64 ≈ −5.0951.
Thus,

L*(u) = E[Y] + (Cov(X, Y)/Var(X))(u − E[X]) = 8 − (0.4632)(u − 8)

and

MSE(using L*) = Var(Y) − Cov(X, Y)²/Var(X) = 8.6400.
The three estimators are shown in the plot:

[Figure: δ*, g*(u), and L*(u) plotted versus u; not reproduced here.]
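All three MSEs can be verified by Monte Carlo. In polar coordinates the joint pdf factors as (4r³/15⁴)·sin(2θ), so R and Θ are independent with CDFs (r/15)⁴ and sin²θ, and inverting these gives an exact sampler. The sketch below (assuming NumPy) compares the empirical MSEs with 11, 25/3 ≈ 8.333, and 8.64:

import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000

# Exact sampler via inverse CDFs in polar coordinates:
r = 15 * rng.uniform(size=n) ** 0.25             # P(R <= r) = (r/15)^4
theta = np.arcsin(np.sqrt(rng.uniform(size=n)))  # P(Theta <= t) = sin(t)^2
x, y = r * np.cos(theta), r * np.sin(theta)

print(y.mean(), y.var())                 # ≈ 8 and 11

mse_const = np.mean((y - 8.0)**2)        # delta* = 8       -> ≈ 11
g = 2 * np.sqrt(225 - x**2) / 3          # g*(u)            -> ≈ 25/3
mse_g = np.mean((y - g)**2)
L = 8 - 0.4632 * (x - 8)                 # L*(u) from (c)   -> ≈ 8.64
mse_L = np.mean((y - L)**2)
print(mse_const, mse_g, mse_L)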
