5_RV-II
Probability density function (pdf)
[Figure: pdf fX(x) of Unif[a, b], constant at height 1/(b − a) for a ≤ x ≤ b]
Popular continuous r.v.s
Example
∙ Let X ∼ Exp(λ) be the customer service time at a post office (in minutes).
The person ahead of you has been served for over t minutes. What is
the probability that you will wait over s more minutes before being served?
∙ The probability that you have waited > t minutes is P{X > t} = ∫_t^∞ λe^{−λx} dx = e^{−λt}
We want to find P{X > t + s | X > t}
By the definition of conditional probability,
P{X > t + s | X > t} = P{X > t + s, X > t} / P{X > t}
= P{X > t + s} / P{X > t}, since {X > t + s} ⊆ {X > t}
= e^{−λ(t+s)} / e^{−λt} = e^{−λs}
Hence, the probability of waiting > s more minutes given that you have already waited
> t minutes is equal to the unconditional probability of waiting > s minutes!
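The memoryless calculation above can be checked by simulation; a minimal sketch, where the rate and waiting times are illustrative choices (not values from the slides):

```python
import random

random.seed(0)
lam, t, s, n = 0.1, 5.0, 5.0, 200_000

# Draw n exponential samples with rate lam (mean 1/lam).
samples = [random.expovariate(lam) for _ in range(n)]

# Empirical P{X > t + s | X > t} versus empirical P{X > s}.
survived_t = [x for x in samples if x > t]
p_cond = sum(x > t + s for x in survived_t) / len(survived_t)
p_uncond = sum(x > s for x in samples) / n

print(p_cond, p_uncond)  # both ≈ e^{-lam*s}
```

Both estimates agree up to sampling noise, matching the e^{−λs} result derived above.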
Exponential is memoryless
∙ In general, for X ∼ Exp(λ): P{X > t + s | X > t} = P{X > s} for all t, s ≥ 0
∙ The mean (expected value) of a continuous r.v. X is defined as
E(X) = ∫_{−∞}^{∞} x fX(x) dx
This has the interpretation of the center of mass for a mass density
∙ The second moment (average power) is defined as
E(X²) = ∫_{−∞}^{∞} x² fX(x) dx
Mean and variance of continuous r.v.s
Another very popular continuous r.v.
∙ The Gaussian (normal) r.v. X ∼ N(μ, σ²) has pdf
fX(x) = (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)}
where μ is the mean and σ² is the variance (Johann Carl Friedrich Gauss, 1777–1855)
[Figure: bell-shaped Gaussian pdf fX(x), centered at x = μ]
∙ For a discrete r.v., E(X) = Σ_{x∈X} x pX(x)
∙ Popular r.v.s, with their pmf/pdf, mean, and variance:
Binom(n, p): pX(k) = (n choose k) p^k (1 − p)^{n−k}, k = 0, 1, …, n; mean np, variance np(1 − p)
Poisson(λ): pX(k) = (λ^k / k!) e^{−λ}, k = 0, 1, …; mean λ, variance λ
Unif[a, b]: fX(x) = 1/(b − a), x ∈ [a, b]; mean (a + b)/2, variance (b − a)²/12
N(μ, σ²): fX(x) = (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)}; mean μ, variance σ²
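One way to sanity-check the mean and variance columns is to compare them against empirical moments of simulated samples; a sketch for the Unif[a, b] and N(μ, σ²) rows, with illustrative parameter values:

```python
import random
import statistics

random.seed(1)
n = 100_000
a, b = 2.0, 5.0        # illustrative parameters for Unif[a, b]
mu, sigma = 1.0, 2.0   # illustrative parameters for N(mu, sigma^2)

u = [random.uniform(a, b) for _ in range(n)]
g = [random.gauss(mu, sigma) for _ in range(n)]

# Empirical moments should match the table: (a+b)/2, (b-a)^2/12 and mu, sigma^2.
print(statistics.fmean(u), statistics.pvariance(u))  # ≈ 3.5, 0.75
print(statistics.fmean(g), statistics.pvariance(g))  # ≈ 1.0, 4.0
```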
Mixed random variables
∙ Many real-world r.v.s are mixed, i.e., have discrete and continuous components
∙ Example: A packet arrives at a router in a communication network.
If the input buffer is empty (probability p), the packet is serviced immediately.
Otherwise the packet must wait for a random continuous amount of time
Define the r.v. X to be the packet service time
X is neither discrete nor continuous; how do we specify it?
∙ We can use the cumulative distribution function (cdf ) to specify any r.v.
Cumulative distribution function (cdf)
∙ The cdf of a r.v. X is defined as FX(x) = P{X ≤ x}
∙ Like the pmf, the cdf is the probability of an event, hence 0 ≤ FX(x) ≤ 1
∙ The normalization axiom implies that FX(∞) = 1 and FX(−∞) = 0
[Figure: a nondecreasing cdf FX(x), with points a and b marked on the x-axis]
∙ FX (x) is monotonically nondecreasing, i.e., if b > a then FX (b) ≥ FX (a)
∙ The probability of any event can be computed from the cdf, e.g.,
P{X ∈ (a, b]} = P{a < X ≤ b}
= P{X ≤ b} − P{X ≤ a} (additivity)
= FX (b) − FX (a)
∙ If a r.v. X is continuous with pdf fX(x), then its cdf is
FX(x) = P{X ≤ x} = ∫_{−∞}^{x} fX(α) dα
∙ In fact the precise way to define a continuous r.v. is that its cdf is continuous
Cdf of popular continuous r.v.s
∙ Uniform: X ∼ Unif[a, b]
FX(x) = 0 for x < a
FX(x) = ∫_a^x 1/(b − a) dα = (x − a)/(b − a) for a ≤ x ≤ b
FX(x) = 1 for x ≥ b
[Figure: pdf fX(x) at height 1/(b − a) on [a, b]; cdf FX(x) rising linearly from 0 at a to 1 at b]
∙ Exponential: X ∼ Exp(λ)
FX(x) = 1 − e^{−λx} for x ≥ 0, and FX(x) = 0 for x < 0
[Figure: pdf fX(x) starting at height λ and decaying; cdf FX(x) rising from 0 toward 1]
∙ Gaussian: X ∼ N(μ, σ²)
There is no closed form for the cdf; it is found by numerical integration
∙ As we will soon see, we only need to compute the cdf of the standard normal N(0, 1):
Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−ξ²/2} dξ
[Figure: standard normal N(0, 1) pdf over ξ, with the tail area Q(x) = 1 − Φ(x) shaded to the right of x]
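As a sketch of the numerical-integration route, the following compares a Riemann-sum approximation of Φ(x) against the closed form Φ(x) = (1 + erf(x/√2))/2 available through the standard library's error function:

```python
import math

def phi_numeric(x, step=1e-3, lo=-10.0):
    """Riemann-sum approximation of Phi(x): integrate the N(0, 1) pdf from lo to x."""
    total, t = 0.0, lo
    while t < x:
        total += math.exp(-t * t / 2.0) * step
        t += step
    return total / math.sqrt(2.0 * math.pi)

def phi_closed(x):
    # Equivalent closed form via the error function: Phi(x) = (1 + erf(x/sqrt(2)))/2
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for x in (-1.0, 0.0, 1.0, 2.0):
    print(x, round(phi_numeric(x), 5), round(phi_closed(x), 5))
```

The two agree to roughly the step size of the sum, which is why tabulated or library values of Φ suffice in practice.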
Cdf of a mixed r.v.
[Figure: cdf FX(x) of the packet service time, with a jump of height p at x = 0 followed by a continuous rise toward 1]
Recap
∙ Discrete r.v. X specified by a pmf: pX(x) ≥ 0, Σ_{x∈X} pX(x) = 1
To find the probability of an event, sum the pmf over the points in the event
∙ Continuous r.v. X specified by a pdf: fX(x) ≥ 0, ∫_{−∞}^{∞} fX(x) dx = 1
To find the probability of an event, integrate the pdf over the event
∙ Any r.v. can be specified by a cdf FX (x) = P{X ≤ x} for − ∞ < x < ∞
0 ≤ FX(x) ≤ 1
FX(∞) = 1 and FX(−∞) = 0
FX(x) is monotonically nondecreasing
P{X = a} = FX(a) − FX(a⁻)
∙ For continuous r.v. X with pdf fX(x), FX(x) = ∫_{−∞}^{x} fX(ξ) dξ
∙ If FX(x) is differentiable (almost everywhere), fX(x) = dFX(x)/dx
Functions of a random variable
∙ Given a r.v. X, with known distribution (pmf, pdf, cdf ), and a function y = g(x),
we wish to find the distribution of Y
[Diagram: X → g(·) → Y]
∙ Examples:
X is the input voltage to a circuit, Y is its output voltage
X is the input to a signal processor, Y is its output signal
X is sunlight intensity, Y is the output power of a photovoltaic system
X is wind speed, Y is the output power of a wind generator
∙ The function Y is a r.v. over the same sample space as X, i.e., Y(ω) = g(X(ω))
∙ However, we don’t assume knowledge of the underlying probability model
and wish to specify Y directly from the pmf, pdf, or cdf of X
Functions of a discrete random variable
[Diagram: values x₁, x₂, x₃, … mapped by g to values y₁, y₂, …]
The probability of {Y = y} is the probability of its inverse image under g(x), i.e.,
pY(y) = Σ_{xᵢ: g(xᵢ)=y} pX(xᵢ)
Example
∙ Let X be a discrete r.v. taking the values x₁ < x₂ < x₃ < x₄ with known pmf,
and define the function
g(x) = 1 if x ≥ c, and g(x) = 0 otherwise,
where x₂ < c ≤ x₃. Find the pmf of Y = g(X)
∙ pY(y) is the probability of the inverse image of y under g:
pY(1) = Σ_{x: g(x)=1} pX(x) = pX(x₃) + pX(x₄)
pY(0) = 1 − pY(1)
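A sketch of this inverse-image computation in code; the pmf values and threshold here are hypothetical, chosen only for illustration:

```python
from collections import defaultdict

# Hypothetical pmf of X and threshold c (illustrative values, not from the slides).
p_X = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
c = 2

def g(x):
    # Indicator-style function: 1 if x >= c, else 0.
    return 1 if x >= c else 0

# pY(y) = sum of pX(x) over all x with g(x) = y (the inverse image of y).
p_Y = defaultdict(float)
for x, p in p_X.items():
    p_Y[g(x)] += p

print(dict(p_Y))  # {0: 0.5, 1: 0.5}
```

The same loop works for any finite pmf and any function g, since it just accumulates probability over each inverse image.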
Derived densities
i.e., pY (y) is the sum of pX (x) over all x that yield g(x) = y
∙ This procedure does not immediately extend to deriving a pdf,
since the probability of each point is zero
∙ But the general approach extends nicely to cdfs
∙ To find the pdf of Y = g(X) from the pdf of X, we first find the cdf of Y as
FY(y) = P{g(X) ≤ y} = ∫_{x: g(x)≤y} fX(x) dx
[Figure: graph of g(x) with the level y and the inverse-image set {x : g(x) ≤ y} marked on the x-axis]
∙ Example: let Y = aX + b with a > 0. Then {aX + b ≤ y} = {X ≤ (y − b)/a}, so
FY(y) = P{Y ≤ y} = P{aX + b ≤ y} = P{X ≤ (y − b)/a} = FX((y − b)/a)
Thus, fY(y) = dFY(y)/dy = (1/a) fX((y − b)/a)
∙ For general a ≠ 0: fY(y) = (1/|a|) fX((y − b)/a)
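The linear-function formula can be checked numerically; a sketch assuming X ∼ Exp(1) and illustrative values of a and b, comparing a probability computed from the derived density against a Monte Carlo estimate:

```python
import math
import random

random.seed(2)
a, b = 2.0, 1.0  # illustrative linear map Y = aX + b

def f_X(x):
    # pdf of X ~ Exp(1): e^{-x} for x >= 0.
    return math.exp(-x) if x >= 0 else 0.0

def f_Y(y):
    # Derived density: f_Y(y) = (1/|a|) f_X((y - b)/a).
    return f_X((y - b) / a) / abs(a)

# Compare P{Y <= y0} from the derived density (Riemann sum starting at y = b,
# where f_Y becomes nonzero) against a Monte Carlo estimate from samples of Y.
y0, step, n = 3.0, 1e-3, 100_000
integral = sum(f_Y(b + k * step) * step for k in range(int((y0 - b) / step)))
mc = sum(a * random.expovariate(1.0) + b <= y0 for _ in range(n)) / n
print(integral, mc)  # both ≈ P{X <= (y0-b)/a} = 1 - e^{-1}
```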
Example
∙ Let X ∼ N(μ, σ²), i.e., fX(x) = (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)}, and Y = aX + b
∙ Again, let's use the formula for the derived density of a linear function:
fY(y) = (1/|a|) fX((y − b)/a)
= (1/(|a|√(2πσ²))) e^{−((y−b)/a − μ)²/(2σ²)}
= (1/√(2π(aσ)²)) e^{−(y − b − aμ)²/(2(aσ)²)} for −∞ < y < ∞
∙ Hence Y ∼ N(aμ + b, (aσ)²): a linear function of a Gaussian r.v. is Gaussian
Computing Gaussian cdf
∙ The fact that a linear function of a Gaussian is Gaussian can be used to compute
the cdf of any Gaussian using the cdf of the standard normal X ∼ N(0, 1)
∙ To compute the cdf of Y ∼ N(μY, σY²), we express it as Y = σY X + μY to obtain
FY(y) = P{Y ≤ y} = P{σY X + μY ≤ y} = P{X ≤ (y − μY)/σY} = FX((y − μY)/σY)
And we obtain, for any interval, P{a < Y ≤ b} = FY(b) − FY(a) = FX((b − μY)/σY) − FX((a − μY)/σY)
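A sketch of this reduction to the standard normal, using the identity Φ(x) = (1 + erf(x/√2))/2; the parameters and interval are illustrative choices, not the slide's:

```python
import math

def Phi(x):
    # Standard normal cdf via the error function: Phi(x) = (1 + erf(x/sqrt(2)))/2.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_prob(lo, hi, mu, sigma):
    # P{lo < Y <= hi} for Y ~ N(mu, sigma^2), reduced to the standard normal.
    return Phi((hi - mu) / sigma) - Phi((lo - mu) / sigma)

# Illustrative example: Y ~ N(1, 4), interval (0.5, 2].
print(gaussian_prob(0.5, 2.0, mu=1.0, sigma=2.0))
```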
Derived densities – recap
∙ To find the pdf of Y = g(X) from the pdf of X, we first find the cdf of Y as
FY(y) = P{g(X) ≤ y} = ∫_{x: g(x)≤y} fX(x) dx
[Figure: graph of g(x) with the level y and the inverse-image set {x : g(x) ≤ y} marked on the x-axis]
∙ Example: the r.v. X with pdf fX(x) = (1/2) e^{−|x|}, x ∈ (−∞, ∞),
is input to a limiter that scales by a > 0 and clips at ±a, i.e., Y = g(X) with
g(x) = −a for x ≤ −1, g(x) = ax for −1 < x < 1, g(x) = a for x ≥ 1
Find the cdf FY(y) of the output Y
∙ Consider fX(x) over the ranges of y:
For y < −a, FY(y) = 0
For y = −a, FY(y) = P{X ≤ −1} = ∫_{−∞}^{−1} (1/2) eˣ dx = (1/2) e⁻¹
For −a < y < 0, {Y ≤ y} = {X ≤ y/a} and
FY(y) = ∫_{−∞}^{y/a} (1/2) eˣ dx = (1/2) e^{y/a}
For 0 ≤ y < a,
FY(y) = 1/2 + ∫_0^{y/a} (1/2) e^{−x} dx = 1 − (1/2) e^{−y/a}
For y ≥ a, FY(y) = 1
[Figure: FY(y) with a jump of height (1/2)e⁻¹ at y = −a, a continuous rise on (−a, a), and a jump to 1 at y = a]
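Assuming a limiter of this form (scale by a, clip to [−a, a]) and an illustrative value of a, the derived cdf can be checked against simulation, using the fact that a Laplacian sample is an Exp(1) magnitude with a random sign:

```python
import math
import random

random.seed(3)
a = 2.0  # illustrative limiter level

def g(x):
    # Limiter: scales by a and clips the output to [-a, a].
    return max(-a, min(a, a * x))

def F_Y(y):
    # cdf derived above for Laplacian input f_X(x) = (1/2) e^{-|x|}.
    if y < -a:
        return 0.0
    if y < 0:
        return 0.5 * math.exp(y / a)
    if y < a:
        return 1.0 - 0.5 * math.exp(-y / a)
    return 1.0

# Monte Carlo check at a few points, including the jump locations.
n = 100_000
xs = [random.choice((-1, 1)) * random.expovariate(1.0) for _ in range(n)]
results = {}
for y in (-a, -1.0, 0.0, 1.0):
    emp = sum(g(x) <= y for x in xs) / n
    results[y] = (emp, F_Y(y))
    print(y, emp, F_Y(y))
```

Note that the empirical values at y = ±a match the right-continuous cdf, confirming the jumps of the mixed r.v. Y.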
Summary of random variables
∙ Classification of r.v.s:
Discrete: specified by pmf
Continuous: specified by pdf
Mixed (and, in fact, any r.v.): specified by cdf
∙ Popular r.v.s: Bern(p), Geom(p), Binom(n, p), Poisson(λ); Unif[a, b], Exp(λ), N(μ, σ²)
Binom(n, λ/n) → Poisson(λ) as n → ∞
Geom(p), Exp(λ) are memoryless