
Probability Distribution

Random variable:
A variable whose value is determined by the outcome of a random experiment is called
a random variable. A random variable is usually denoted by X, and it may be discrete or
continuous.

A discrete random variable is defined over a discrete sample space, i.e. a sample
space whose elements are finite or countably infinite.
Ex: 1) Number of heads obtained when a coin is tossed
2) Number of points obtained when a die is thrown, etc.
A discrete random variable takes values such as 0, 1, 2, 3, 4, …
For example, in a game of rolling two dice there are certain outcomes.
Let X be a random variable defined as 'total points on the two dice'; then X takes the values
2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. Here X is a discrete random variable.

A random variable which is defined over a continuous sample space, i.e. a sample
space whose elements are uncountably infinite, is called a continuous random variable.
Ex: 1) All possible heights, weights, temperatures, pressures, etc.
A continuous random variable assumes values in some interval (a, b), where a and b
may be −∞ and ∞.

Probability distribution:
The values taken by a discrete random variable X and their associated
probabilities P(X=x), or simply P(x), define a discrete probability distribution, and P(X=x) is
called the "probability mass function".
The probability mass function (pmf) or probability function has the following properties:
1) P(x) ≥ 0, ∀ x

2) ∑_x P(X = x) = 1

Ex: Consider the random experiment of tossing a coin two times. Then the sample space is
S = {HH, HT, TH, TT}
Let X be the random variable denoting the 'number of heads' in the possible outcomes; then
X takes the values 0, 1, 2, i.e. X=0, X=1 and X=2,
and their corresponding probabilities are
P(X=0) = 1/4
P(X=1) = 2/4
P(X=2) = 1/4
So, denoting the possible values of X by x and their probabilities by P(X=x), we have the
following probability distribution.

X        0    1    2
P(X=x)  1/4  2/4  1/4

And P(X=0) + P(X=1) + P(X=2) = 1.
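To see these pmf properties concretely, the two-toss experiment can be checked with a short Python sketch (an illustrative aid, not part of the standard derivation): it enumerates the sample space S, tallies P(X = x) for the number of heads, and confirms that the probabilities sum to 1.

```python
from itertools import product
from fractions import Fraction

# Enumerate the sample space of two coin tosses and count heads in each outcome.
outcomes = list(product("HT", repeat=2))          # [('H','H'), ('H','T'), ('T','H'), ('T','T')]
pmf = {}
for outcome in outcomes:
    x = outcome.count("H")                        # value of X = number of heads
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, len(outcomes))

print(pmf)                # {2: Fraction(1, 4), 1: Fraction(1, 2), 0: Fraction(1, 4)}
print(sum(pmf.values()))  # 1, so the pmf properties hold
```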


The values taken by a continuous random variable X and its associated probability
density function f(x) define a continuous probability distribution. The probability density
function (pdf) has the following properties:
1) f(x) ≥ 0, ∀ −∞ < x < ∞

2) ∫_{−∞}^{∞} f(x) dx = 1
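As a quick illustration of these pdf properties, the sketch below numerically integrates a uniform density f(x) = 1/(b − a); the particular density and the interval (a, b) = (2, 5) are only assumptions made for the example.

```python
# Numerical check that a density integrates to 1, using the uniform
# density f(x) = 1/(b - a) on (a, b) as an illustrative example.
a, b = 2.0, 5.0

def f(x):
    return 1.0 / (b - a) if a < x < b else 0.0   # f(x) >= 0 everywhere

# Simple midpoint Riemann-sum approximation of the integral of f over (a, b).
n = 100_000
width = (b - a) / n
total = sum(f(a + (i + 0.5) * width) * width for i in range(n))
print(round(total, 6))   # approximately 1.0
```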

Mathematical Expectation:
If X denotes a discrete random variable which can assume the values
x_1, x_2, x_3, …, x_k with respective probabilities p_1, p_2, p_3, …, p_k, where ∑_{i=1}^{k} p_i = 1, then the
mathematical expectation of X, denoted by E(X), is defined as

E(X) = ∑_{i=1}^{k} p_i x_i

Ex: 1) A coin is tossed two times. Find the mathematical expectation of the number of heads.
Sol: Define X as the number of heads in tossing of two coins.
So X takes values 0, 1, 2,
and P(X=0) = 1/4
P(X=1) = 2/4
P(X=2) = 1/4

Now E(X) = ∑_{i=1}^{k} p_i x_i = 0 × 1/4 + 1 × 2/4 + 2 × 1/4
⇒ E(X) = 1.
Ex: 2) A die is thrown once. Find the mathematical expectation of the point on the upper face of the
die.
Sol: Define a random variable X as the point on the upper face of the die.
So X takes values 1, 2, 3, 4, 5, 6.
The probability distribution is

X        1    2    3    4    5    6
P(X=x)  1/6  1/6  1/6  1/6  1/6  1/6

Now E(X) = ∑_{i=1}^{k} p_i x_i = 1 × 1/6 + 2 × 1/6 + 3 × 1/6 + 4 × 1/6 + 5 × 1/6 + 6 × 1/6 = 21/6 = 3.5
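The same computation of E(X) for the die can be reproduced in a few lines of Python; exact fractions are used so that 21/6 = 3.5 comes out exactly.

```python
from fractions import Fraction

# Probability distribution of the point on the upper face of a fair die.
values = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6

# E(X) = sum of p_i * x_i
expectation = sum(p * x for p, x in zip(probs, values))
print(expectation)          # 7/2
print(float(expectation))   # 3.5
```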

Def: The mathematical expectation of g(X) is defined as

E[g(X)] = ∑_{i=1}^{k} g(x_i) p(x_i), provided ∑_{i=1}^{k} g(x_i) p(x_i) is finite.

Putting g(x) = x², x³, 2x + 3, we get

E[X²] = ∑_{i=1}^{k} x_i² p(x_i)
E[X³] = ∑_{i=1}^{k} x_i³ p(x_i)
E[2X + 3] = ∑_{i=1}^{k} (2x_i + 3) p(x_i)

Theorem: 1) If a and b are constants then E[aX + b] = a E(X) + b

Proof: E[aX + b] = ∑_{i=1}^{k} (a x_i + b) p(x_i)
= a ∑_{i=1}^{k} x_i p(x_i) + b ∑_{i=1}^{k} p(x_i)
= a E(X) + b        [since ∑_{i=1}^{k} p(x_i) = 1]
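A quick numerical check of Theorem 1, reusing the fair-die distribution from Example 2; the constants a = 2 and b = 3 are arbitrary illustrative choices.

```python
from fractions import Fraction

# Verify E[aX + b] = a E(X) + b on the fair-die distribution.
values = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6
a, b = 2, 3

e_x = sum(p * x for p, x in zip(probs, values))                 # E(X) = 7/2
e_ax_b = sum(p * (a * x + b) for p, x in zip(probs, values))    # E(aX + b)
print(e_ax_b, a * e_x + b)   # both are 10
```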
Theorem: 2) If g(x) and h(x) are any two functions of a discrete random variable X, then
E[g(X) ± h(X)] = E[g(X)] ± E[h(X)]
Proof:
E[g(X) ± h(X)] = ∑_{i=1}^{k} [g(x_i) ± h(x_i)] p(x_i)
= ∑_{i=1}^{k} g(x_i) p(x_i) ± ∑_{i=1}^{k} h(x_i) p(x_i)
= E[g(X)] ± E[h(X)]

Def: The ‘Variance’ of a random variable X is given by

V(X) = µ₂ = E[X − E(X)]²

Theorem: 3) E[X − E(X)]² = E[X²] − [E(X)]²
Proof:
E[X − E(X)]² = ∑_{i=1}^{k} [x_i − E(X)]² p(x_i)
= ∑_{i=1}^{k} x_i² p(x_i) + ∑_{i=1}^{k} [E(X)]² p(x_i) − ∑_{i=1}^{k} 2 x_i E(X) p(x_i)
= E[X²] + [E(X)]² ∑_{i=1}^{k} p(x_i) − 2E(X) ∑_{i=1}^{k} x_i p(x_i)
= E[X²] + [E(X)]² − 2E(X) E(X)        [since ∑_{i=1}^{k} p(x_i) = 1]
= E[X²] − [E(X)]²
Theorem: 4) If X is a random variable and a and b are constants then V[aX + b] = a² V(X).
Proof:

V[aX + b] = E[(aX + b) − E(aX + b)]²
= E[(aX + b) − aE(X) − b]²
= E[aX − aE(X)]²
= E[a(X − E(X))]²
= a² E[X − E(X)]²
= a² V(X).
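Theorems 3 and 4 can both be verified numerically on the fair-die distribution; the constants a = 2 and b = 3 below are again arbitrary illustrative choices.

```python
from fractions import Fraction

# Fair-die distribution: compute V(X) both as E[X - E(X)]^2 and as
# E[X^2] - [E(X)]^2, then check V[aX + b] = a^2 V(X).
values = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6
a, b = 2, 3

e_x = sum(p * x for p, x in zip(probs, values))
var_def = sum(p * (x - e_x) ** 2 for p, x in zip(probs, values))       # E[X - E(X)]^2
var_shortcut = sum(p * x * x for p, x in zip(probs, values)) - e_x**2  # E[X^2] - [E(X)]^2
print(var_def, var_shortcut)        # both are 35/12

e_y = sum(p * (a * x + b) for p, x in zip(probs, values))
var_y = sum(p * (a * x + b - e_y) ** 2 for p, x in zip(probs, values))
print(var_y, a * a * var_def)       # both are 35/3
```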
Joint probability distribution:
If X and Y are two discrete random variables, the probability of the simultaneous
occurrence of X = x and Y = y can be represented by P(X = x, Y = y) or P(x, y). The joint
probability distribution is then defined by all the possible values of X and Y together with the
joint probability function P(x, y). (X, Y) is said to be a two-dimensional random
variable.
In the discrete case P(x, y) has the following properties:
i) P(x, y) ≥ 0 ∀ x and y
ii) ∑_x ∑_y P(x, y) = 1

Ex: If a coin is tossed two times,
define X as the result of the first toss, i.e. H or T,
and Y as the result of the second toss, i.e. H or T.
Now the joint probability distribution of X and Y can be constructed as below:

Y \ X    H     T
H       1/4   1/4
T       1/4   1/4
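The same joint table can be built programmatically; the sketch below enumerates the four equally likely outcomes and checks that the joint probabilities sum to 1.

```python
from itertools import product
from fractions import Fraction

# Joint distribution of X (first toss) and Y (second toss) for a fair coin.
joint = {(x, y): Fraction(1, 4) for x, y in product("HT", repeat=2)}

for (x, y), p in joint.items():
    print(f"P(X={x}, Y={y}) = {p}")
print(sum(joint.values()))   # 1, so the joint probabilities sum to 1
```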

Marginal density function:

Given the joint probability function P(x, y) of the discrete random variables X and Y,
the marginal density function of X is defined as P(x) = ∑_y P(x, y) ∀ x,
and the marginal density function of Y is defined as Q(y) = ∑_x P(x, y) ∀ y.
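Continuing the coin-toss example, the marginal distributions are obtained by summing the joint table over the other variable, as in this short sketch.

```python
from itertools import product
from fractions import Fraction

# Marginal distributions obtained by summing the joint table over the other variable.
joint = {(x, y): Fraction(1, 4) for x, y in product("HT", repeat=2)}

marginal_x = {}
marginal_y = {}
for (x, y), p in joint.items():
    marginal_x[x] = marginal_x.get(x, Fraction(0)) + p   # P(x) = sum over y of P(x, y)
    marginal_y[y] = marginal_y.get(y, Fraction(0)) + p   # Q(y) = sum over x of P(x, y)

print(marginal_x)   # {'H': Fraction(1, 2), 'T': Fraction(1, 2)}
print(marginal_y)   # {'H': Fraction(1, 2), 'T': Fraction(1, 2)}
```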

Conditional probability function:

Given the joint probability function P(x, y) of the discrete random variables X and Y,
the conditional probability function of x given y is defined as P(x/y) = P(x, y) / Q(y), Q(y) > 0,
and the conditional probability function of y given x is defined as P(y/x) = P(x, y) / P(x), P(x) > 0,

where Q(y) and P(x) are the marginal density functions of y and x respectively.
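A small illustration of the conditional probability function, again using the coin-toss joint table and its marginal Q(y).

```python
from itertools import product
from fractions import Fraction

# Conditional distribution P(x | y) = P(x, y) / Q(y) from the coin-toss joint table.
joint = {(x, y): Fraction(1, 4) for x, y in product("HT", repeat=2)}
marginal_y = {}
for (x, y), p in joint.items():
    marginal_y[y] = marginal_y.get(y, Fraction(0)) + p

cond_x_given_y = {(x, y): p / marginal_y[y] for (x, y), p in joint.items()}
print(cond_x_given_y[("H", "T")])   # 1/2, i.e. P(X=H | Y=T)
```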


Independent random variables:
Two discrete random variables X and Y are said to be independent if

P(x, y) = P(x) P(y) ∀ x and y

Note: In the case of independent random variables,

P(x/y) = P(x) ∀ x and y
and P(y/x) = P(y) ∀ x and y
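The independence condition is easy to test by brute force over a joint table; for the coin-toss example every cell satisfies P(x, y) = P(x) P(y).

```python
from itertools import product
from fractions import Fraction

# Check independence: P(x, y) must equal P(x) * P(y) for every pair (x, y).
joint = {(x, y): Fraction(1, 4) for x, y in product("HT", repeat=2)}
p_x = {v: sum(p for (x, y), p in joint.items() if x == v) for v in "HT"}
p_y = {v: sum(p for (x, y), p in joint.items() if y == v) for v in "HT"}

independent = all(joint[(x, y)] == p_x[x] * p_y[y] for x, y in joint)
print(independent)   # True: the two tosses are independent
```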
Theorem: If X and Y are two discrete random variables with joint probability function
P(x, y), then E[g(X) ± h(Y)] = E[g(X)] ± E[h(Y)].
Proof:
E[g(X) ± h(Y)] = ∑_i ∑_j [g(x_i) ± h(y_j)] P(x_i, y_j)
= ∑_i ∑_j g(x_i) P(x_i, y_j) ± ∑_i ∑_j h(y_j) P(x_i, y_j)
= ∑_i g(x_i) P(x_i) ± ∑_j h(y_j) P(y_j)        [since P(x_i) = ∑_j P(x_i, y_j) and P(y_j) = ∑_i P(x_i, y_j)]
= E[g(X)] ± E[h(Y)]
In particular, putting g(X) = X and h(Y) = Y in the above theorem we get

E[X ± Y] = E[X] ± E[Y].
Theorem: If X and Y are two independent discrete random variables with joint probability
function P(x, y), then E(XY) = E(X) E(Y).
Proof:
E(XY) = ∑_i ∑_j x_i y_j P(x_i, y_j)
= ∑_i ∑_j x_i y_j P(x_i) P(y_j)        [since X and Y are independent]
= ∑_i x_i P(x_i) ∑_j y_j P(y_j)
= E(X) E(Y)
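A numerical check of this theorem: the sketch below uses two independent draws, each uniform on {1, 2, 3} (an illustrative choice, echoing Exercise 11 below), and confirms E(XY) = E(X) E(Y).

```python
from itertools import product
from fractions import Fraction

# Two independent draws, each uniform on {1, 2, 3}, as an illustrative pair (X, Y).
values = [1, 2, 3]
joint = {(x, y): Fraction(1, 9) for x, y in product(values, repeat=2)}

e_x = sum(x * p for (x, y), p in joint.items())
e_y = sum(y * p for (x, y), p in joint.items())
e_xy = sum(x * y * p for (x, y), p in joint.items())
print(e_xy, e_x * e_y)   # both are 4, since E(X) = E(Y) = 2
```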
Covariance of X and Y:
The covariance of X and Y is written symbolically as Cov(X, Y) or σ_XY and is defined as
Cov(X, Y) = E[X − E(X)][Y − E(Y)]
And
Cov(X, Y) = E[X − E(X)][Y − E(Y)]
= E[XY − Y E(X) − X E(Y) + E(X) E(Y)]
= E[XY] − E(Y) E(X) − E(X) E(Y) + E(X) E(Y)
= E[XY] − E(X) E(Y)
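Both forms of the covariance can be compared on a small joint table with dependence; the table in the sketch below is purely illustrative.

```python
from fractions import Fraction

# A small joint table with dependence (illustrative values), to compare
# Cov(X, Y) = E[X - E(X)][Y - E(Y)] with the shortcut E[XY] - E(X)E(Y).
joint = {(0, 0): Fraction(1, 3), (1, 0): Fraction(1, 3), (1, 1): Fraction(1, 3)}

e_x = sum(x * p for (x, y), p in joint.items())
e_y = sum(y * p for (x, y), p in joint.items())
e_xy = sum(x * y * p for (x, y), p in joint.items())

cov_def = sum(p * (x - e_x) * (y - e_y) for (x, y), p in joint.items())
cov_shortcut = e_xy - e_x * e_y
print(cov_def, cov_shortcut)   # both are 1/9
```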

Theorem: If X and Y are independent then Cov(X, Y) = 0


Proof:
If X and Y are independent then we have
E ( XY ) = E ( X ) E (Y )
=> E ( XY ) − E ( X ) E (Y ) = 0
=> Cov(X, Y) = 0
Theorem: If X and Y are two discrete random variables, then
V(aX + bY) = a² V(X) + b² V(Y) + 2ab Cov(X, Y)
Proof:
V(aX + bY) = E[(aX + bY) − E(aX + bY)]²
= E[(aX + bY) − aE(X) − bE(Y)]²
= E[a{X − E(X)} + b{Y − E(Y)}]²
= E[a² {X − E(X)}² + b² {Y − E(Y)}² + 2ab{X − E(X)}{Y − E(Y)}]
= a² E[X − E(X)]² + b² E[Y − E(Y)]² + 2ab E[X − E(X)][Y − E(Y)]
= a² V(X) + b² V(Y) + 2ab Cov(X, Y)
Note: If X and Y are independent then V(aX + bY) = a² V(X) + b² V(Y)        [since Cov(X, Y) = 0]
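This identity can be checked on the same illustrative joint table used for the covariance example, with the arbitrary constants a = 2 and b = 3.

```python
from fractions import Fraction

# Verify V(aX + bY) = a^2 V(X) + b^2 V(Y) + 2ab Cov(X, Y).
joint = {(0, 0): Fraction(1, 3), (1, 0): Fraction(1, 3), (1, 1): Fraction(1, 3)}
a, b = 2, 3

def expectation(g):
    # E[g(X, Y)] over the joint table.
    return sum(g(x, y) * p for (x, y), p in joint.items())

e_x, e_y = expectation(lambda x, y: x), expectation(lambda x, y: y)
var_x = expectation(lambda x, y: (x - e_x) ** 2)
var_y = expectation(lambda x, y: (y - e_y) ** 2)
cov = expectation(lambda x, y: (x - e_x) * (y - e_y))

e_z = expectation(lambda x, y: a * x + b * y)
var_z = expectation(lambda x, y: (a * x + b * y - e_z) ** 2)
print(var_z, a * a * var_x + b * b * var_y + 2 * a * b * cov)   # both are 38/9
```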
Correlation coefficient:
If E(X²) and E(Y²) exist, the correlation coefficient (a measure of the relationship) between X
and Y is defined as

ρ = Cov(X, Y) / [S.D(X) S.D(Y)]
  = E[X − E(X)][Y − E(Y)] / √(E[X − E(X)]² E[Y − E(Y)]²)

The sign of ρ is determined by the sign of Cov(X, Y).
Note: If two variables X and Y are independent then ρ = 0, and X and Y are said to be
uncorrelated.
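Finally, ρ can be computed for the same illustrative joint table; the result is positive, matching the sign of Cov(X, Y).

```python
import math
from fractions import Fraction

# Correlation coefficient rho = Cov(X, Y) / (S.D(X) * S.D(Y)).
joint = {(0, 0): Fraction(1, 3), (1, 0): Fraction(1, 3), (1, 1): Fraction(1, 3)}

e_x = sum(x * p for (x, y), p in joint.items())
e_y = sum(y * p for (x, y), p in joint.items())
var_x = sum(p * (x - e_x) ** 2 for (x, y), p in joint.items())
var_y = sum(p * (y - e_y) ** 2 for (x, y), p in joint.items())
cov = sum(p * (x - e_x) * (y - e_y) for (x, y), p in joint.items())

rho = float(cov) / math.sqrt(float(var_x) * float(var_y))
print(round(rho, 4))   # 0.5, a positive correlation matching the sign of Cov(X, Y)
```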

EXERCISES
1) Let X be a random variable having the following probability distribution.

X        1    2    3    4    5
P(X=x)  1/6  1/3   0   1/3  1/6

Find the expected values and the variance of i) X ii) 2X+1 iii) 2X-3
2) X is a randomly drawn number from the set {1,2,3,4,5,6,7,8,9,10}. Find E(2X-3).
3) X and Y are independent random variables with V(X) = V(Y) = 4. Find V(3X-2Y).
4) A bowl contains 6 chits, with the numbers 1, 2 and 3 written on two, three and one chits
respectively. If X is the observed number on a randomly drawn chit, find E(X²).
5) Define independence of two random variables. For independent random variables X
and Y, assuming the addition and multiplication laws of expectation, prove that
V(aX + bY) = a² V(X) + b² V(Y).
6) Let us consider the experiment of tossing an honest coin twice. The random variable X
takes the value 0 or 1 according as a head or a tail appears as the result of the first toss. The
random variable Y takes the value 0 or 1 according as a head or a tail appears
as the result of the second toss. Show that X and Y are independent.
7) Ram and Shyam alternately toss a die, Ram starting the process. Whoever throws 5 or 6
first gets a prize of Rs.10 and the game ends with the award of the prize. Find the
expectation of Ram's gain.
8) Clearly explain the conditions under which E(XY) = E(X) E(Y), and also prove the
relation.
9) In a particular game a gambler can win a sum of Rs.100 with probability 2/5 and lose a
sum of Rs.50 with probability 3/5. What is the mathematical expectation of his gain?
10) A business concern consists of 5 senior level and 3 junior level executives. A
committee is to be formed by taking 3 executives at random. Find the expected number
of senior executives to be in the committee.
11) There are two boxes: a white colored box and a red colored box. Each box contains 3
balls marked 1, 2, and 3. One ball is drawn from each box and its number is
noted. Let X denote the number observed from the white box and Y denote the number
observed from the red box. Show that i) E(X+Y) = E(X) + E(Y)
ii) E(XY) = E(X) E(Y)
iii) V(X+Y) = V(X) + V(Y)
12) Calculate the mean and the standard deviation of the distribution of random digits, that
is, f(X) = 1/10, X=0,1,2,…….,9
13) A person picks up 4 cards at random from a full deck. If he receives twice as many
rupees as the number of aces he gets, find the expected gain.
