Chapter 1 - Basic Principles of Monte Carlo Methods
Literature
• Korn, Korn, and Kroisandt (2010), Monte Carlo Methods and Models in Finance and Insurance
• Glasserman (2003), Monte Carlo Methods in Financial Engineering
Evaluation
1. No final exam
2. The evaluation is based on a group project (groups of 2 or 3, depending on the total number of students taking the class)
• Written report
• R code
The Crude Monte Carlo Method
• Assume that we want to estimate $\mu$ and that there exists a random variable $X$ with expectation $\mu = E(X)$
• Idea (a minimal R sketch follows after the questions below):
  • carry out a large number of independent experiments which all have the distribution of $X$
  • calculate the arithmetic average of the obtained results
  • approximate the expected value $E(X)$ by this average value
• Can we ensure that $\lim_{n \to \infty} \bar{X}_n = \mu$?
• What can we say about the error $\mu - \bar{X}_n$?
• How can we generate $X_1, \ldots, X_n$? (scope of Chapter 2)
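A minimal sketch of the crude Monte Carlo recipe in R (the language used for the course project). The choice $X \sim \mathrm{Exp}(1)$, with true mean 1, is an assumption made purely for illustration:

  # Crude Monte Carlo: average n iid draws of X to approximate E(X).
  set.seed(1)
  n <- 1e5
  x <- rexp(n, rate = 1)   # illustrative choice: X ~ Exp(1), so E(X) = 1
  mean(x)                  # arithmetic average, should be close to 1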
Laws of Large Numbers
Theorem (Weak law of large numbers)
Let $X_1, X_2, \ldots$ be iid random variables with mean $\mu$. Then for every $\varepsilon > 0$:
$\lim_{n \to \infty} P\left( \left| \mu - \bar{X}_n \right| < \varepsilon \right) = 1$
→ ensures that the estimate converges to the correct value as the number of draws increases
Central Limit Theorem
• Let $X_1, X_2, \ldots$ be iid random variables with mean $\mu$ and variance $\sigma^2$.
• Let $N$ be a standard normally distributed random variable.
Then,
$\dfrac{\sum_{i=1}^{n} X_i - n\mu}{\sqrt{n}\,\sigma} \to N$ in distribution as $n \to \infty$
→ provides information about the likely magnitude of the error in the estimate after a
finite number of draws
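This can be checked empirically. A sketch in R, assuming (for illustration only) $X_i \sim \mathrm{Unif}(0,1)$, so $\mu = 1/2$ and $\sigma^2 = 1/12$; the standardized sums should look approximately standard normal:

  # Empirical check of the CLT: histogram of standardized sums vs. N(0,1) density.
  set.seed(1)
  n <- 1000; reps <- 5000
  z <- replicate(reps, {
    x <- runif(n)                                  # X_i ~ Unif(0,1)
    (sum(x) - n * 0.5) / (sqrt(n) * sqrt(1 / 12))  # standardized sum
  })
  hist(z, breaks = 50, freq = FALSE)               # should resemble the bell curve
  curve(dnorm(x), add = TRUE)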
Statistical properties (1)
• For some random variable $X$ with distribution $P$ we want to find the expectation $E(X)$
• We assume that $E(X) < \infty$
• Generate $n$ independent and identically distributed random numbers $X_i$ and introduce the Monte Carlo estimator $\bar{X}_n$ as:
$\bar{X}_n := \dfrac{1}{n} \sum_{i=1}^{n} X_i$
• The strong law of large numbers ensures that:
$\lim_{n \to \infty} \bar{X}_n = \lim_{n \to \infty} \dfrac{1}{n} \sum_{i=1}^{n} X_i = \mu$ a.s.
• The estimator $\bar{X}_n$ is unbiased (its expectation equals the true value, $E(\bar{X}_n) = E(X)$) and strongly consistent (with an increasing number of points, the sequence of estimates converges to $\mu$ $P$-almost surely); a numerical illustration follows below
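A quick numerical illustration of this consistency (a sketch; the choice $X \sim \mathrm{Exp}(1)$ with $E(X) = 1$ is again only for demonstration):

  # Running Monte Carlo estimates for increasing n:
  # by the strong law of large numbers they approach E(X) = 1.
  set.seed(1)
  x <- rexp(1e6, rate = 1)
  for (n in 10^(2:6)) {
    cat(sprintf("n = %7d  estimate = %.4f\n", as.integer(n), mean(x[1:n])))
  }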
• What can we say about the error $\bar{X}_n - E(X)$?
Statistical properties (2)
• If also Var(𝑋) < ∞, we look at the mean-squared error (justified by the Central Limit
Theorem):
$E\left[ \left( \bar{X}_n - E(X) \right)^2 \right] = \mathrm{Var}(\bar{X}_n) = \dfrac{1}{n^2} \sum_{i=1}^{n} \mathrm{Var}(X_i) = \dfrac{\mathrm{Var}(X)}{n}$
→ the mean-squared error is bounded by $C/n$ for some $C \in \mathbb{R}^+$ (equivalently, the error itself is of order $1/\sqrt{n}$)
• In other words, the central limit theorem shows that, for large $n$, the sample mean $\bar{X}_n$ has the approximate distribution
$\bar{X}_n = \dfrac{1}{n} \sum_{i=1}^{n} X_i \sim N\left( E(X), \tfrac{1}{n} \mathrm{Var}(X) \right)$
Thus,
$\bar{X}_n \approx E(X) + Z, \quad Z \sim N\left( 0, \tfrac{1}{n} \mathrm{Var}(X) \right)$
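The $1/\sqrt{n}$ decay of the root-mean-squared error can also be observed directly. A sketch, assuming (for illustration) $X \sim \mathrm{Unif}(0,1)$ with $E(X) = 0.5$:

  # Empirical RMSE of the Monte Carlo estimator at several sample sizes:
  # each value should shrink by roughly 1/sqrt(10) from one row to the next.
  set.seed(1)
  reps <- 2000
  for (n in c(100, 1000, 10000)) {
    err <- replicate(reps, mean(runif(n)) - 0.5)   # estimation errors
    cat(sprintf("n = %5d  RMSE = %.5f\n", as.integer(n), sqrt(mean(err^2))))
  }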
Confidence intervals (1)
• An approximate $(1 - \alpha)$ confidence interval $[lb, ub]$ for the expectation $\mu$ can be found by solving the equation $P\left( |N| \le z_{1-\alpha/2} \right) = 1 - \alpha$, where
$lb = \dfrac{1}{n} \sum_{i=1}^{n} X_i - z_{1-\alpha/2} \dfrac{\sigma}{\sqrt{n}}$
$ub = \dfrac{1}{n} \sum_{i=1}^{n} X_i + z_{1-\alpha/2} \dfrac{\sigma}{\sqrt{n}}$
→ If we want to gain one digit of precision we have to use 100 times as many
simulation steps (“slow convergence”)
• $\mathrm{Var}(X)$ is usually unknown. Using the sample variance, we also have an estimator for the variance (used in the R sketch below):
$\mathrm{Var}(X) \approx \dfrac{1}{n-1} \sum_{i=1}^{n} \left( X_i - \bar{X}_n \right)^2 = \dfrac{n}{n-1} \left( \dfrac{1}{n} \sum_{i=1}^{n} X_i^2 - \bar{X}_n^2 \right)$
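Putting these formulas together in R (a sketch; the target $X \sim \mathrm{Exp}(1)$ with $\mu = 1$ is an illustrative assumption, and R's built-in var() is exactly the $1/(n-1)$ sample variance above):

  # Approximate 95% confidence interval for E(X) from n Monte Carlo draws.
  set.seed(1)
  n <- 1e4; alpha <- 0.05
  x <- rexp(n, rate = 1)          # illustrative choice: true mean is 1
  xbar <- mean(x)
  s <- sqrt(var(x))               # sample standard deviation
  z <- qnorm(1 - alpha / 2)       # z_{1 - alpha/2}, about 1.96
  c(lb = xbar - z * s / sqrt(n), ub = xbar + z * s / sqrt(n))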
Estimating the probability of an event (1)
• We want to estimate 𝑃(𝑋 ∈ 𝐴)
• 𝑋 could be the total claim amount in a year
• $A$ could be the event that the claim amount is greater than $u$
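Since $P(X \in A) = E(\mathbf{1}_{\{X \in A\}})$, the crude Monte Carlo estimator is simply the fraction of simulated draws that land in $A$. A hedged sketch in R; the compound Poisson claim model and the threshold $u$ below are assumptions chosen only for illustration:

  # Estimate P(S > u) for a yearly total claim amount S by simulation.
  # Illustrative model: S is the sum of N ~ Pois(10) claims, each Exp(1).
  set.seed(1)
  n <- 1e5; u <- 20
  total_claims <- replicate(n, sum(rexp(rpois(1, lambda = 10), rate = 1)))
  mean(total_claims > u)          # average of the indicator 1{S > u}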
Algorithm: Monte Carlo integration of $\int_0^1 x \, dx$
• Error bound: $O(1/\sqrt{n})$, independent of the dimension $d$ → Monte Carlo methods beat the curse of dimensionality
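A sketch of this algorithm in R: since $E(h(U)) = \int_0^1 h(s)\,ds$ for $U \sim \mathrm{Unif}(0,1)$, we draw uniform points and average the integrand (here $h(x) = x$, true value $1/2$):

  # Monte Carlo integration of integral_0^1 x dx = 1/2.
  set.seed(1)
  n <- 1e5
  u <- runif(n)   # uniform points in [0, 1]
  mean(u)         # estimate of the integral; error is O(1/sqrt(n))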
Trapezoidal rule
• Dimension $d = 1$: Let $h$ be a function from $\mathbb{R} \to \mathbb{R}$. We want to approximate $\mu := \int_0^1 h(s) \, ds$ using the trapezoidal rule with $n + 1$ nodes:
$\int_0^1 h(s) \, ds \approx \dfrac{1}{n} \left( \dfrac{h(0)}{2} + \sum_{k=1}^{n-1} h\left( \dfrac{k}{n} \right) + \dfrac{h(1)}{2} \right)$
→ Error bound: $O(1/n^{2/d})$ for the product rule in dimension $d$ with $n$ nodes in total, i.e. $O(1/n^2)$ for $d = 1$
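A minimal R implementation of this rule (the test function $h(s) = e^s$, with true integral $e - 1$, is an illustrative choice):

  # Trapezoidal rule on [0, 1] with n + 1 equally spaced nodes k/n.
  trapezoid <- function(h, n) {
    s <- (0:n) / n                        # nodes
    w <- c(0.5, rep(1, n - 1), 0.5) / n   # endpoint weights halved
    sum(w * h(s))
  }
  trapezoid(exp, 100) - (exp(1) - 1)      # error is O(1/n^2)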
Observations
• The convergence rate of the CMC method for one-dimensional integrals is of order $O(1/\sqrt{n})$ ⟹ too slow
• The convergence rate of the simple trapezoidal rule is of order $O(1/n^2)$ ⟹ faster convergence rate than the CMC method
→ But for higher dimensions, say $d$, the CMC method will turn out to be very competitive
• In Monte Carlo integration the dimension $d$ does not influence the error bound; it is always $O(1/\sqrt{n})$ (see the comparison sketch below)
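To see the last point concretely, the following sketch (the integrand $\prod_j \cos(x_j)$ on $[0,1]^d$, with exact value $\sin(1)^d$, and the choice $d = 5$ are assumptions for illustration) compares a product trapezoidal grid with crude Monte Carlo at the same budget of function evaluations:

  # Product trapezoidal rule vs. crude Monte Carlo in dimension d = 5.
  set.seed(1)
  d <- 5; m <- 6                          # m + 1 grid points per axis
  grid1 <- (0:m) / m
  w1 <- c(0.5, rep(1, m - 1), 0.5) / m    # 1-d trapezoidal weights
  nodes <- as.matrix(expand.grid(rep(list(grid1), d)))
  wts <- apply(expand.grid(rep(list(w1), d)), 1, prod)
  trap <- sum(wts * apply(cos(nodes), 1, prod))
  n <- (m + 1)^d                          # same number of evaluations for MC
  mc <- mean(apply(matrix(cos(runif(n * d)), n, d), 1, prod))
  c(trap_error = trap - sin(1)^d, mc_error = mc - sin(1)^d)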