Chapter 1 - Basic Principles of Monte Carlo Methods

Simulation Methods in Finance and Insurance

Lecturer: Maissa Tamraz

Master of Science (MSc) in Actuarial Science
Master of Science (MSc) in Finance
References

• Korn, Korn and Kroisandt (2010), Monte Carlo Methods and Models in Finance and Insurance
• Glasserman (2003), Monte Carlo Methods in Financial Engineering
Evaluation
1. No final exam

2. The evaluation is based on a group project (groups of 2 or 3, depending on the total number of students taking this class)

3. The grading is based on the following deliverables:

• Written report
• R code
• Presentation (20 min: 15 min presentation + 5 min questions)

Table of contents
1. Basic principles of Monte Carlo Methods

2. Random Number Generation

3. Variance Reduction Methods

4. Simulation of Stochastic Processes

5. Simulation of Risk Measures


Basic Principles of Monte Carlo Methods

Monte Carlo Method - definition
• A method to approximate an expected value E(Z) by generating a large number of independent realizations of Z and then using statistical methods (averaging) to obtain an estimate of E(Z)
• A method to approximate an (often high-dimensional) integral
• Goes back to Metropolis and Ulam (1949)
• The name indicates that a sort of "gambling" is used to obtain the approximation procedure
• Basis: Strong Law of Large Numbers
• Applications arise whenever expected values are to be determined, e.g.
➢ Finance
➢ Insurance
Explicit solutions vs simulation
Monte Carlo Method - Example
• Consider a quarter of a circle inscribed in a unit square.
• The ratio of the area of the quarter circle to the area of the square is π/4.
• The value of π can therefore be approximated using the Monte Carlo method.

The question is: How do we approximate π using the Monte Carlo method?
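A minimal R sketch of this idea (the sample size n and the seed are illustrative choices, not from the slides): draw uniform points in the unit square, count the fraction landing inside the quarter circle, and multiply by 4.

```r
# Estimate pi by hit-or-miss Monte Carlo in the unit square
set.seed(1)                  # illustrative seed for reproducibility
n <- 1e6                     # illustrative number of random points
u <- runif(n)                # x-coordinates, Unif(0, 1)
v <- runif(n)                # y-coordinates, Unif(0, 1)
inside <- (u^2 + v^2 <= 1)   # TRUE if the point falls in the quarter circle
4 * mean(inside)             # ratio of areas is pi/4, so scale by 4
```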
Mathematical foundation: Starting point
• Often, the quantity of interest can be expressed as an expected value E(X) of a (complicated) random variable X
• Pricing of derivative securities
• Determining probabilities of events (P(A) = E(I_A) for the indicator function I_A)
• Etc.

• Idea:
• carry out a large number of independent experiments which all have the distribution of X
• calculate the arithmetic average of the obtained results
• approximate the expected value E(X) by this average value
The Crude Monte Carlo Method
• Assume that we want to estimate μ and that there exists a random variable X with expected value μ = E(X)

Algorithm: Crude Monte Carlo (CMC)

1. Generate n iid random variables X_1, ..., X_n which have the same probability distribution as X
2. Approximate μ with the arithmetic mean
$$\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i$$

• Can we ensure that $\lim_{n \to \infty} \bar{X}_n = \mu$?
• What can we say about the error $\mu - \bar{X}_n$?
• How can we generate X_1, ..., X_n? (scope of Chapter 2)
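As a hedged illustration (the helper name cmc and the exponential test case are our own, not from the slides), the CMC algorithm reduces to a one-liner in R once a sampler for X is available:

```r
# Crude Monte Carlo: average n iid draws from the distribution of X
cmc <- function(sampler, n) {
  mean(sampler(n))           # arithmetic mean of X_1, ..., X_n
}

# Example: X ~ Exp(rate = 2), whose true mean is 0.5
set.seed(1)
cmc(function(n) rexp(n, rate = 2), n = 1e5)
```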
Laws of Large Numbers
Theorem (Weak law of large numbers)
Let X_1, X_2, ... be iid random variables with mean μ. Then, for every ε > 0,
$$\lim_{n \to \infty} P\left(|\mu - \bar{X}_n| < \varepsilon\right) = 1$$

Theorem (Strong law of large numbers)
Let X_1, X_2, ... be iid random variables with mean μ. Then
$$\lim_{n \to \infty} \bar{X}_n = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} X_i = \mu \quad a.s.$$

→ ensures that the estimate converges to the correct value as the number of draws increases
Central Limit Theorem
• Let X_1, X_2, ... be iid random variables with mean μ and variance σ².
• Let N be a standard normally distributed random variable.

Then,
$$\frac{\sum_{i=1}^{n} X_i - n\mu}{\sqrt{n}\,\sigma} \to N \quad \text{(in distribution) as } n \to \infty$$

→ provides information about the likely magnitude of the error in the estimate after a finite number of draws
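A minimal R sketch checking this error rate empirically (the Exp(1) test distribution and the sample sizes are illustrative assumptions): the standard deviation of the sample mean should be close to σ/√n.

```r
# Empirical check: the sd of the CMC estimator is about sigma / sqrt(n)
set.seed(1)
n <- 1e4                                  # draws per Monte Carlo run
means <- replicate(1000, mean(rexp(n)))   # 1000 independent CMC estimates, X ~ Exp(1)
sd(means)                                 # close to sigma / sqrt(n) = 1 / 100 = 0.01
```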
Statistical properties (1)
• For some random variable X with distribution P we want to find E(X)
• We assume we know that E(X) < ∞
• Generate n independent and identically distributed random numbers X_i and introduce the Monte Carlo estimator $\bar{X}_n$ as:
$$\bar{X}_n := \frac{1}{n} \sum_{i=1}^{n} X_i$$
• The strong law of large numbers ensures that:
$$\lim_{n \to \infty} \bar{X}_n = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} X_i = \mu \quad a.s.$$
• The estimator $\bar{X}_n$ is unbiased (the expected difference between the estimator and the true value is zero) and strongly consistent (with an increasing number of points, the sequence of estimates converges P-almost surely)
• What can we say about the error $\bar{X}_n - E(X)$?
Statistical properties (2)
• If also Var(X) < ∞, we look at the mean-squared error (justified by the Central Limit Theorem):
$$E\left[\left(\bar{X}_n - E(X)\right)^2\right] = \mathrm{Var}(\bar{X}_n) = \frac{1}{n^2} \sum_{i=1}^{n} \mathrm{Var}(X_i) = \frac{\mathrm{Var}(X)}{n}$$
→ the root-mean-squared error is bounded by $C/\sqrt{n}$ for some $C \in \mathbb{R}_+$

• In other words, the central limit theorem shows that, for large n, the sample mean $\bar{X}_n$ has the approximate distribution
$$\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i \sim N\left(E(X), \tfrac{1}{n}\mathrm{Var}(X)\right)$$

Thus,
$$\bar{X}_n \approx E(X) + Z, \qquad Z \sim N\left(0, \tfrac{1}{n}\mathrm{Var}(X)\right)$$
Confidence intervals (1)
• An approximate (1 − α) confidence interval [lb, ub] for the expectation μ can be found by solving the equation $P(|N| \le z_{1-\alpha/2}) = 1 - \alpha$, where
$$lb = \frac{1}{n} \sum_{i=1}^{n} X_i - z_{1-\alpha/2} \frac{\sigma}{\sqrt{n}}$$
$$ub = \frac{1}{n} \sum_{i=1}^{n} X_i + z_{1-\alpha/2} \frac{\sigma}{\sqrt{n}}$$

• A popular choice for an approximate symmetric 95% confidence interval is α = 0.05. The 97.5% quantile $z_{97.5\%}$ of the standard normal distribution is about 1.96, as:
$$\Phi^{-1}(0.975) \approx 1.96, \qquad \Phi^{-1}(0.025) \approx -1.96$$
Confidence intervals (2)
• On "average", in α · 100 out of 100 cases the true value is not contained in the confidence interval

• The length of the confidence interval is $O(1/\sqrt{n})$
→ If we want to gain one digit of precision we have to use 100 times as many simulation steps ("slow convergence")

• Var(X) is usually unknown. Using the sample variance, we also have an estimator for the variance:
$$\mathrm{Var}(X) \approx \frac{1}{n-1} \sum_{i=1}^{n} \left(X_i - \bar{X}_n\right)^2 = \frac{n}{n-1} \left(\frac{1}{n} \sum_{i=1}^{n} X_i^2 - \bar{X}_n^2\right)$$
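A hedged R sketch of the resulting procedure (the Exp(2) test case is an illustrative assumption): plug the sample mean and sample standard deviation into the lb/ub formulas above.

```r
# CMC estimate with an approximate 95% confidence interval
set.seed(1)
x <- rexp(1e5, rate = 2)   # iid draws of X (illustrative test case, E(X) = 0.5)
n <- length(x)
xbar <- mean(x)            # Monte Carlo estimator
s <- sd(x)                 # sample standard deviation (divides by n - 1)
z <- qnorm(0.975)          # about 1.96
c(lb = xbar - z * s / sqrt(n),
  ub = xbar + z * s / sqrt(n))
```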
Estimating the probability of an event (1)
• We want to estimate P(X ∈ A)
• X could be the total claim amount in a year
• A could be the event that the claim amount is bigger than u

• Define the indicator function I_A. We use the estimator Y := I_A(X) with:
$$E(I_A(X)) = P(X \in A)$$
$$\mathrm{Var}(I_A(X)) = P(X \in A)\left(1 - P(X \in A)\right)$$
Note that $\sum_{i=1}^{n} Y_i$ is Binomial(n, P(X ∈ A)) distributed.
Estimating the probability of an event (2)
• The probability of A is P(X ∈ A) = E(I_A(X))
• The Monte Carlo estimate for P(X ∈ A) is simply the relative frequency of the occurrence of A in n independent experiments:
$$P_n(A) = \frac{1}{n} \sum_{i=1}^{n} I_A(X_i)$$
• The estimator for the variance is
$$\sigma_n^2 = P_n(A) \left(1 - P_n(A)\right)$$
• The approximate 95% confidence interval for P(X ∈ A) is
$$\left[\,P_n(A) - \frac{1.96}{\sqrt{n}}\,\sigma_n,\;\; P_n(A) + \frac{1.96}{\sqrt{n}}\,\sigma_n\,\right]$$
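A minimal sketch in R (the lognormal claim model and the threshold u = 10 are illustrative assumptions, not from the slides):

```r
# Estimate P(X > u) with an approximate 95% confidence interval
set.seed(1)
n <- 1e5
x <- rlnorm(n, meanlog = 1, sdlog = 1)   # hypothetical total-claim-amount model
u <- 10                                  # hypothetical threshold
p_hat <- mean(x > u)                     # relative frequency of the event
sigma_n <- sqrt(p_hat * (1 - p_hat))     # estimated standard deviation
p_hat + c(-1, 1) * 1.96 * sigma_n / sqrt(n)
```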
Monte Carlo integration (1)
Let h be a function from ℝ → ℝ. We want to approximate
$$\mu := \int_0^1 h(s)\,ds$$

Let U be Unif(0, 1) distributed with density function f(x) = 1 for x ∈ (0, 1). Define X = h(U):
$$E(X) = E(h(U)) = \int_0^1 h(s) f(s)\,ds = \int_0^1 h(s)\,ds$$
→ Use the CMC method with X = h(U) to approximate μ

Algorithm: Monte Carlo integration

1. Generate n iid random variables U_1, ..., U_n uniformly distributed on (0, 1)
2. Estimate $\int_0^1 h(s)\,ds \approx \frac{1}{n} \sum_{i=1}^{n} h(U_i)$
Monte Carlo integration (2)
Example: Integrate $\int_0^1 x\,dx$ via the Monte Carlo method

Algorithm: Monte Carlo integration of $\int_0^1 x\,dx$

1. Generate n iid random variables U_1, ..., U_n uniformly distributed on (0, 1)
2. Calculate $\frac{1}{n} \sum_{i=1}^{n} U_i$ as the estimate
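In R this is a two-liner; a minimal sketch (the sample size is an illustrative choice, and the true value is 1/2):

```r
# Monte Carlo integration of the integral of x over (0, 1) (true value: 0.5)
set.seed(1)
u <- runif(1e5)   # U_1, ..., U_n iid Unif(0, 1)
mean(u)           # CMC estimate of the integral
```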
Monte Carlo for high dimensional integration
• Integrate
$$\int_{[0,1]^d} h(s_1, \ldots, s_d)\,ds_1 \cdots ds_d = E\left(h(\mathbf{U})\right), \qquad \mathbf{U} = (U^1, \ldots, U^d) \sim \mathrm{Unif}\left((0,1)^d\right)$$

Algorithm: High dimensional integration via Monte Carlo

1. Generate d × n iid random variables $u_i^j$ uniformly distributed on (0, 1) (j = 1, ..., d; i = 1, ..., n)
2. Estimate $\int_{[0,1]^d} h(s_1, \ldots, s_d)\,ds_1 \cdots ds_d \approx \frac{1}{n} \sum_{i=1}^{n} h(u_i^1, \ldots, u_i^d)$

• Error bound: $O(1/n^{1/2})$, independent of d → Monte Carlo methods beat the curse of dimensionality
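A hedged R sketch (the test integrand is our own choice with a known answer: $h(s_1, \ldots, s_d) = \prod_j s_j$, whose integral over $[0,1]^d$ is $2^{-d}$):

```r
# Monte Carlo integration over the d-dimensional unit cube
set.seed(1)
d <- 10; n <- 1e5
u <- matrix(runif(n * d), nrow = n, ncol = d)   # n points in [0,1]^d
h <- function(s) prod(s)                        # test integrand; true integral is 2^(-d)
mean(apply(u, 1, h))                            # compare with 2^(-10), about 0.000977
```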
Trapezoidal rule
• Dimension d = 1: Let h be a function from ℝ → ℝ. We want to approximate $\mu := \int_0^1 h(s)\,ds$ using the trapezoidal rule with n + 1 nodes
$$\int_0^1 h(s)\,ds \approx \sum_{k=0}^{n} w_k\, h\!\left(\frac{k}{n}\right), \qquad w_0 = w_n = \frac{1}{2n}, \quad w_k = \frac{1}{n} \text{ otherwise}$$
→ Error bound: $O(1/n^2)$

• Dimension d ≥ 2: Let h be a function from ℝ^d → ℝ. We want to approximate $\mu := \int_{[0,1]^d} h(s_1, \ldots, s_d)\,ds_1 \cdots ds_d$ using the product trapezoidal rule with $(n+1)^d$ nodes
$$\int_{[0,1]^d} h(s_1, \ldots, s_d)\,ds_1 \cdots ds_d \approx \sum_{k_1=0}^{n} \cdots \sum_{k_d=0}^{n} w_{k_1} \cdots w_{k_d}\, h\!\left(\frac{k_1}{n}, \ldots, \frac{k_d}{n}\right)$$
→ Error bound: $O(1/N^{2/d})$ in terms of the total number of nodes $N = (n+1)^d$
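A minimal R sketch of the one-dimensional rule (the test integrand x² is an illustrative choice; its true integral is 1/3):

```r
# Composite trapezoidal rule on [0, 1] with n + 1 nodes
trap1d <- function(h, n) {
  s <- seq(0, 1, length.out = n + 1)     # nodes k/n, k = 0, ..., n
  y <- sapply(s, h)
  (sum(y) - (y[1] + y[n + 1]) / 2) / n   # weights 1/(2n) at the ends, 1/n inside
}
trap1d(function(x) x^2, 100)             # approx 1/3, with error O(1/n^2)
```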
Observations
• The convergence rate of the CMC method for one dimensional integrals is of order $O(1/\sqrt{n})$ ⟹ too slow
• The convergence rate of the simple trapezoidal rule is of order $O(1/n^2)$ ⟹ a faster convergence rate than the CMC method
→ But for higher dimensions, say d, the CMC method will turn out to be very competitive
• In Monte Carlo integration the dimension d does not influence the error bound; in terms of the number of function evaluations n it is always $O(1/\sqrt{n})$
• For d-dimensional integrals the trapezoidal rule, with the same number of function evaluations n, has error bound $O(1/n^{2/d})$
→ for d > 4 Monte Carlo is competitive (a hedged equal-cost comparison follows below)
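A hedged R sketch of this comparison (the dimension, node count, and smooth test integrand are all illustrative assumptions): both methods get the same number of function evaluations, and for d = 6 the CMC error is typically already smaller.

```r
# Equal-cost comparison of the product trapezoidal rule and CMC in d = 6
set.seed(1)
d <- 6; m <- 3                                  # (m + 1)^d = 4096 evaluations each
h <- function(s) exp(sum(s))                    # test integrand; true integral is (e - 1)^d
truth <- (exp(1) - 1)^d

# Product trapezoidal rule
nodes <- seq(0, 1, length.out = m + 1)
w <- rep(1 / m, m + 1); w[c(1, m + 1)] <- 1 / (2 * m)
grid <- as.matrix(expand.grid(rep(list(nodes), d)))
wgt <- apply(as.matrix(expand.grid(rep(list(w), d))), 1, prod)
trap <- sum(wgt * apply(grid, 1, h))

# CMC with the same number of function evaluations
u <- matrix(runif(nrow(grid) * d), ncol = d)
cmc <- mean(apply(u, 1, h))
c(trap_error = trap - truth, cmc_error = cmc - truth)   # absolute errors
```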


Monte Carlo for unbounded domain D
• If the integration domain D is unbounded and a suitable variable transformation is not feasible, then:
$$\int_D h(s)\,ds = \int_D \frac{h(s)}{g(s)}\,g(s)\,ds = E\left(\frac{h(X)}{g(X)}\right)$$
where X is a random variable with density function g whose support is exactly equal to D

Algorithm: High dimensional integration on an unbounded domain

1. Generate n iid random variables X_i (i = 1, ..., n) with density function g
2. Estimate $\int_D h(s)\,ds \approx \frac{1}{n} \sum_{i=1}^{n} \frac{h(X_i)}{g(X_i)}$
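A hedged sketch in R (the integrand and the choice of Cauchy sampling density are our own test case, not from the slides): integrate $h(s) = e^{-s^2/2}$ over D = ℝ by sampling from the standard Cauchy density g, whose support is all of ℝ; the true value is $\sqrt{2\pi} \approx 2.5066$.

```r
# Integrate h over the unbounded domain D = R by reweighting with a density g
set.seed(1)
h <- function(s) exp(-s^2 / 2)   # true integral over R: sqrt(2 * pi)
x <- rcauchy(1e5)                # X_i iid with standard Cauchy density g, support R
g <- dcauchy(x)                  # g(X_i)
mean(h(x) / g)                   # CMC estimate of the integral of h over D
```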
Example
Integrate $\int_{-\infty}^{+\infty} x^2\, \frac{\exp(-x^2/2)}{\sqrt{2\pi}}\,dx$ via the Monte Carlo method

• Direct Monte Carlo estimate: generate n independent random numbers $z_i \sim N(0,1)$ and calculate $\frac{1}{n} \sum_{i=1}^{n} z_i^2$ as an estimate (see the sketch below)

• Alternatively, first transform the integral
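A minimal R sketch of the direct estimate (the integral equals E(Z²) = Var(Z) = 1 for Z ~ N(0, 1), so the true value is 1):

```r
# Direct Monte Carlo estimate of E(Z^2) for Z ~ N(0, 1) (true value: 1)
set.seed(1)
z <- rnorm(1e5)   # z_1, ..., z_n iid standard normal
mean(z^2)         # CMC estimate of the integral
```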
