CCCS314 - DAA - 22 - 23 - 3rd 06 Dynamic Programming

Topic 6

Dynamic Programming
Prof. Hanan Elazhary

Main source: A. Levitin, Introduction to the Design and Analysis of Algorithms, 3rd edition
Dynamic Programming
Dynamic programming is a general algorithm design strategy
for solving problems defined by or formulated as recurrences with overlapping
subinstances (remember that divide and conquer involves non-overlapping subinstances)
It was invented by the American mathematician Richard Bellman in the 1950s to
solve optimization problems and was later adapted by computer scientists
(‘programming’ here means ‘planning’)
Main ideas
set up a recurrence relating a solution to a larger instance to solutions of some
smaller instances
solve each smaller instance only once
record the solutions in a table
extract the solution to the original instance from that table
2
Dynamic Programming, Contd.
Examples of dynamic programming algorithms
Fibonacci Numbers
Computing a binomial coefficient
Longest common subsequence
Warshall’s algorithm (for transitive closure)
Floyd’s algorithm (for all-pairs shortest paths)
Constructing an optimal binary search tree
Some instances of difficult discrete optimization problems
traveling salesman problem
knapsack problem

3
Fibonacci Numbers
The Fibonacci numbers form a famous sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, ...

defined by the simple recurrence
F(n) = F(n-1) + F(n-2) for n > 1
subject to the initial conditions
F(0) = 0 and F(1) = 1

Computing the nth Fibonacci number recursively (top-down)
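The recursive pseudocode itself is not reproduced here; a deliberately naive Python sketch of the
direct top-down transcription of the recurrence (the function name fib is my own) would be:

    def fib(n):
        """Top-down, purely recursive computation of F(n); exponential time."""
        if n <= 1:
            return n          # F(0) = 0, F(1) = 1
        return fib(n - 1) + fib(n - 2)

The same subinstances are recomputed over and over, which is exactly the inefficiency the
dynamic programming version removes.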

4
Fibonacci Numbers, Contd.

Efficiency
let A(n) be the number of additions in computing F(n)
then A(n) = A(n-1) + A(n-2) + 1 for n > 1, where A(0) = A(1) = 0
this is essentially the Fibonacci recurrence itself, so A(n) grows exponentially,
A(n) ∈ Θ(φ^n) with φ ≈ 1.618 (the golden ratio)

5
Fibonacci Numbers, Contd.
Computing the nth Fibonacci number using a bottom-up approach while
recording the results in a table
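The bottom-up pseudocode is not reproduced here; a minimal Python sketch that records
F(0), ..., F(n) in a table (array names are my own) is:

    def fib_bottom_up(n):
        """Bottom-up computation of F(n), storing F(0..n) in a table."""
        if n == 0:
            return 0
        f = [0] * (n + 1)                # f[i] holds F(i)
        f[1] = 1
        for i in range(2, n + 1):
            f[i] = f[i - 1] + f[i - 2]   # each value is computed once from the table
        return f[n]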

Efficiency
Time Θ(n)
Space Θ(n)

6
Computing a Binomial Coefficient
Binomial coefficients C(n, k) are the coefficients of the binomial formula
(a + b)^n = C(n,0) a^n + ... + C(n,k) a^(n-k) b^k + ... + C(n,n) b^n
They satisfy the recurrence
C(i, j) = C(i-1, j-1) + C(i-1, j) for i > j > 0, with C(i, 0) = C(i, i) = 1
The value of C(n, k) can be computed by filling a table row by row: the cells in column j = 0 and
on the diagonal (i, i) hold 1s, and every other cell (marked X below) is the sum of the cell above
it and the cell above and to its left

     j  0  1  2  3  4  5
  i  0  1
     1  1  1
     2  1  X  1
     3  1  X  X  1
     4  1  X  X  X  1
     5  1  X  X  X  X  1

[Figure: table snapshots for n = 2, k = 1 and n = 3, k = 1; the cells that need to be computed are
shown in orange, and the cells the algorithm actually computes are highlighted]
7
Computing a Binomial Coefficient, Contd.
[Figure: further table snapshots for (n, k) = (3, 2), (4, 1), (4, 2), (4, 3), (5, 1), (5, 2), (5, 3),
and (5, 4), again showing which cells need to be computed in each case]
We notice that we can move along the i-dimension from 0 to n and along the j-dimension from 0 up to
the diagonal cell (i, i), but we can stop at (i, k) when i is larger than k: equivalently, j runs up
to the minimum of i and k
8
Computing a Binomial Coefficient, Contd.

The algorithm therefore fills the table row by row:
move along the i-dimension from 0 up to n
for each row i, move along the j-dimension from 0 up to the minimum of i and k

Example: C(4, 3) → n = 4, k = 3

[Figure: the table with the cells computed for C(4, 3) highlighted; for each row i ≤ 4 only the
columns j ≤ min(i, 3) are filled]

Efficiency
Time Θ(nk)
Space Θ(nk)
9
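The slides' pseudocode is not reproduced above; the following Python sketch (the function name
binomial and the full (n+1)×(k+1) table are my own choices) illustrates the same table-filling idea:

    def binomial(n, k):
        """Compute C(n, k) bottom-up by filling a table row by row."""
        table = [[0] * (k + 1) for _ in range(n + 1)]   # table[i][j] will hold C(i, j)
        for i in range(n + 1):
            for j in range(min(i, k) + 1):              # j runs up to min(i, k)
                if j == 0 or j == i:
                    table[i][j] = 1                     # column 0 and the diagonal hold 1s
                else:
                    table[i][j] = table[i - 1][j - 1] + table[i - 1][j]
        return table[n][k]

    print(binomial(4, 3))   # 4

Time and space are Θ(nk), as stated on the slide; the space could be reduced to Θ(k) by keeping only
the previous row of the table.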
Knapsack Problem
Explanation of the problem
Given n items
weights w1 w2 … wn
values v1 v2 … vn
and a knapsack of capacity W
find the most valuable subset of the items that fits into the knapsack, together with the
corresponding optimal value

To design a dynamic programming algorithm, we need to derive a recurrence relation
that expresses a solution to an instance of the knapsack problem in terms of
solutions to its smaller subinstances
10
Knapsack Problem, Contd.
Our goal is to find
F(n, W), the optimal value of a subset of the n items that fit into a knapsack of
capacity W, and
the optimal subset itself

Let us derive a recurrence relation relating F(i, j) to solutions of smaller subinstances,
where F(i, j) is the optimal value of a subset of the first i items (1 ≤ i ≤ n) that fit into a
knapsack of capacity j (1 ≤ j ≤ W)
11
Knapsack Problem, Contd.
In terms of smaller subinstances, the optimal value F(i, j), in case item
i can fit in j (j ≥ wi), is equal to the maximum of
F(i-1, j) the optimal value for capacity j considering the first i-1 elements only
and excluding item i

[Figure: a knapsack of capacity j, split into the weight wi of item i and the remaining capacity j - wi]

vi + F(i-1, j-wi) the value of item i (of weight wi) plus the optimal value for the
remaining capacity j-wi considering the first i-1 elements
12
Knapsack Problem, Contd.
In case item i cannot fit in j (j < wi), F(i, j) is equal to F(i-1, j)
The optimal value is 0 in case
there are no items (i = 0) or
there is no capacity (j = 0)
Putting the cases together:
F(i, j) = max{ F(i-1, j), vi + F(i-1, j - wi) }   if j ≥ wi
F(i, j) = F(i-1, j)                               if j < wi
F(0, j) = F(i, 0) = 0

13
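The slides evaluate this recurrence bottom-up by filling a table (next slides); as a side
illustration that is not taken from the slides, the same recurrence can also be evaluated top-down
with memoization. A minimal Python sketch (names are my own):

    from functools import lru_cache

    def knapsack_value(weights, values, W):
        """Evaluate the knapsack recurrence F(i, j) top-down with memoization."""
        @lru_cache(maxsize=None)
        def F(i, j):
            if i == 0 or j == 0:                  # no items or no capacity
                return 0
            if j < weights[i - 1]:                # item i cannot fit
                return F(i - 1, j)
            return max(F(i - 1, j),               # exclude item i
                       values[i - 1] + F(i - 1, j - weights[i - 1]))   # include item i
        return F(len(weights), W)

    print(knapsack_value((2, 1, 3, 2), (12, 10, 20, 15), 5))   # 37, the example that follows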
Knapsack Problem, Contd.
Example
Item 1: w1 = 2, v1 = 12
Item 2: w2 = 1, v2 = 10
Item 3: w3 = 3, v3 = 20
Item 4: w4 = 2, v4 = 15
Knapsack capacity W = 5

We fill a table whose rows correspond to the number of items considered (i = 0, ..., 4) and whose
columns correspond to the capacities (j = 0, ..., 5); the goal is the value of cell (4, 5), marked ?

                          capacity j
                          0   1   2   3   4   5
  number of items i   0
                      1
                      2
                      3
                      4                       ?
14
Knapsack Problem, Contd.

Start with the initial conditions: fill row i = 0 and column j = 0 with zeros

        0   1   2   3   4   5
    0   0   0   0   0   0   0
    1   0
    2   0
    3   0
    4   0

15
Knapsack Problem, Contd.

Mark the cells where wi > j (item i cannot fit); each such cell will simply get the value of the
cell directly above it

                                  0   1   2   3   4   5
                              0   0   0   0   0   0   0
  Item 1, w1 = 2, v1 = 12     1   0   0
  Item 2, w2 = 1, v2 = 10     2   0
  Item 3, w3 = 3, v3 = 20     3   0   X   X
  Item 4, w4 = 2, v4 = 15     4   0   X

16
Knapsack Problem, Contd.

For each remaining cell (i, j), compute j - wi, the capacity left over if item i is taken

                                  1          2          3          4          5
  Item 1, w1 = 2, v1 = 12     1              2-2 = 0    3-2 = 1    4-2 = 2    5-2 = 3
  Item 2, w2 = 1, v2 = 10     2   1-1 = 0    2-1 = 1    3-1 = 2    4-1 = 3    5-1 = 4
  Item 3, w3 = 3, v3 = 20     3                         3-3 = 0    4-3 = 1    5-3 = 2
  Item 4, w4 = 2, v4 = 15     4              2-2 = 0    3-2 = 1    4-2 = 2    5-2 = 3

17
Knapsack Problem, Contd.
For each of these cells (i, j), the value is the maximum of the cell directly above it, F(i-1, j),
and vi plus the value F(i-1, j - wi) found in the row above at column j - wi

                                  0   1   2   3   4   5
                              0   0   0   0   0   0   0
  Item 1, w1 = 2, v1 = 12     1   0   0  12  12  12  12
  Item 2, w2 = 1, v2 = 10     2   0  10  12  22  22  22
  Item 3, w3 = 3, v3 = 20     3   0  10  12  22  30  32
  Item 4, w4 = 2, v4 = 15     4   0  10  15  25  30  37

For example, F(2, 3) = max(F(1, 3), 10 + F(1, 2)) = max(12, 10 + 12) = 22,
and F(4, 5) = max(F(3, 5), 15 + F(3, 3)) = max(32, 15 + 22) = 37

18
Knapsack Problem, Contd.

Get the optimal subset by backtracking from cell (4, 5):
F(4, 5) = 37 ≠ F(3, 5) = 32, so item 4 is included; move to cell (3, 5 - 2) = (3, 3)
F(3, 3) = 22 = F(2, 3), so item 3 is excluded; move to cell (2, 3)
F(2, 3) = 22 ≠ F(1, 3) = 12, so item 2 is included; move to cell (1, 3 - 1) = (1, 2)
F(1, 2) = 12 ≠ F(0, 2) = 0, so item 1 is included; move to cell (0, 2 - 2) = (0, 0) and stop

Solution: items {4, 2, 1}
Total weight of these items = 5 (the full capacity)
Optimal value = 37

19
Knapsack Problem, Contd.

Notes on the algorithm:
fill the first row with zeros (no items)
fill the first column with zeros (no capacity)
a table P is used for path backtracking: whenever the value at location j - wi in the row above,
plus the value of the item in row i, exceeds the value directly above, P records that item i is taken
Running time and space are in O(nW)
(A code sketch of the table filling and backtracking is given below.)
There is also an alternate way to approach the problem

20
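The slides' pseudocode is not reproduced above; the following Python sketch (function name and data
layout are my own choices) fills the table bottom-up and then backtracks to recover the optimal subset:

    def knapsack(weights, values, W):
        """Bottom-up DP for the knapsack problem.
        Returns (optimal value, chosen item indices, 1-based)."""
        n = len(weights)
        F = [[0] * (W + 1) for _ in range(n + 1)]   # F[i][j] = best value, first i items, capacity j
        for i in range(1, n + 1):
            w, v = weights[i - 1], values[i - 1]
            for j in range(1, W + 1):
                if j >= w:
                    F[i][j] = max(F[i - 1][j], v + F[i - 1][j - w])
                else:
                    F[i][j] = F[i - 1][j]           # item i cannot fit
        items, j = [], W                            # backtrack from cell (n, W)
        for i in range(n, 0, -1):
            if F[i][j] != F[i - 1][j]:              # item i was taken
                items.append(i)
                j -= weights[i - 1]
        return F[n][W], items

    # The example instance from the slides: optimal value 37 with items {4, 2, 1}
    print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))   # (37, [4, 2, 1])

Instead of comparing F[i][j] with F[i-1][j] during backtracking, a separate table P could record the
"item taken" decisions as described on the slide; the result is the same.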
Warshall’s Algorithm
The adjacency matrix of a graph with n vertices
is an n×n Boolean matrix A = {aij} with one row and one column for each of the
graph’s vertices, in which the element in the ith row and the jth column is equal
to 1 if there is an edge from the ith vertex to the jth vertex, and equal to 0
otherwise

21
Warshall’s Algorithm, Contd.
The transitive closure of a digraph is its reachability matrix: the n×n Boolean matrix T = {tij} in
which tij = 1 if there is a nontrivial directed path from the ith vertex to the jth vertex, and
tij = 0 otherwise

22
Warshall’s Algorithm, Contd.
Warshall’s algorithm starts with the adjacency matrix and computes
the transitive closure of a digraph (all nontrivial paths in the digraph)

[Figure: a digraph on four vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, and 4→2]

   adjacency     transitive closure
   0 0 1 0       0 0 1 0
   1 0 0 1       1 1 1 1
   0 0 0 0       0 0 0 0
   0 1 0 0       1 1 1 1

23
Warshall’s Algorithm, Contd.
Warshall’s algorithm
starts with the adjacency matrix of a digraph (representing some paths
formed of one edge)
and in each iteration, considers one vertex (in order) and tries to use the existing
paths to find other paths that go through this vertex
Iteration 1: a is considered
Iteration 2: b is considered (a has already been considered in the existing paths)
Iteration 3: c is considered (a and b have already been considered in the existing paths)
………..
to eventually obtain the transitive closure

This is an application of dynamic programming since each iteration
exploits the results of the preceding iterations
24
Example
[Figure: a digraph on vertices a, b, c, d with edges a→b, b→d, d→a, and d→c]
Column: we have a path from d to a
Row: we have a path from a to b
Can we use them to find other paths with a as an intermediate? Yes,
from d to b

Column: we have paths from a to b and from d to b
Row: we have a path from b to d
Can we use them to find other paths with b as an intermediate? (a is
already an intermediate in some paths) Yes, from a to d and from d to d

Column: we have a path from d to c
Row: we have no paths from c
Can we use them to find other paths with c as an intermediate? (a and b
are already intermediates in some paths) No

Column: we have paths from a to d, from b to d, and from d to d
Row: we have paths from d to a, b, c, and d
Can we use them to find other paths with d as an intermediate? (a, b, and
c are already intermediates in some paths) Yes, many

25
Example

[Figure: the digraph on vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, and 4→2]

          0 0 1 0            0 0 1 0
   R(0) = 1 0 0 1     R(1) = 1 0 1 1
          0 0 0 0            0 0 0 0
          0 1 0 0            0 1 0 0

          0 0 1 0            0 0 1 0            0 0 1 0
   R(2) = 1 0 1 1     R(3) = 1 0 1 1     R(4) = 1 1 1 1
          0 0 0 0            0 0 0 0            0 0 0 0
          1 1 1 1            1 1 1 1            1 1 1 1

26
Warshall’s Algorithm, Contd.
In fact, Warshall’s algorithm
processes a sequence of n-by-n matrices R(0), … , R(k), … , R(n)
where R(0) is A (adjacency matrix) and R(n) is T (transitive closure)
and R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices
allowed as intermediates
[Figure: the digraph redrawn five times, once for each of R(0) through R(4)]

          0 0 1 0            0 0 1 0            0 0 1 0            0 0 1 0            0 0 1 0
   R(0) = 1 0 0 1     R(1) = 1 0 1 1     R(2) = 1 0 1 1     R(3) = 1 0 1 1     R(4) = 1 1 1 1
          0 0 0 0            0 0 0 0            0 0 0 0            0 0 0 0            0 0 0 0
          0 1 0 0            0 1 0 0            1 1 1 1            1 1 1 1            1 1 1 1
27
Warshall’s Algorithm, Contd.
On the kth iteration, the algorithm determines for every pair of vertices
i, j if a path exists from i to j with the kth vertex (implicitly, the first k
vertices) as intermediate

[Figure: a path from i to j through vertex k, combining a path from i to k and a path from k to j,
both using only lower-numbered vertices as intermediates]
Initial condition?

28
Warshall’s Algorithm, Contd.
The recurrence relating the elements of R(k) to the elements of R(k-1) is
R(k)[i,j] = R(k-1)[i,j] or ( R(k-1)[i,k] and R(k-1)[k,j] )

It implies the following rules for generating R(k) from R(k-1):
if an element is 1 in R(k-1), it stays 1 in R(k)
if an element in row i and column j is 0 in R(k-1), it is changed to 1 in R(k) if and only if the
element in its row i and column k and the element in its column j and row k are both 1 in R(k-1)
29
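The pseudocode itself does not appear above; a minimal Python sketch of these rules (using 0/1
integers and updating a copy of the matrix in place) might look like this:

    def warshall(A):
        """Transitive closure of a digraph given by its 0/1 adjacency matrix A."""
        n = len(A)
        R = [row[:] for row in A]          # R starts as R(0) = A
        for k in range(n):                 # allow vertex k as an intermediate
            for i in range(n):
                for j in range(n):
                    # R(k)[i][j] = R(k-1)[i][j] or (R(k-1)[i][k] and R(k-1)[k][j])
                    if R[i][k] and R[k][j]:
                        R[i][j] = 1
        return R

    # The digraph from the slides: edges 1→3, 2→1, 2→4, 4→2
    A = [[0, 0, 1, 0],
         [1, 0, 0, 1],
         [0, 0, 0, 0],
         [0, 1, 0, 0]]
    for row in warshall(A):
        print(row)    # reproduces the transitive closure shown above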
Warshall’s Algorithm, Contd.

Time efficiency: Θ(n³)

Space efficiency: matrices can be written over their predecessors
(with some care), so it's Θ(n²)

30
Floyd’s Algorithm
A weight matrix of a weighted graph
is a modified adjacency matrix storing the weights of the edges
A distance matrix of a graph
is a matrix containing the shortest distances between pairs of vertices

31
Floyd’s Algorithm, Contd.
Floyd’s algorithm starts with the weight matrix of a weighted graph
and computes the distance matrix of the graph (the shortest distances
between pairs of vertices)

[Figure: a weighted digraph on four vertices with its weight matrix (left) and the resulting distance
matrix (right); ∞ marks pairs of vertices with no direct edge]

32
Floyd’s Algorithm, Contd.
Floyd’s algorithm
starts with the weight matrix of a weighted graph (weights of trivial paths)
and in each iteration, considers one vertex (in order) and tries to use the existing
paths to find shorter paths that go through this vertex
Iteration 1: a is considered
Iteration 2: b is considered (a has already been considered in the existing paths)
Iteration 3: c is considered (a and b have already been considered in the existing paths)
………..
to eventually obtain the distance matrix

This is an application of dynamic programming since each iteration
exploits the results of the preceding iterations

33
Example
Column: distance 6 from d to a and distance 2 from b to a
Row: distance 3 from a to c
Can we use them to find shorter distances with a as an
intermediate? Yes, 5 from b to c and 9 from d to c (instead of ∞)
Column: distance 7 from c to b
Row: distance 2 from b to a and distance 5 from b to c
Can we use them to find shorter distances with b as an
intermediate? (a is already an intermediate in some paths) Yes, 9 from c to a
(instead of ∞)
Column: three distances, from a to c, from b to c, and from d to c
Row: three distances, from c to a, from c to b, and from c to d
Can we use them to find shorter distances with c as an
intermediate? (a and b are already intermediates in some paths) Yes, four

34
Floyd’s Algorithm, Contd.
In fact, Floyd’s algorithm
processes a sequence of n-by-n matrices D(0), … , D(k), … , D(n)
where D(0) is W (weight matrix) and D(n) is D (distance matrix)
and D(k)[i,j] = minimum distance between i and j with only the first k vertices
allowed as intermediate (increasing subsets of the vertices allowed as intermediate)

35
Floyd’s Algorithm, Contd.
On the kth iteration, the algorithm determines shortest paths between
every pair of vertices i, j with the kth vertex (implicitly, the first k
vertices) as intermediate

D(k)[i,j] = min{ D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] }   for k ≥ 1

[Figure: a path from i to j through vertex k, of length D(k-1)[i,k] + D(k-1)[k,j], compared with the
best path of length D(k-1)[i,j] that avoids k]
Initial condition?

36
Floyd’s Algorithm, Contd.

If D[i,k] + D[k,j] < D[i,j] then P[i,j] ← k (for backtracking of the shortest paths themselves)
Time efficiency: Θ(n³)
Space efficiency: matrices can be written over their predecessors
(with some care), so it's Θ(n²)

Shortest paths themselves can be found, too, how?


It can work on graphs with negative edges but without negative cycles
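The slides' pseudocode is not shown above; the following Python sketch of the same idea (the helper
path and this particular use of the predecessor table P are my own choices, one of several common
schemes) also records enough information to recover the shortest paths themselves:

    INF = float('inf')

    def floyd(W):
        """All-pairs shortest distances by Floyd's algorithm.
        Returns (D, P): D[i][j] is the shortest distance from i to j, and
        P[i][j] is an intermediate vertex on that path (or None if none was needed)."""
        n = len(W)
        D = [row[:] for row in W]              # D starts as D(0) = W
        P = [[None] * n for _ in range(n)]
        for k in range(n):                     # allow vertex k as an intermediate
            for i in range(n):
                for j in range(n):
                    if D[i][k] + D[k][j] < D[i][j]:
                        D[i][j] = D[i][k] + D[k][j]
                        P[i][j] = k            # record k for path backtracking
        return D, P

    def path(P, i, j):
        """Recover the intermediate vertices on the shortest path from i to j."""
        k = P[i][j]
        if k is None:
            return []
        return path(P, i, k) + [k] + path(P, k, j)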
37
https://medium.com/100-days-of-algorithms/day-65-floyd-warshall-2d10a6d6c49d
Questions?
