DAA Unit III
Shumama Ansa
Dynamic Programming:
General Method
• Dynamic programming is a name coined by Richard Bellman in the 1950s.
• Dynamic programming is a powerful algorithm design technique that can be used when the solution to a problem can be viewed as the result of a sequence of decisions.
• In the greedy method we make irrevocable decisions one at a time,
using a greedy criterion.
• However, in dynamic programming we examine the decision sequence to see whether an optimal decision sequence contains optimal decision subsequences.
• When optimal decision sequences contain optimal decision subsequences, we can establish recurrence equations, called dynamic-programming recurrence equations, that enable us to solve the problem efficiently.
• Dynamic programming is based on the principle of
optimality (also coined by Bellman).
• The principle of optimality states that no matter what the initial state and initial decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.
• The principle implies that an optimal decision
sequence is comprised of optimal decision
subsequences.
• Since the principle of optimality may not hold
for some formulations of some problems, it is
necessary to verify that it does hold for the
problem being solved.
• Dynamic programming cannot be applied
when this principle does not hold.
The steps in a dynamic programming solution are:
• Verify that the principle of optimality holds.
• Set up the dynamic-programming recurrence equations.
• Solve the dynamic-programming recurrence equations for the value of the optimal solution.
• Perform a traceback step in which the solution itself is constructed.
• Optimal solutions to subproblems are retained in a table, thereby avoiding the work of recomputing the answer every time a subproblem is encountered.
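As a minimal sketch of this table idea (assuming Python and a hypothetical Fibonacci example, not one from these notes), each subproblem is solved once and its answer is retained in a memo table:

```python
# Hypothetical example: Fibonacci numbers with a memo table.
# Each subproblem F(n) is solved once and stored in `memo`,
# so later requests for the same n are a table lookup.
def fib(n, memo=None):
    if memo is None:
        memo = {0: 0, 1: 1}          # base cases
    if n not in memo:
        # recurrence: F(n) = F(n-1) + F(n-2)
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(40))  # -> 102334155, in O(n) time instead of exponential
```

Without the memo table the same recursive call tree recomputes each F(k) exponentially many times.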
All-pairs shortest path problem (Floyd's algorithm)

• Ak(i, j) = min{Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j)}, where A0 is the cost matrix of the graph (∞ means no edge) and Ak(i, j) is the length of a shortest i-to-j path whose intermediate vertices come from {1, ..., k}.

A0   1  2  3
1    0  4  11
2    6  0  2
3    3  ∞  0
• Considering vertex 1 as the intermediate vertex, the 1st row and 1st column do not change.
• A1(2, 3) = min{A0(2, 3), A0(2, 1) + A0(1, 3)} = min(2, 6 + 11) = min(2, 17) = 2
• A1(3, 2) = min{A0(3, 2), A0(3, 1) + A0(1, 2)} = min(∞, 3 + 4) = min(∞, 7) = 7

A1   1  2  3
1    0  4  11
2    6  0  2
3    3  7  0
• Considering vertex 2 as the intermediate vertex, the 2nd row and 2nd column do not change.
• A2(1, 3) = min{A1(1, 3), A1(1, 2) + A1(2, 3)} = min(11, 4 + 2) = min(11, 6) = 6
• A2(3, 1) = min{A1(3, 1), A1(3, 2) + A1(2, 1)} = min(3, 7 + 6) = min(3, 13) = 3

A2   1  2  3
1    0  4  6
2    6  0  2
3    3  7  0
• Considering vertex 3 as the intermediate vertex, the 3rd row and 3rd column do not change.
• A3(1, 2) = min{A2(1, 2), A2(1, 3) + A2(3, 2)} = min(4, 6 + 7) = min(4, 13) = 4
• A3(2, 1) = min{A2(2, 1), A2(2, 3) + A2(3, 1)} = min(6, 2 + 3) = min(6, 5) = 5

A3   1  2  3
1    0  4  6
2    5  0  2
3    3  7  0
Summary of the four matrices:

A0   1  2  3        A1   1  2  3
1    0  4  11       1    0  4  11
2    6  0  2        2    6  0  2
3    3  ∞  0        3    3  7  0

A2   1  2  3        A3   1  2  3
1    0  4  6        1    0  4  6
2    6  0  2        2    5  0  2
3    3  7  0        3    3  7  0
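The matrix updates above can be sketched in Python (the function name is an assumption; the cost matrix and the final answer are the A0 and A3 of the worked example):

```python
INF = float('inf')

def floyd_warshall(a):
    """All-pairs shortest paths; a[i][j] is the cost of edge i -> j."""
    n = len(a)
    dist = [row[:] for row in a]      # A0: copy of the cost matrix
    for k in range(n):                # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # A_k(i,j) = min{ A_{k-1}(i,j), A_{k-1}(i,k) + A_{k-1}(k,j) }
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

a0 = [[0, 4, 11],
      [6, 0, 2],
      [3, INF, 0]]
print(floyd_warshall(a0))  # -> [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```

Each pass of the outer loop produces the next matrix A1, A2, A3 shown in the summary.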
Optimal binary search trees
• A binary search tree (BST), also called an ordered or sorted binary tree, is a rooted binary tree in which each internal node stores a key greater than all the keys in the node's left subtree and less than those in its right subtree.
• The time taken to search is O(log n) for a balanced tree (O(n) in the worst case).
BST with 3 nodes
• Number of binary search trees with 3 nodes =
5
• Number of binary search trees with 4 nodes =
14
• Number of binary search trees with n nodes = C(2n, n)/(n + 1) (the nth Catalan number)
• Given a fixed set of identifiers, we wish to
create a binary search tree.
• Different binary search trees for the same identifier set may have different performance characteristics.
• The tree of Figure 5.12(a), in the worst case, requires four comparisons to find an identifier, whereas the tree of Figure 5.12(b) requires only three.
• On average the two trees need 12/5 and 11/5 comparisons respectively.
• For example, in the case of tree (a), it takes 1, 2, 2, 3, and 4 comparisons respectively to find the identifiers for, do, while, int, and if.
• Thus the average number of comparisons is (1 + 2 + 2 + 3 + 4)/5 = 12/5.
• This calculation assumes that each identifier is searched for with equal probability and that no unsuccessful searches (i.e., searches for identifiers not in the tree) are made.
• Let us assume that the given set of identifiers is {a1, ..., an} with a1 < a2 < ... < an.
• Let p(i) be the probability with which we search for ai.
• Let q(i) be the probability that the identifier x being searched for satisfies ai < x < ai+1, 0 ≤ i ≤ n (assume a0 = -∞ and an+1 = +∞).
• Then ∑0≤i≤n q(i) is the probability of an unsuccessful search.
• ∑1≤i≤n p(i) + ∑0≤i≤n q(i) = 1
• Given this data, we wish to construct an optimal binary
search Tree.
• In obtaining a cost function for binary search trees, it is useful to add a fictitious node in place of every empty subtree in the search tree.
• Such nodes, called external nodes, are drawn as squares in Figure 5.13.
• All other nodes are internal nodes.
• If a binary search tree represents n identifiers, then there will be exactly n internal nodes and n + 1 (fictitious) external nodes.
• Every internal node represents a point where
a successful search may terminate.
• Every external node represents a point where
an unsuccessful search may terminate.
• If a successful search terminates at an internal node at level l, then the expected cost contribution from the internal node for ai is p(i) * level(ai).
• The identifiers not in the binary search tree can be partitioned into n + 1 equivalence classes Ei, 0 ≤ i ≤ n.
• If the failure node for Ei is at level l, then the cost contribution of this node is q(i) * (level(Ei) - 1).
• The preceding discussion leads to the following
formula for the expected cost of a binary search
tree:
∑1≤i≤n p(i) * level(ai) + ∑0≤i≤n q(i) * (level(Ei) - 1)
• We define an optimal binary search tree for the identifier set {a1, a2, ..., an} to be a binary search tree for which the above cost formula is minimum.
• Example 1:
Let n = 4, and (a1, a2, a3, a4) = (do, if, int, while)
Let P(1:4) = (3, 3, 1, 1) and Q(0:4) = (2, 3, 1, 1, 1)
Initially w(i, i) = q(i), c(i, i) = 0 and r(i, i) = 0
• w(0,0) = 2, w(1,1) = 3, w(2,2) = 1, w(3,3) = 1, w(4,4) = 1
• c(0,0) = 0, c(1,1) = 0, c(2,2) = 0, c(3,3) = 0, c(4,4) = 0
• r(0,0) = 0, r(1,1) = 0, r(2,2) = 0, r(3,3) = 0, r(4,4) = 0
• c(i, j) = min over i < k ≤ j of {c(i, k-1) + c(k, j)} + w(i, j)
• where w(i, j) = P(j) + Q(j) + w(i, j-1)
• and r(i, j) = the value of k that achieves the minimum
• First, compute all C(i, j) such that j - i = 1; j = i + 1 and, as 0 ≤ i ≤ 3, i = 0, 1, 2 and 3; i < k ≤ j.
• Start with i = 0; so j = 1; as i < k ≤ j, so the possible
value for k =1
W (0, 1) = P (1) + Q (1) + W (0, 0) = 3 + 3 + 2 =8
C (0, 1) = W (0, 1) + min {C (0, 0) + C (1, 1)} =8
R (0, 1) = 1 (value of 'K' that is minimum in the above
equation).
• Next with i = 1; so j = 2; as i < k ≤ j, so the possible value for
k =2
W (1, 2) = P (2) + Q (2) + W (1, 1) = 3 + 1 + 3 =7
C (1, 2) = W (1, 2) + min {C (1, 1) + C (2, 2)} =7
R (1, 2) =2
• Next with i = 2; so j = 3; as i < k ≤ j, so the possible
value for k =3
W (2, 3) = P (3) + Q (3) + W (2, 2) = 1 + 1 + 1 =3
C (2, 3) = W (2, 3) + min {C (2, 2) + C (3, 3)} =3
R (2, 3) =3
• Next with i = 3; so j = 4; as i < k ≤ j, so the possible value for k = 4
W (3, 4) = P (4) + Q (4) + W (3, 3) = 1 + 1 + 1 = 3
C (3, 4) = W (3, 4) + min {C (3, 3) + C (4, 4)} = 3
R (3, 4) = 4
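The w/c/r recurrences above can be sketched in Python (a minimal sketch; the function name and array layout are assumptions, while the recurrences and the data P = (3,3,1,1), Q = (2,3,1,1,1) are from the example):

```python
def optimal_bst(p, q):
    """c[i][j]: cost of an optimal BST over a_{i+1}..a_j; r[i][j]: its root."""
    n = len(p)                              # p(1..n) stored as p[k-1]; q(0..n) as q[k]
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                      # w(i,i) = q(i), c(i,i) = r(i,i) = 0
    for length in range(1, n + 1):          # solve subproblems with j - i = length
        for i in range(n - length + 1):
            j = i + length
            w[i][j] = w[i][j - 1] + p[j - 1] + q[j]   # w(i,j) = P(j)+Q(j)+w(i,j-1)
            # c(i,j) = w(i,j) + min over i < k <= j of c(i,k-1) + c(k,j)
            best_k = min(range(i + 1, j + 1),
                         key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = w[i][j] + c[i][best_k - 1] + c[best_k][j]
            r[i][j] = best_k
    return w, c, r

w, c, r = optimal_bst(p=[3, 3, 1, 1], q=[2, 3, 1, 1, 1])
print(c[0][1], c[1][2], c[2][3])  # -> 8 7 3, matching the steps above
print(c[0][4], r[0][4])           # -> 32 2: total cost 32, root a2 = "if"
```

Continuing the same length-by-length fill to c(0, 4) gives cost 32 with root a2 (the identifier "if"), from which the optimal tree is traced back via r.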
This table fill is for the 0/1 knapsack problem with profits P = (1, 2, 5, 6), weights W = (2, 3, 4, 5) and capacity m = 8; T(i, j) is the maximum profit achievable using the first i items with capacity j.

After items 1 and 2:

Pi  Wi  i\j  0  1  2  3  4  5  6  7  8
        0    0  0  0  0  0  0  0  0  0
1   2   1    0  0  1  1  1  1  1  1  1
2   3   2    0  0  1  2  2  3  3  3  3
5   4   3
6   5   4

After item 3 (P3 = 5, W3 = 4), row 3 becomes:

5   4   3    0  0  1  2  5  5  6  7  7

After item 4 (P4 = 6, W4 = 5), row 4 becomes:

6   5   4    0  0  1  2  5  6  6  7  8
For j = 6, T(4,6) = max{p(4) + T(3,1), T(3,6)} = max{6+0, 6} = max{6, 6} = 6

For j = 7, T(4,7) = max{p(4) + T(3,2), T(3,7)} = max{6+1, 7} = max{7, 7} = 7

For j = 8, T(4,8) = max{p(4) + T(3,3), T(3,8)} = max{6+2, 7} = max{8, 7} = 8
The completed table:

i\j  0  1  2  3  4  5  6  7  8
0    0  0  0  0  0  0  0  0  0
1    0  0  1  1  1  1  1  1  1
2    0  0  1  2  2  3  3  3  3
3    0  0  1  2  5  5  6  7  7
4    0  0  1  2  5  6  6  7  8

The optimal profit is T(4, 8) = 8.
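The table fill can be sketched in Python (the function name is an assumption; the recurrence T(i,j) = max{T(i-1,j), p(i) + T(i-1, j-w(i))} and the data are those of the example):

```python
def knapsack_table(p, w, m):
    """T[i][j] = best profit using the first i items with capacity j."""
    n = len(p)
    T = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(m + 1):
            T[i][j] = T[i - 1][j]                      # skip item i
            if w[i - 1] <= j:                          # take item i if it fits
                T[i][j] = max(T[i][j], p[i - 1] + T[i - 1][j - w[i - 1]])
    return T

T = knapsack_table(p=[1, 2, 5, 6], w=[2, 3, 4, 5], m=8)
print(T[4])  # -> [0, 0, 1, 2, 5, 6, 6, 7, 8], the last row of the table
```

Each row of T reproduces the corresponding row of the hand-filled table, and T[4][8] = 8 is the optimal profit.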