
Greedy Methods

Manoj Kumar
DTU, Delhi
Greedy algorithms
• A greedy algorithm is an algorithm that follows
the problem-solving heuristic of making the locally
optimal choice at each stage with the hope of finding
a global optimum.
Optimization problems
• An optimization problem is one in which you want to
find, not just a solution, but the best solution
• A “greedy algorithm” sometimes works well for
optimization problems.
• A greedy algorithm works in phases. At each phase:
▫ You take the best you can get right now, without regard
for future consequences
▫ You hope that by choosing a local optimum at each
step, you will end up at a global optimum.
Greedy algorithms have five pillars
• A candidate set, from which a solution is created.
• A selection function, which chooses the best
candidate to be added to the solution.
• A feasibility function, which is used to determine if a
candidate can contribute to a solution.
• An objective function, which assigns a value to a
solution, or a partial solution, and
• A solution function, which will indicate when we
have discovered a complete solution.
Example: Making Change
• Suppose you want to make change for a certain
amount of money, using the fewest possible notes and
coins
• A greedy algorithm for this would be:
At each step, take the largest note or coin
that does not overshoot the remaining amount
▫ Example: To make 758, you can choose:
a 500 rupees note
two 100 rupees notes,
a 50 rupees note,
a 5 rupees coin,
a 2 rupee coin
a 1 rupee coin
• For Indian currency denominations, the greedy algorithm
always gives the optimal solution (this is not true for
every possible set of denominations)
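The greedy rule above can be sketched in Python. This is a minimal illustration; the denomination list is an assumption based on common Indian notes and coins, not something fixed by the slides.

```python
def make_change(amount, denominations=(500, 100, 50, 20, 10, 5, 2, 1)):
    """Greedy change-making: repeatedly take the largest note/coin
    that does not overshoot the remaining amount.
    (Denomination list is an assumed example.)"""
    result = []
    for d in denominations:
        while amount >= d:      # largest denomination that still fits
            result.append(d)
            amount -= d
    return result
```

For the slide's example, `make_change(758)` picks one 500 note, two 100 notes, one 50 note, and 5, 2, and 1 coins.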
The Knapsack Problem
• The famous knapsack problem:
▫ A thief breaks into a museum. Fabulous paintings,
sculptures, and jewels are everywhere. The thief has a
good eye for the value of these objects, and knows that
each will fetch hundreds or thousands of dollars on the
clandestine art collector’s market. But, the thief has
only brought a single knapsack to the scene of the
robbery, and can take away only what he can carry.
What items should the thief take to maximize the haul?
The Knapsack Problem
• More formally, the 0-1 knapsack problem:
▫ The thief must choose among n items, where the ith
item is worth bi dollars and weighs wi pounds
▫ Carrying at most W pounds, maximize the total value
Note: assume bi, wi, and W are all integers;
each item must be taken or left in its entirety.
• A variation, the fractional knapsack problem:
▫ Thief can take fractions of items
▫ Think of items in 0-1 problem as gold ingots, in
fractional problem as buckets of gold dust
Fractional Knapsack problem
• Given: A set S of n items, with each item i having
▫ bi - a positive benefit
▫ wi - a positive weight
• Goal: Choose items with maximum total benefit but with
weight at most W.
• If we are allowed to take fractional amounts, then this is the
fractional knapsack problem.
▫ In this case, we let xi denote the amount we take of item i

▫ Objective: maximize Σi∈S bi (xi / wi)
▫ Constraint: Σi∈S xi ≤ W
Fractional Knapsack: Example
• Given: A set S of n items, with each item i having
▫ bi - a positive benefit
▫ wi - a positive weight
• Goal: Choose items with maximum total benefit but with
weight at most W.
“knapsack” capacity: 10 ml

Items:          1      2      3      4      5
Weight:        4 ml   8 ml   2 ml   6 ml   1 ml
Benefit:       $12    $32    $40    $30    $50
Value:          3      4      20     5      50
($ per ml)

Solution (total benefit $124):
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
Fractional Knapsack problem
• Greedy choice: keep taking the item with the highest
value (benefit-to-weight ratio).
▫ Using a heap-based priority queue to store the items,
the time complexity is O(n log n).
• Correctness: suppose there is a better solution
▫ then there is an item i with higher value than a chosen
item j (i.e., vj < vi); if we replace some of j with i,
we get a better solution
▫ Thus, there is no better solution than the greedy one

Algorithm fractionalKnapsack(S, W)
  Input: set S of items with benefit bi and weight wi; maximum weight W
  Output: amount xi of each item i to maximize benefit with weight at most W
  for each item i in S
    xi ← 0
    vi ← bi / wi          {value}
  w ← 0                   {current total weight}
  while w < W
    remove item i with highest vi
    xi ← min{wi, W − w}
    w ← w + min{wi, W − w}
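A Python sketch of the fractionalKnapsack pseudocode, using the standard-library heapq module as the heap-based priority queue (function and variable names here are illustrative, not from the slides):

```python
import heapq

def fractional_knapsack(items, W):
    """items: list of (benefit, weight) pairs; W: capacity.
    Returns (total benefit, list of (item index, amount taken))."""
    # Max-heap keyed on value = benefit / weight (heapq is a min-heap,
    # so we negate the value).
    heap = [(-b / w, i, b, w) for i, (b, w) in enumerate(items)]
    heapq.heapify(heap)
    total, taken, cap = 0.0, [], W
    while cap > 0 and heap:
        neg_v, i, b, w = heapq.heappop(heap)  # item with highest value
        amount = min(w, cap)                  # xi = min{wi, W - w}
        total += (b / w) * amount
        taken.append((i, amount))
        cap -= amount
    return total, taken
```

On the 10 ml example above this takes 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, and 1 ml of item 2, for a total benefit of $124.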
The Knapsack Problem
• The optimal solution to the fractional knapsack
problem can be found with a greedy algorithm.
• The optimal solution to the 0-1 Knapsack problem
cannot be found with the same greedy strategy
▫ Greedy strategy: take in order of dollars/pound
▫ Example: 3 items weighing 10, 20, and 30 pounds,
knapsack can hold 50 pounds
Suppose item 1 is worth $75, item 2 is worth $100, and
item 3 is worth $200 (ratios 7.5, 5, and about 6.67
dollars/pound).
The greedy algorithm selects items 1 and 3, with total
value = $275.
But the best selection is items 2 and 3, with total value =
$300.
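The gap between the ratio-greedy answer and the optimum on this instance can be checked against an exhaustive search over subsets. This is an illustrative sketch; the function names are assumptions, and brute force is only feasible because the instance is tiny.

```python
from itertools import combinations

def ratio_greedy(weights, values, W):
    """Take items in decreasing dollars-per-pound order while they fit."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, cap = 0, W
    for i in order:
        if weights[i] <= cap:
            total += values[i]
            cap -= weights[i]
    return total

def brute_force(weights, values, W):
    """Try every subset of items -- fine for tiny instances like this one."""
    best = 0
    for r in range(len(weights) + 1):
        for subset in combinations(range(len(weights)), r):
            if sum(weights[i] for i in subset) <= W:
                best = max(best, sum(values[i] for i in subset))
    return best
```

With weights 10, 20, 30 and values 75, 100, 200 at capacity 50, the greedy total falls short of the brute-force optimum, confirming the slide's point.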
Shortest paths on a special graph
• Problem: Find a shortest path from v0 to v3.
• The greedy method can solve this problem.
• The shortest path: 1 + 2 + 4 = 7.
Shortest paths on a multi-stage graph
• Problem: Find a shortest path from v0 to v3 in the multi-
stage graph.

• Greedy method: v0v1,2v2,1v3 = 23


• Optimal: v0v1,1v2,2v3 = 7
• The greedy method does not work.
Other Greedy Algorithms
• MST algorithms
▫ Kruskal’s and Prim’s Algorithms
• Single Source Shortest Path Algorithm
▫ Dijkstra's Algorithm
• Huffman Coding
Dynamic Programming
Manoj Kumar
DTU, Delhi
Dynamic Programming
• Dynamic Programming is an algorithm design
technique for optimization problems: often
minimizing or maximizing.
• Like divide and conquer, DP solves problems by
combining solutions to subproblems.
• Unlike divide and conquer, subproblems are not
independent.
▫ Subproblems may share subsubproblems;
▫ however, the solution to one subproblem may not affect the
solutions to other subproblems of the same problem. (More
on this later.)
Dynamic Programming...
• DP reduces computation by
▫ Solving subproblems in a bottom-up fashion.
▫ Storing solution to a subproblem the first time it is solved.
▫ Looking up the solution when subproblem is encountered
again.
• Key: determine structure of optimal solutions
Steps in Dynamic Programming
1. Characterize structure of an optimal solution.
2. Define value of optimal solution recursively.
3. Compute optimal solution values either top-down
with caching or bottom-up in a table.
4. Construct an optimal solution from computed
values.
We’ll study these with the help of examples.
0/1 Knapsack
• Problem statement:
▫ A thief robbing a store can carry a maximum weight
of W in the knapsack. There are n items; the ith item
weighs wi and is worth vi dollars. What items should the
thief take?
• Exhibits no greedy-choice property.
▫ No greedy algorithm is guaranteed to be optimal.
• Exhibits the optimal-substructure property.
• So a dynamic programming algorithm is used.
0/1 Knapsack Problem: Formal description

(of items to take)


0/1 Knapsack: Solution
• Let i be the highest-numbered item in an optimal
solution S for weight W. Then S′ = S − {i} is an
optimal solution for weight W − wi, and the value of
solution S is vi plus the value of the subproblem.
• We can express this fact in the following formula:
define c[i, w] to be the value of the solution for items
1, 2, . . . , i and maximum weight w. Then

c[i, w] = 0                                    if i = 0 or w = 0
c[i, w] = c[i−1, w]                            if i > 0 and w < wi
c[i, w] = max{vi + c[i−1, w−wi], c[i−1, w]}    if i > 0 and w ≥ wi
0/1 Knapsack: Solution
• This says that the value of the solution for i items either
includes the ith item,
▫ in which case it is vi plus a subproblem solution for (i − 1)
items and the weight excluding wi, or
▫ does not include the ith item, in which case it is a
subproblem solution for (i − 1) items and the same weight.
• That is, if the thief picks item i, the thief gains vi in
value, can choose from items 1, 2, . . . , i − 1 up to the
weight limit W − wi, and gets c[i − 1, W − wi] additional
value.
• On the other hand, if the thief decides not to take item i,
the thief can choose from items 1, 2, . . . , i − 1 up to the
weight limit w, and gets c[i − 1, w] value. The better of
these two choices should be made.
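As noted in the steps of dynamic programming earlier, this recurrence can be computed top-down with caching. A minimal Python sketch (names are illustrative; lru_cache serves as the memo table):

```python
from functools import lru_cache

def knapsack_value(v, w, W):
    """Top-down evaluation of the recurrence
    c[i, cap] = max(v[i-1] + c[i-1, cap - w[i-1]], c[i-1, cap])."""
    @lru_cache(maxsize=None)
    def c(i, cap):
        if i == 0 or cap == 0:
            return 0                      # base case: no items or no capacity
        if w[i - 1] > cap:
            return c(i - 1, cap)          # item i cannot fit
        # take item i, or leave it: keep the better choice
        return max(v[i - 1] + c(i - 1, cap - w[i - 1]), c(i - 1, cap))
    return c(len(v), W)
```

On the earlier $75/$100/$200 instance this returns the optimal value 300.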
0/1 Knapsack: Solution
• The algorithm takes as input the
▫ maximum weight W,
▫ the number of items n,
▫ and the two sequences v = <v1, v2, . . . , vn> and
w = <w1, w2, . . . , wn>.
• It stores the c[i, j] values in a table, that is, a two-
dimensional array c[0..n, 0..W], whose entries are
computed in row-major order.
• That is, the first row of c is filled in from left to right,
then the second row, and so on.
• At the end of the computation, c[n, W] contains the
maximum value that can be packed into the knapsack.

0/1 Knapsack: Algorithm
Dynamic-0-1-knapsack (v, w, n, W)
  for w ← 0 to W
    do c[0, w] ← 0
  for i ← 1 to n
    do c[i, 0] ← 0
       for w ← 1 to W
         do if wi ≤ w
              then if vi + c[i−1, w−wi] > c[i−1, w]
                     then c[i, w] ← vi + c[i−1, w−wi]
                     else c[i, w] ← c[i−1, w]
              else c[i, w] ← c[i−1, w]

• The set of items to take can be deduced from the table, starting
at c[n, W] and tracing backwards where the optimal values
came from. If c[i, w] = c[i−1, w], item i is not part of the
solution, and we continue tracing with c[i−1, w]. Otherwise
item i is part of the solution, and we continue tracing with
c[i−1, w−wi].
Analysis
• This Dynamic-0-1-knapsack algorithm takes Θ(nW)
time, broken up as follows: Θ(nW) time to fill the c
table, which has (n+1)·(W+1) entries, each requiring
Θ(1) time to compute, and O(n) time to trace the solution,
because the tracing process starts in row n of the table
and moves up one row at each step.
c[i,w]
• The algorithm computing c[i,w] does not keep a record
of which subset of items gives the optimal solution.
• To compute the actual subset, we can add an auxiliary
boolean array keep[i,w], which is 1 if we decide to
take the ith item in c[i, w] and 0 otherwise.
Dynamic-0-1-knapsack (v, w, n, W)
  for w ← 0 to W
    do c[0, w] ← 0
  for i ← 1 to n
    do c[i, 0] ← 0
       for w ← 1 to W
         do if wi ≤ w
              then if vi + c[i−1, w−wi] > c[i−1, w]
                     then c[i, w] ← vi + c[i−1, w−wi]
                          keep[i, w] ← 1
                     else c[i, w] ← c[i−1, w]
                          keep[i, w] ← 0
              else c[i, w] ← c[i−1, w]
                   keep[i, w] ← 0
Constructing the Optimal Solution
Question:
• How do we use values in keep[i, w] to determine the
subset T of items having the maximum value?
• If keep[n, W] is 1, then n ∈ T; we can repeat this
argument for keep[n−1, W−wn].
• If keep[n, W] is 0, then n ∉ T; we can repeat this
argument for keep[n−1, W].
• Therefore the following part of the program will
output the elements of T.

K ← W
for i ← n downto 1
  do if (keep[i, K] == 1)
       then print i
            K ← K − wi
Complete Algorithm
Dynamic-0-1-knapsack (v, w, n, W)
  for w ← 0 to W
    do c[0, w] ← 0
  for i ← 1 to n
    do c[i, 0] ← 0
       for w ← 1 to W
         do if wi ≤ w
              then if vi + c[i−1, w−wi] > c[i−1, w]
                     then c[i, w] ← vi + c[i−1, w−wi]
                          keep[i, w] ← 1
                     else c[i, w] ← c[i−1, w]
                          keep[i, w] ← 0
              else c[i, w] ← c[i−1, w]
                   keep[i, w] ← 0

K ← W
for i ← n downto 1
  do if (keep[i, K] == 1)
       then print i
            K ← K − wi
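A Python rendering of the complete algorithm, including the keep table and the traceback. This is an illustrative sketch; item indices are 1-based, as in the pseudocode.

```python
def knapsack_with_items(v, w, W):
    """Bottom-up 0/1 knapsack following the slides' pseudocode:
    fills c[0..n][0..W] row by row and a parallel keep table,
    then traces back to recover the chosen item indices (1-based)."""
    n = len(v)
    c = [[0] * (W + 1) for _ in range(n + 1)]
    keep = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(1, W + 1):
            # take item i only if it fits and improves on leaving it
            if w[i - 1] <= cap and v[i - 1] + c[i - 1][cap - w[i - 1]] > c[i - 1][cap]:
                c[i][cap] = v[i - 1] + c[i - 1][cap - w[i - 1]]
                keep[i][cap] = 1
            else:
                c[i][cap] = c[i - 1][cap]
    # Traceback: K <- W, then walk rows n down to 1.
    T, K = [], W
    for i in range(n, 0, -1):
        if keep[i][K] == 1:
            T.append(i)
            K -= w[i - 1]
    return c[n][W], sorted(T)
```

On the $75/$100/$200 instance with W = 50 it returns value 300 with items {2, 3}, matching the earlier hand analysis.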
Example

c[i,w]  (table not legible in this copy)

keep[i,w]  (rows i = 0..4, columns w = 0..10):
i=0:  0 0 0 0 0 0 0 0 0 0 0
i=1:  0 0 0 0 0 1 1 1 1 1 1
i=2:  0 0 0 0 1 1 1 1 1 1 1
i=3:  0 0 0 0 0 0 0 0 0 0 1
i=4:  0 0 0 1 1 1 1 1 1 1 1

Solution: T = {2, 4}
Longest increasing subsequence (LIS)
• The longest increasing subsequence problem is to find a
longest increasing subsequence of a given sequence of
distinct integers a1 a2 … an.
e.g. 9 2 5 3 7 11 8 10 13 6
▫ 2 3 7 and 5 7 10 13 are increasing subsequences.
▫ 9 7 11 and 3 5 11 13 are not increasing subsequences.
We want to find a longest one.
A naive approach for LIS
• Let L[i] be the length of a longest increasing
subsequence ending at position i.
  L[i] = 1 + max j = 0..i−1 {L[j] | aj < ai}
  (use a dummy a0 = minimum, and L[0] = 0)

  a:    9  2  5  3  7  11  8  10  13  6
  L[i]: 1  1  2  2  3   4  ?
A naive approach for LIS

L[i] = 1 + max j = 0..i−1 {L[j] | aj < ai}

  a:    9  2  5  3  7  11  8  10  13  6
  L[i]: 1  1  2  2  3   4  4   5   6  3

The maximum length is 6.
The subsequence 2, 3, 7, 8, 10, 13 is a longest
increasing subsequence.
This method runs in O(n²) time.
/* Naive O(n^2) LIS: L[i] = length of the LIS ending at i,
   prev[i] = previous index in that subsequence (-1 if none).
   Assumes array[N] holds the N input integers. */
int L[N], prev[N];
int maxLength = 1, bestEnd = 0;
L[0] = 1;
prev[0] = -1;
for (int i = 1; i < N; i++)
{
    L[i] = 1;
    prev[i] = -1;
    for (int j = i - 1; j >= 0; j--)
        if (L[j] + 1 > L[i] && array[j] < array[i])
        {
            L[i] = L[j] + 1;    /* extend the best subsequence ending at j */
            prev[i] = j;
        }
    if (L[i] > maxLength)       /* track the overall best ending position */
    {
        bestEnd = i;
        maxLength = L[i];
    }
}
Binary search
• Given an ordered sequence x1 x2 ... xn, where
x1 < x2 < ... < xn, and a number y, a binary search
finds the largest xi such that xi < y in O(log n) time.
The problem size shrinks: n → n/2 → n/4 → ...
Binary search
• How many steps does a binary search take to reduce the
problem size to 1?
  n → n/2 → n/4 → n/8 → n/16 → ... → 1
  O(log n) steps.
An O(n log n) method for LIS
• Define BestEnd[k] to be the smallest number that ends
an increasing subsequence of length k.

  a:          9  2  5  3  7  11  8  10  13  6
  BestEnd[1]: 9  2  2  2  2   2  2   2   2
  BestEnd[2]:       5  3  3   3  3   3   3
  BestEnd[3]:             7   7  7   7   7
  BestEnd[4]:                11  8   8   8
  BestEnd[5]:                        10  10
  BestEnd[6]:                            13

For each position, we perform a binary search to update
BestEnd. Therefore, the running time is O(n log n).
Let S[pos] be defined as the smallest integer that ends an
increasing subsequence of length pos.
Now iterate through every integer X of the input set and do
the following:
1. If X > the last element in S, then append X to the end of S.
This essentially means we have found a new longest LIS.
2. Otherwise find the smallest element in S which is ≥ X,
and change it to X. Because S is sorted at any time, the
element can be found using binary search in O(log N).
Total runtime: N integers, each with a binary search, gives
N · log(N) = O(N log N).
Now let's do a real example:
Set of integers: 2 6 3 4 1 2 9 5 8
Steps:
0. S = {} - Initialize S to the empty set
1. S = {2} - New largest LIS
2. S = {2, 6} - New largest LIS
3. S = {2, 3} - Changed 6 to 3
4. S = {2, 3, 4} - New largest LIS
5. S = {1, 3, 4} - Changed 2 to 1
6. S = {1, 2, 4} - Changed 3 to 2
7. S = {1, 2, 4, 9} - New largest LIS
8. S = {1, 2, 4, 5} - Changed 9 to 5
9. S = {1, 2, 4, 5, 8} - New largest LIS
So the length of the LIS is 5 (the size of S).
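The steps above can be sketched in Python using the standard bisect module for the binary search. This is an illustrative sketch that returns only the LIS length; the function name is an assumption.

```python
from bisect import bisect_left

def lis_length(seq):
    """O(n log n) LIS length. S[pos] holds the smallest integer that
    ends an increasing subsequence of length pos+1 (0-based)."""
    S = []
    for x in seq:
        pos = bisect_left(S, x)   # index of smallest element >= x
        if pos == len(S):
            S.append(x)           # x extends the longest subsequence
        else:
            S[pos] = x            # replace that element with x
    return len(S)
```

For the worked example `2 6 3 4 1 2 9 5 8` this returns 5, and for the earlier sequence `9 2 5 3 7 11 8 10 13 6` it returns 6, matching both slide walkthroughs.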
