Greedy Methods: Manoj Kumar DTU, Delhi
Greedy algorithms
• A greedy algorithm is an algorithm that follows
the problem solving heuristic of making the locally
optimal choice at each stage with the hope of finding
a global optimum.
Optimization problems
• An optimization problem is one in which you want to
find, not just a solution, but the best solution
• A “greedy algorithm” sometimes works well for
optimization problems.
• A greedy algorithm works in phases. At each phase:
▫ You take the best you can get right now, without regard
for future consequences
▫ You hope that by choosing a local optimum at each
step, you will end up at a global optimum.
Greedy algorithms have five pillars
• A candidate set, from which a solution is created.
• A selection function, which chooses the best
candidate to be added to the solution.
• A feasibility function, that is used to determine if a
candidate can be used to contribute to a solution.
• An objective function, which assigns a value to a
solution, or a partial solution, and
• A solution function, which will indicate when we
have discovered a complete solution.
Example: Making Change
• Suppose you want to make change for a certain
amount of money, using the fewest possible notes and
coins
• A greedy algorithm for this would be:
At each step, take the largest note or coin
that does not overshoot the remaining amount
▫ Example: To make 758, you can choose:
a 500 rupees note
two 100 rupees notes,
a 50 rupees note,
a 5 rupees coin,
a 2 rupee coin
a 1 rupee coin
• For this currency system, the greedy algorithm always
gives the optimum solution (this does not hold for every
system of denominations)
The Knapsack Problem
• The famous knapsack problem:
▫ A thief breaks into a museum. Fabulous paintings,
sculptures, and jewels are everywhere. The thief has a
good eye for the value of these objects, and knows that
each will fetch hundreds or thousands of dollars on the
clandestine art collector’s market. But, the thief has
only brought a single knapsack to the scene of the
robbery, and can take away only what he can carry.
What items should the thief take to maximize the haul?
The Knapsack Problem
• More formally, the 0-1 knapsack problem:
▫ The thief must choose among n items, where the ith
item is worth bi dollars and weighs wi pounds
▫ Carrying at most W pounds, maximize the value taken
Note: assume bi, wi, and W are all integers
each item must be taken or left in its entirety.
• A variation, the fractional knapsack problem:
▫ Thief can take fractions of items
▫ Think of items in 0-1 problem as gold ingots, in
fractional problem as buckets of gold dust
Fractional Knapsack problem
• Given: A set S of n items, with each item i having
▫ bi - a positive benefit
▫ wi - a positive weight
• Goal: Choose items with maximum total benefit but with
weight at most W.
• If we are allowed to take fractional amounts, then this is the
fractional knapsack problem.
▫ In this case, we let xi denote the amount we take of item i
▫ Objective: maximize ∑_{i∈S} bi (xi / wi)
▫ Constraint: ∑_{i∈S} xi ≤ W
Fractional Knapsack: Example
• Given: A set S of n items, with each item i having
▫ bi - a positive benefit
▫ wi - a positive weight
• Goal: Choose items with maximum total benefit but with
weight at most W.
“knapsack” capacity: 10 ml

Items:           1     2     3     4     5
Weight:         4 ml  8 ml  2 ml  6 ml  1 ml
Benefit:        $12   $32   $40   $30   $50
Value ($/ml):    3     4    20     5    50

Solution:
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
Fractional Knapsack problem
• Greedy choice: Keep taking the item with the highest
value (benefit-to-weight ratio).
▫ Using a heap-based priority queue to store the items,
the time complexity is O(n log n).
• Correctness: Suppose there is a better solution
▫ then there is an item i with higher value than a chosen
item j (i.e., vj < vi); if we replace some of j with i, we
get a better solution
▫ Thus, there is no better solution than the greedy one

Algorithm fractionalKnapsack(S, W)
  Input: set S of items with benefit bi and weight wi; max weight W
  Output: amount xi of each item i, maximizing benefit with
          weight at most W
  for each item i in S
    xi ← 0
    vi ← bi / wi          {value}
  w ← 0                   {current total weight}
  while w < W
    remove item i with highest vi
    xi ← min{wi, W − w}
    w ← w + min{wi, W − w}
The Knapsack Problem
• The optimal solution to the fractional knapsack
problem can be found with a greedy algorithm.
• The optimal solution to the 0-1 Knapsack problem
cannot be found with the same greedy strategy
▫ Greedy strategy: take in order of dollars/pound
▫ Example: 3 items weighing 10, 20, and 30 pounds,
knapsack can hold 50 pounds
Suppose item 1 is worth $75, item 2 is worth $100 and
item 3 is worth $200.
The dollars/pound ratios are 7.5, 5, and 6.67, so the greedy
algorithm selects items 1 and 3, with total value = $275
But the best selection is items 2 and 3, with total value =
$300.
Shortest paths on a special graph
• Problem: Find a shortest path from v0 to v3.
• The greedy method can solve this problem.
• The shortest path: 1 + 2 + 4 = 7.
Shortest paths on a multi-stage graph
• Problem: Find a shortest path from v0 to v3 in the multi-
stage graph.
• The set of items to take can be deduced from the table, starting
at c[n, W] and tracing backwards where the optimal values
came from. If c[i, w] = c[i-1, w], item i is not part of the
solution, and we continue tracing with c[i-1, w]. Otherwise
item i is part of the solution, and we continue tracing with
c[i-1, w-wi].
Analysis
• This dynamic-0-1-knapsack algorithm takes Θ(nW)
time, broken up as follows: Θ(nW) time to fill the c-
table, which has (n+1)·(W+1) entries, each requiring
Θ(1) time to compute, and O(n) time to trace the solution,
because the tracing process starts in row n of the table
and moves up 1 row at each step.
c[i,w]
• The algorithm computing c[i,w] does not keep record
of which subset of items gives the optimal solution.
• To compute the actual subset, we can add an auxiliary
boolean array keep[i,w] which is 1 if we decide to
take the ith item in c[i, w] and 0 otherwise.
Dynamic-0-1-knapsack (v, w, n, W)
  for w ← 0 to W
    do c[0, w] ← 0
  for i ← 1 to n
    do c[i, 0] ← 0
       for w ← 1 to W
         do if wi ≤ w
              then if vi + c[i-1, w-wi] > c[i-1, w]
                     then c[i, w] ← vi + c[i-1, w-wi]
                          keep[i, w] ← 1
                     else c[i, w] ← c[i-1, w]
                          keep[i, w] ← 0
              else c[i, w] ← c[i-1, w]
                   keep[i, w] ← 0
Constructing the Optimal Solution
Question:
• How do we use values in keep[i, w] to determine the
subset T of items having the maximum value?
• If keep[n, W] is 1, then n ∈ T, and we can repeat this
argument for keep[n-1, W-wn].
• If keep[n, W] is 0, then n ∉ T, and we can repeat this
argument for keep[n-1, W].
• Therefore the following part of the program will
output the elements of T.
K ← W
for i ← n downto 1
  do if (keep[i, K] == 1)
       then print i
            K ← K - wi
Complete Algorithm
Dynamic-0-1-knapsack (v, w, n, W)
  for w ← 0 to W
    do c[0, w] ← 0
  for i ← 1 to n
    do c[i, 0] ← 0
       for w ← 1 to W
         do if wi ≤ w
              then if vi + c[i-1, w-wi] > c[i-1, w]
                     then c[i, w] ← vi + c[i-1, w-wi]
                          keep[i, w] ← 1
                     else c[i, w] ← c[i-1, w]
                          keep[i, w] ← 0
              else c[i, w] ← c[i-1, w]
                   keep[i, w] ← 0
  K ← W
  for i ← n downto 1
    do if (keep[i, K] == 1)
         then print i
              K ← K - wi
Example
keep[i, w] table, for w = 0 … 10 (c[i, w] table not shown):

i = 0:  0 0 0 0 0 0 0 0 0 0 0
i = 1:  0 0 0 0 0 1 1 1 1 1 1
i = 2:  0 0 0 0 1 1 1 1 1 1 1
i = 3:  0 0 0 0 0 0 0 0 0 0 1
i = 4:  0 0 0 1 1 1 1 1 1 1 1

Solution: T = {2, 4}
Longest increasing subsequence (LIS)
Sequence: 9 2 5 3 7 11 8 10 13 6
L[i]:     1 1 2 2 3  4 ?
A naive approach for LIS
Sequence: 9 2 5 3 7 11 8 10 13 6
L[i]:     1 1 2 2 3  4 4  5  6 3
// O(N^2) DP: L[i] = length of the LIS ending at array[i],
// prev[i] = index of the previous element of that LIS (-1 if none).
int maxLength = 1, bestEnd = 0;
L[0] = 1;
prev[0] = -1;
for (int i = 1; i < N; i++)
{
    L[i] = 1;
    prev[i] = -1;
    for (int j = i - 1; j >= 0; j--)
        if (L[j] + 1 > L[i] && array[j] < array[i])
        {
            L[i] = L[j] + 1;
            prev[i] = j;
        }
    if (L[i] > maxLength)
    {
        bestEnd = i;
        maxLength = L[i];
    }
}
// maxLength is the LIS length; follow prev[] from bestEnd to recover it.
Binary search
[figure: the search range is halved each step — n, n/2, n/4, ...]
Binary search
BestEnd updates (columns show successive steps):
BestEnd[2]:  5  3  3  3  3  3  3
BestEnd[3]:     7  7  7  7  7
BestEnd[4]:       11  8  8  8
BestEnd[5]:          10 10
BestEnd[6]:             13
For each position, we perform a binary search to update
BestEnd. Therefore, the running time is O(n log n).
Let S[pos] be defined as the smallest integer that ends an
increasing sequence of length pos.
Now iterate through every integer X of the input set and do
the following:
1. If X > last element in S, then append X to the end of S. This
essentially means we have found a new largest LIS.
2. Otherwise find the smallest element in S which is >= X,
and change it to X. Because S is sorted at any time, the
element can be found using binary search in O(log N).
Total runtime - N integers and a binary search for each of
them - N * log(N) = O(N log N)
Now let's do a real example:
Set of integers: 2 6 3 4 1 2 9 5 8
Steps:
0. S = {} - Initialize S to the empty set
1. S = {2} - New largest LIS
2. S = {2, 6} - New largest LIS
3. S = {2, 3} - Changed 6 to 3
4. S = {2, 3, 4} - New largest LIS
5. S = {1, 3, 4} - Changed 2 to 1
6. S = {1, 2, 4} - Changed 3 to 2
7. S = {1, 2, 4, 9} - New largest LIS
8. S = {1, 2, 4, 5} - Changed 9 to 5
9. S = {1, 2, 4, 5, 8} - New largest LIS
So the length of the LIS is 5 (the size of S).