CA Unit 1
Unit-6
Approximation Algorithms
• 0/1 Knapsack - O(2^n)
log n   n   n log n   n^2   n^3   2^n
  0     1      0       1     1     2
  1     2      2       4     8     4
  2     4      8      16    64    16
  3     8     24      64   512   256
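The entries in the table above can be regenerated with a short script; this is only an illustrative sketch.

```python
import math

# Print log n, n, n log n, n^2, n^3, and 2^n for n = 1, 2, 4, 8,
# reproducing the growth-rate table above.
print(f"{'log n':>6} {'n':>4} {'n log n':>8} {'n^2':>6} {'n^3':>6} {'2^n':>6}")
for n in (1, 2, 4, 8):
    lg = int(math.log2(n))
    print(f"{lg:>6} {n:>4} {n * lg:>8} {n**2:>6} {n**3:>6} {2**n:>6}")
```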
[Figure: plot comparing the growth of log n and n log n as n increases]
INTRODUCTION
• NP-hard optimization problems have great practical
importance,
• so it is desirable to solve large instances of these
problems in a reasonable amount of time.
• The best-known algorithms for NP-hard problems
have a worst-case complexity that is exponential in the
number of inputs.
• Still, there is plenty of room for improvement in an
exponential algorithm.
• Algorithms with sub-exponential complexity run in
time such as 2^(n/c) (for c > 1), 2^√n, or 2^(log n).
• What is really needed, however, is an algorithm of low
polynomial complexity: O(n), O(n^2), or O(n^3).
• If we want to produce an algorithm of low polynomial
complexity to solve an NP-hard optimization problem,
• then it is necessary to relax the meaning of “solve”.
• We discuss two relaxations of the meaning of “solve”.
• In the first, we remove the requirement that the
algorithm that solves the optimization problem P must
always generate an optimal solution.
• This requirement is replaced by the requirement that
the algorithm for P must always generate a feasible
solution with value close to the value of an optimal
solution.
• A feasible solution with value close to the value of an
optimal solution is called an approximate solution.
• An approximation algorithm for P is an algorithm that
generates approximate solutions for P.
• In the second relaxation, we look for an algorithm for P
that generates an optimal solution with high
probability.
• Algorithms with this property are called
probabilistically good algorithms.
Terminology:

f(n)-Approximation
• Let A be an algorithm that generates a feasible
solution to every instance I of a problem P.
• Let F*(I) be the value of an optimal solution to I.
• Let F^(I) be the value of the feasible solution
generated by A.
• A is an f(n)-approximation algorithm for problem P
if and only if for every instance I of size n,
|F*(I) - F^(I)| / F*(I) ≤ f(n), for F*(I) > 0.
• ε - Approximation
• An ε-approximation algorithm is an f(n)
approximation algorithm, for which f(n) ≤ ε, for
some constant ε.
ε-Approximation
• Let A be an algorithm that generates a feasible
solution to every instance I of a problem P.
• Let F*(I) be the value of an optimal solution to I.
• Let F^(I) be the value of the feasible solution
generated by A.
• A is an ε-approximation algorithm for problem P
if and only if for every instance I,
|F*(I) - F^(I)| / F*(I) ≤ ε, for F*(I) > 0 and ε > 0.
• Polynomial Time Approximation Scheme
• An approximation scheme is a polynomial time
approximation scheme
• if and only if for every fixed ε > 0,
it has a computing time that is polynomial in terms
of the problem size n
• Fully Polynomial Time Approximation
Scheme
• An approximation scheme is a fully polynomial
time approximation scheme
• If it has a computing time that is polynomial in
terms of the problem size n and 1/ε
Absolute Approximation
• Let A be an algorithm that generates a feasible solution to every instance
I of a problem P. Let F*(I) be the value of an optimal solution to I, and
let F^(I) be the value of the feasible solution generated by A.
• A is an absolute approximation algorithm for problem P if and only if
for every instance I of P, |F*(I) - F^(I)| ≤ k, for some constant k.
• Consider a Knapsack Problem
• n = 3, m = 100
• {p1, p2, p3} = {20, 10, 19}
• {w1, w2, w3} = {65, 20, 35}
• F*(I) = 39 (objects 1 and 3)
• F^(I) = 30 (objects 1 and 2)
• |F*(I) - F^(I)| = 9
• |F*(I) - F^(I)| / F^(I) = 9/30 = 0.3
• Consider an instance of 0/1 Knapsack
• n = 2, m = 4
• {p1, p2} = {100, 20}
• {w1, w2} = {4, 1}
• F*(I) = 100
• F^(I) = 20
• |F*(I) - F^(I)| = 80
• |F*(I) - F^(I)| / F*(I) = 0.8
• The error is too high.
• It is not an acceptable approximate solution.
• Consider an instance of 0/1 Knapsack
• n = 2, m = r
• {p1, p2} = {2, r}
• {w1, w2} = {1, r}
• |F*(I) - F^(I)| = r - 2
• r - 2 is not a constant.
• So it is not an absolute approximation algorithm.
• |F*(I) - F^(I)| / F*(I) = (r - 2)/r = 1 - 2/r
• This approaches 1 as r becomes very large.
• So |F*(I) - F^(I)| / F*(I) < 1.
• It is a 1-approximation algorithm (the value of ε is 1).
• However, it is not an ε-approximation algorithm for any fixed ε < 1.
• Absolute Approximation
• Let A be an algorithm that generates a feasible
solution to every instance I of a problem P.
• Let F*(I) be the value of an optimal solution to I
• Let F^(I) be the value of the feasible solution
generated by A
• A is an absolute approximation algorithm for
problem P, if and only if for every instance I of P,
|F*(I) – F^(I)| ≤ k , for some constant k.
Examples of Absolute Approximation Schemes
• Planar Graph Coloring
• There are very few NP-hard optimization problems for
which polynomial time absolute approximation
algorithms are known.
• One problem is that of determining the minimum
number of colors needed to color a planar graph
• G = (V,E).
• It is known that every planar graph is four colorable.
• One can easily determine whether a graph is zero, one,
or two colorable.
[Figure: a sample planar graph on five vertices]
Algorithm AColor(V, E)
// Determine an approximation to the minimum number of colors.
{
    if V = ∅ then return 0;
    else if E = ∅ then return 1;
    else if G is bipartite then return 2;
    else return 4;
}
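As a sketch, the pseudocode above might be rendered in Python as follows; the `is_bipartite` helper (BFS 2-coloring) is an assumed implementation of the bipartite test, not part of the original pseudocode.

```python
from collections import deque

def is_bipartite(vertices, adj):
    """Check 2-colorability by BFS 2-coloring (assumed helper)."""
    color = {}
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def a_color(vertices, edges):
    """Approximate the minimum number of colors for a planar graph.
    Off by at most 2 from the optimum (an absolute approximation)."""
    if not vertices:
        return 0
    if not edges:
        return 1
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    if is_bipartite(vertices, adj):
        return 2
    return 4  # every planar graph is four colorable
```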
Maximum Programs Stored Problem
• Assume that we have n programs and two storage
devices (say disks or tapes).
• We assume the devices are disks.
• Let li be the amount of storage needed to store the ith
program.
• Let L be the storage capacity of each disk.
• Determining the maximum number of these n
programs that can be stored on the two disks
• (without splitting a program over the disks) is NP-hard.
• Let, L = 10, n = 4 and {l1, l2, l3, l4} = {2, 4, 5, 6}
• Optimal Solution is – Store programs 1 and 4 on
disk 1 and Store programs 2 and 3 on disk 2
• F*(I) = 4 ………… All 4 programs stored.
• Approximate Solution is store programs 1 and 2 on
disk 1 and program 3 on disk 2.
• F^(I) = 3 ……….. Program 4 is not stored
• |F*(I) – F^(I)| ≤ 1
Algorithm PStore(l, n, L)
// Assume that l[i] ≤ l[i+1], 1 ≤ i < n.
{
    i := 1;
    for j := 1 to 2 do
    {
        sum := 0; // Amount of disk j already assigned
        while (sum + l[i]) ≤ L do
        {
            write ("Store program", i, "on disk", j);
            sum := sum + l[i]; i := i + 1;
            if i > n then return;
        }
    }
}
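A Python rendering of PStore (a sketch; it returns the 1-based program indices per disk rather than printing them):

```python
def p_store(l, n, L):
    """Greedy storage of programs on two disks of capacity L.
    Assumes l is sorted: l[0] <= l[1] <= ... <= l[n-1].
    Returns the (1-based) program indices stored on each disk."""
    stored = [[], []]
    i = 0
    for j in range(2):
        total = 0            # amount of disk j already assigned
        while i < n and total + l[i] <= L:
            stored[j].append(i + 1)
            total += l[i]
            i += 1
    return stored

# The instance from the slide: L = 10, sizes {2, 4, 5, 6}.
# Disk 1 gets programs 1 and 2, disk 2 gets program 3;
# program 4 is left out, so F^(I) = 3 while F*(I) = 4.
```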
Examples of ε – Approximation Schemes
• Scheduling Independent Tasks
• Obtaining minimum finish time schedules on m
identical processors, m ≥ 2, is NP-hard.
• There exists a very simple scheduling rule that
generates schedules with a finish time very close to
that of an optimal schedule.
• An instance I of the scheduling problem is defined by
a set of n tasks ti, 1 ≤ i ≤ n,
• and m, the number of processors.
• The scheduling rule is known as the LPT (longest
processing time) rule.
• An LPT schedule is a schedule that results from this
rule.
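A minimal sketch of the LPT rule, keeping processor loads in a heap (the function name `lpt_schedule` is my own, not from the slides):

```python
import heapq

def lpt_schedule(times, m):
    """Assign tasks in decreasing processing time to the currently
    least-loaded of m processors; return the finish time (makespan)."""
    loads = [0] * m
    heapq.heapify(loads)
    for t in sorted(times, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)
```

On the instances of Examples 12.6 and 12.7 below, this returns the LPT finish time 11.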
Example 12.6 Let m = 3, n = 6, and (t1, t2, t3, t4, t5, t6) = (8, 7, 6, 5, 4, 3).
The LPT schedule assigns tasks 1 and 6 to P1, tasks 2 and 5 to P2, and tasks
3 and 4 to P3. Its finish time is 11.
[Figure: Gantt chart of this LPT schedule]
Example 12.7 Let m = 3, n = 7, and (t1, ..., t7) = (5, 5, 4, 4, 3, 3, 3).
Figure 12.3(a) shows the LPT schedule. This has a finish time of 11.
Figure 12.3(b) shows an optimal schedule. Its finish time is 9. Hence,
for this instance |F*(I) - F^(I)|/F*(I) = (11 - 9)/9 = 2/9.
[Figure 12.3: the LPT schedule and an optimal schedule]
Example 12.9 Let L = 10, n = 6, and (l1, ..., l6) = (5, 6, 3, 7, 5, 4).
Figure 12.5 shows a packing of the six objects using only three bins.
Numbers in bins are object indices. Obviously, at least three bins are needed.
[Figure 12.5: a packing of the six objects into three bins]
Polynomial Time
Approximation Schemes
Example - Polynomial Time Approximation
Schemes
• Scheduling Independent Tasks
• A polynomial time approximation scheme can be used
for scheduling independent tasks.
• Let k be some specified fixed integer.
• Obtain an optimal schedule for the k longest tasks.
• Schedule the remaining n-k tasks using the LPT rule.
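The two steps above can be sketched as follows (brute-force enumeration over the m^k assignments of the k longest tasks; the function name is my own):

```python
from itertools import product

def ptas_schedule(times, m, k):
    """Schedule the k longest tasks optimally (exhaustive search),
    then place the remaining tasks with the LPT rule."""
    tasks = sorted(times, reverse=True)
    longest, rest = tasks[:k], tasks[k:]
    # Brute-force optimal assignment of the k longest tasks.
    best = None
    for assign in product(range(m), repeat=len(longest)):
        loads = [0] * m
        for t, proc in zip(longest, assign):
            loads[proc] += t
        if best is None or max(loads) < max(best):
            best = loads
    # LPT for the leftover tasks (already in decreasing order).
    for t in rest:
        j = best.index(min(best))
        best[j] += t
    return max(best)
```

On the instance of Example 12.12 below, this returns 15; with k = n the exhaustive step alone finds the optimal finish time 14.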
Example 12.12 Let m = 2, n = 6, (t1, ..., t6) = (8, 6, 5, 4, 4, 1),
and k = 4. The four longest tasks have task times 8, 6, 5, and 4 respectively.
An optimal schedule for these has finish time 12 (Figure 12.8(a)). When
the remaining two tasks are scheduled using the LPT rule, the schedule of
Figure 12.8(b) results. This has finish time 15. Figure 12.8(c) shows an
optimal schedule. This has finish time 14.
[Figure 12.8: the three schedules]
0/1 Knapsack
Example 12.13 Consider the knapsack problem instance with n = 8
objects, size of knapsack m = 110, p = {11, 21, 31, 33, 43, 53, 55, 65},
and w = {1, 11, 21, 23, 33, 43, 45, 55}.
The optimal solution is obtained by putting objects 1, 2, 3, 5, and 6 into
the knapsack. This results in an optimal profit p* of 159 and a weight of
109.
Order: I1 I2 I3 I4 I5 I6 I7 I8
Add items: I1 I2 I3 I4 I5
Profit = 139, Weight = 89
This works in polynomial time.
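Both solution values in Example 12.13 can be checked by brute force (a sketch, feasible only because n = 8):

```python
from itertools import combinations

p = [11, 21, 31, 33, 43, 53, 55, 65]
w = [1, 11, 21, 23, 33, 43, 45, 55]
m = 110
n = len(p)

# Exhaustive optimum over all 2^8 subsets.
opt = max(sum(p[i] for i in s)
          for r in range(n + 1)
          for s in combinations(range(n), r)
          if sum(w[i] for i in s) <= m)

# Greedy pass: add objects in the given order while they fit.
profit = weight = 0
for pi, wi in zip(p, w):
    if weight + wi <= m:
        profit += pi
        weight += wi
```

Here `opt` is 159, while the greedy pass stops at profit 139 and weight 89, matching the slide.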
Fully Polynomial Time
Approximation Schemes
Fully Polynomial Time Approximation
Scheme
• The approximation algorithms and schemes we have
seen so far are particular to the problem considered.
• There is no set of well-defined techniques that we can
use to obtain such algorithms.
• The heuristics used depend very much on the
particular problem being solved.
• The approach described next is based on the dynamic
programming solution methodology.
• A family of algorithms that can achieve any
approximation ε > 0 in time polynomial in both 1/ε
and n is called a fully polynomial time
approximation scheme or FPTAS.
• Fully polynomial time approximation schemes do
not appear to exist for many problems.
Rounding
• The aim of rounding is to start from a problem
instance I and to transform it to another problem
instance I’ that is easier to solve.
• This transformation is carried out in such a way
that the optimal solution value of I’ is close to the
optimal solution value of I.
• The value of ε, which represents the bound on the
fractional difference between the exact and
approximate solution values, is provided.
|F*(I) - F^(I')| / F*(I) ≤ ε
where I' is the transformed instance and ε is the given bound.
• Consider an instance of 0/1 Knapsack
• {p1, p2, p3, p4} = {1.1, 2.1, 1001.6, 1002.3}
• Solving it consumes too much time.
• The dynamic programming state sets are:
S(0) = {0}
S(1) = {0, 1.1}
S(2) = {0, 1.1, 2.1, 3.2}
S(3) = {0, 1.1, 2.1, 3.2, 1001.6, 1002.7, 1003.7, 1004.8}
S(4) = {0, 1.1, 2.1, 3.2, 1001.6, 1002.3, 1002.7, 1003.4, 1003.7,
        1004.4, 1004.8, 1005.5, 2003.9, 2005.0, 2006.0, 2007.1}
• F*(I) = 2007.1
• Round the values and transform to I':
• {p'1, p'2, p'3, p'4} = {0, 0, 1000, 1000}
S'(0) = {0}
S'(1) = {0}
S'(2) = {0}
S'(3) = {0, 1000}
S'(4) = {0, 1000, 2000}
• F^(I') = 2000
• |F*(I) - F^(I')| / F*(I) = (2007.1 - 2000)/2007.1 ≈ 0.0035 < 0.007
• I’ can be solved in 1/4th time compared to I.
• Inaccuracy is less than 0.7%
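The state-set blow-up can be illustrated with a small sketch (weights and capacity are omitted, as in the example above; the function name is mine):

```python
def reachable_profits(profits):
    """All subset profit sums -- the dynamic-programming state sets
    S(i) after processing every object."""
    states = {0}
    for p in profits:
        states = states | {s + p for s in states}
    return states

original = [1.1, 2.1, 1001.6, 1002.3]
rounded = [0, 0, 1000, 1000]   # profits rounded down to multiples of 1000

# The original instance I needs 16 states; the rounded I' needs only 3.
s_i = reachable_profits(original)
s_r = reachable_profits(rounded)
```

Solving I' and translating the chosen objects back to I keeps the relative error within the chosen ε.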
Probabilistically
Good Algorithms
• The requirement of bounded performance tends to
categorize other algorithms, which usually work well,
as bad.
• Some algorithms with unbounded performance may in
fact almost always solve the problem exactly,
• or generate a solution that is exceedingly close in
value to the value of an optimal solution.
• Such algorithms are good in a probabilistic sense.
• If we pick a problem instance I at random,
• then there is a very high probability that the algorithm
will generate a very good approximate solution.
Finding Hamiltonian Cycle in Undirected
Graph
• Let G = (V, E) be a connected graph with n
vertices.
• A Hamiltonian cycle (suggested by Sir William
Hamilton) is a
• round-trip path along n edges of G
• that visits every vertex once and returns to its
starting position.
• If a Hamiltonian cycle begins at some vertex
• and the vertices of G are visited in the order
v1, v2, ..., vn+1,
• then the edges (vi, vi+1) are in E, 1 ≤ i ≤ n,
• and the vi are distinct except for v1 and vn+1,
which are the same.
[Figure: two sample graphs, G1 and G2]
• Case 3: j is already on path P. There is a unique edge
e = (j, m) in P such that
• the deletion of e from P and the inclusion of (k, j) in P
result in a simple path.
• Then e is deleted and (k, j) is added to P.
• P is now a simple path with endpoint m.
Analysis
• The algorithm is constrained so that case 3 does not
generate two paths of the same length having the same
end point.
• With a proper choice of data representations, this
algorithm can be implemented to run in time O(n^2),
• where n is the number of vertices in graph G.
• This algorithm does not always find a Hamiltonian
cycle in a graph that contains such a cycle.
• It succeeds only with a certain (high) probability.
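A sketch of such a randomized path-growth algorithm (a simplified Pósa-style rotation heuristic; the names and the step/retry limits are my own choices, not from the slides):

```python
import random

def random_hamiltonian_path(adj, n, tries=200):
    """Try to grow a Hamiltonian path, applying the rotation of
    case 3 above when the chosen neighbour is already on the path.
    adj: dict mapping each vertex 0..n-1 to a set of neighbours.
    Returns a Hamiltonian path, or None -- the algorithm may fail
    even when a Hamiltonian cycle exists."""
    for _ in range(tries):
        path = [random.randrange(n)]
        on_path = {path[0]}
        for _ in range(20 * n):          # step limit per attempt
            if len(path) == n:
                return path
            k = path[-1]
            if not adj[k]:
                break
            j = random.choice(list(adj[k]))
            if j not in on_path:
                path.append(j)           # extend the path by edge (k, j)
                on_path.add(j)
            else:
                # rotation: delete edge (j, m), add (k, j); the path
                # p1..j m..k becomes p1..j k..m with new endpoint m
                i = path.index(j)
                path[i + 1:] = reversed(path[i + 1:])
        if len(path) == n:
            return path
    return None
```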
Email: suhas.bhagate@gmail.com