
Chapter 8 PD-Method and Local Ratio



  1. Chapter 8 PD-Method and Local Ratio (4) Local ratio This ppt is edited from a ppt of Reuven Bar-Yehuda.

  2. Introduction • The local ratio technique is an approximation paradigm for obtaining approximate solutions to NP-hard optimization problems. • Its main attractions are its simplicity and elegance: it is very easy to understand and has surprisingly broad applicability.

  3. A Vertex Cover Problem: Network Testing • A network tester is built by placing probes onto the network vertices. • A probe can determine whether each link incident to it is working correctly. • The goal is to minimize the number of probes used to check all the links.

  4. A Vertex Cover Problem:Precedence Constrained Scheduling • Schedule a set of jobs on a single machine; • Jobs have precedence constraints between them; • The goal is to find a schedule which minimizes the weighted sum of completion times. This problem can be formulated as a vertex cover problem [Ambuehl-Mastrolilli’05]

  5. The Local Ratio Theorem (for minimization problems) Let w = w1 + w2. If x is an r-approximate solution with respect to both w1 and w2, then x is r-approximate with respect to w as well. Proof: w·x = w1·x + w2·x ≤ r·opt(w1) + r·opt(w2) ≤ r·opt(w), where the last inequality holds because an optimal solution for w costs at least opt(w1) + opt(w2). Note that the theorem holds even when negative weights are allowed.
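As a sanity check, the theorem can be verified by brute force. The tiny instance below is hypothetical (the slides' own edge set is not shown); the decomposition w = w1 + w2 mirrors the style of the next slide:

```python
from itertools import combinations

# Hypothetical 4-vertex path instance with a decomposition w = w1 + w2.
edges = [(0, 1), (1, 2), (2, 3)]
w  = [4, 6, 3, 5]
w1 = [2, 2, 0, 0]
w2 = [2, 4, 3, 5]

def is_cover(s):
    return all(u in s or v in s for u, v in edges)

def covers():
    for size in range(len(w) + 1):
        for c in combinations(range(len(w)), size):
            if is_cover(set(c)):
                yield set(c)

def cost(weights, s):
    return sum(weights[v] for v in s)

def opt(weights):
    return min(cost(weights, s) for s in covers())

r = 2
# w.x = w1.x + w2.x <= r*opt(w1) + r*opt(w2) <= r*opt(w),
# since an optimal solution for w costs at least opt(w1) + opt(w2).
for x in covers():
    if cost(w1, x) <= r * opt(w1) and cost(w2, x) <= r * opt(w2):
        assert cost(w, x) <= r * opt(w)
```

Enumerating every vertex cover confirms the claim on this instance; the inequality chain in the comment is exactly the theorem's proof.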

  6. Vertex Cover example [figure: a 7-vertex graph with vertex weights 41, 62, 13, 14, 35, 26, 17] Weight functions: W = [41, 62, 13, 14, 35, 26, 17]; W1 = [0, 0, 0, 14, 14, 0, 0]; W2 = [41, 62, 13, 0, 21, 26, 17]; W = W1 + W2

  7. Vertex Cover example (step 1) [figure: the decomposition W = W1 + W2 applied to one edge of the graph] Note: any feasible solution is a 2-approximate solution for weight function W1

  8. Vertex Cover example (step 2) [figure: the next decomposition step applied to the residual weights]

  9. Vertex Cover example (step 3) [figure: the next decomposition step applied to the residual weights]

  10. Vertex Cover example (step 4) [figure: the next decomposition step applied to the residual weights]

  11. Vertex Cover example (step 5) [figure: the final decomposition step applied to the residual weights]

  12. Vertex Cover example (step 6) [figure: the residual graph after all decomposition steps] • The optimal solution value of the residual VC instance is zero. • By repeated application of the Local Ratio Theorem we are guaranteed to be within 2 times the optimal solution value by picking the zero-weight nodes. • Opt = 120, Approx = 129

  13. 2-Approx VC (Bar-Yehuda & Even '81) Iterative implementation – edge by edge: For each edge {u,v} do: Let ε = min{w(u), w(v)}; w(u) ← w(u) − ε; w(v) ← w(v) − ε. Return {v | w(v) = 0}.
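The edge-by-edge sweep above can be sketched directly in Python (the example graph is hypothetical, since the slides' edge set is not shown):

```python
def vc_2approx(edges, w):
    """Iterative local-ratio 2-approximation for weighted vertex cover
    (Bar-Yehuda & Even '81): sweep the edges once, subtract
    eps = min endpoint weight from both endpoints, and return the
    vertices whose weight dropped to zero."""
    w = dict(w)  # work on a copy of the weights
    for u, v in edges:
        eps = min(w[u], w[v])
        w[u] -= eps
        w[v] -= eps
    # After processing an edge, at least one endpoint has weight zero
    # and stays zero, so the returned set covers every edge.
    return {v for v, wv in w.items() if wv == 0}

# Hypothetical example instance:
edges = [(0, 1), (1, 2), (2, 3)]
cover = vc_2approx(edges, {0: 4, 1: 6, 2: 3, 3: 5})
print(sorted(cover))  # [0, 1, 2]
```

The cover returned costs 13 against an optimum of 7 here, comfortably within the factor-2 guarantee.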

  14. Recursive implementation The Local Ratio Theorem leads naturally to the formulation of recursive algorithms with the following general structure: • If a zero-cost solution can be found, return one. • Otherwise, find a suitable decomposition of w into two weight functions w1 and w2 = w − w1, and solve the problem recursively, using w2 as the weight function in the recursive call.

  15. 2-Approx VC (Bar-Yehuda & Even '81) Recursive implementation – edge by edge • VC(V, E, w) • If E = ∅ return ∅; • If there is a vertex v with w(v) = 0, return {v} ∪ VC(V − {v}, E − E(v), w); • Let (x,y) ∈ E; • Let ε = min{w(x), w(y)}; • Define w1(v) = ε if v = x or v = y, and 0 otherwise; • Return VC(V, E, w − w1)
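The recursive pseudocode translates almost line for line into Python (again on a hypothetical example graph):

```python
def vc_recursive(V, E, w):
    """Recursive local-ratio 2-approximation for weighted vertex cover,
    following the slide's pseudocode."""
    if not E:
        return set()
    # Problem-size reduction: a zero-weight vertex is taken for free.
    for v in V:
        if w[v] == 0:
            rest = [e for e in E if v not in e]  # drop edges E(v)
            return {v} | vc_recursive(V - {v}, rest, w)
    # Weight decomposition on an arbitrary edge (x, y): recurse on w - w1.
    x, y = E[0]
    eps = min(w[x], w[y])
    w2 = dict(w)
    w2[x] -= eps
    w2[y] -= eps
    return vc_recursive(V, E, w2)

cover = vc_recursive({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3)],
                     {0: 4, 1: 6, 2: 3, 3: 5})
print(sorted(cover))  # [0, 1, 2]
```

It returns the same cover as the iterative version, which is no accident: the iterative sweep is just this recursion with the tail calls unrolled.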

  16. Algorithm Analysis We prove that the solution returned by the algorithm is 2-approximate by induction on the recursion, using the Local Ratio Theorem. • In the base case, the algorithm returns a vertex cover of zero cost, which is optimal. • For the inductive step, consider the solution returned by the recursive call. By the inductive hypothesis it is 2-approximate with respect to w2. We claim that it is also 2-approximate with respect to w1; in fact, every feasible solution is: w1 charges ε to each endpoint of the chosen edge, so no solution pays more than 2ε, while every feasible solution must cover that edge and hence pays at least ε.

  17. Generality of the analysis • The proof that a given algorithm is an r-approximation algorithm is by induction on the recursion. • In the base case, the solution is optimal (and, therefore, r-approximate) because it has zero cost, and in the inductive step, the solution returned by the recursive call is r-approximate with respect to w2 by the inductive hypothesis. • Thus, different algorithms differ from one another only in the choice of w1, and in the proof that every feasible solution is r-approximate with respect to w1.

  18. The key ingredient Different algorithms (for different problems) differ from one another only in the decomposition of W, and this decomposition is determined completely by the choice of W1, since W2 = W − W1.

  19. The creative part… find r-effective weights w1 is fully r-effective if there exists a number b such that b ≤ w1 · x ≤ r · b for all feasible solutions x

  20. Framework The analysis of algorithms in our framework boils down to proving that w1 is r-effective. Proving this amounts to proving that: • b is a lower bound on the optimum value, • r ·b is an upper bound on the cost of every feasible solution …and thus every feasible solution is r-approximate (all with respect to w1).

  21. A different W1 for VC, star by star (Clarkson '83) Let d(x) be the degree of vertex x, and let ε = min over x ∈ V of w(x)/d(x). Pick the star of a vertex x attaining the minimum and set w1(x) = ε·d(x), w1(v) = ε for every neighbor v of x. [figure: on the example graph the chosen center has weight 16 and degree 4, so ε = 16/4 = 4]

  22. A different W1 for VC, star by star • b = 4·ε (= ε·d(x) for the chosen center x) is a lower bound on the optimum value: every cover either contains the center, paying ε·d(x), or contains all d(x) leaves, paying ε each. • 2·b is an upper bound on the cost of every feasible solution, since the total w1 weight of the star is 2·ε·d(x). • Hence W1 is 2-effective.

  23. Another W1 for VC, homogeneous (= proportional to the potential coverage) Let ε = min over x ∈ V of w(x)/d(x), and set w1(x) = ε·d(x) for every x. • b = |E|·ε is a lower bound on the optimum value • 2·b is an upper bound on the cost of every feasible solution • Hence W1 is 2-effective
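Both bounds can be checked by brute force on a small hypothetical graph: the vertices of any cover have total degree at least |E| (each edge is counted by at least one chosen endpoint), which gives the lower bound, and the total w1 weight is ε times the degree sum, i.e. 2·ε·|E|, which gives the upper bound:

```python
from itertools import combinations

# Hypothetical instance (not the slides' graph).
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
w = {0: 4, 1: 6, 2: 3, 3: 5}
deg = {v: sum(v in e for e in edges) for v in w}

eps = min(w[v] / deg[v] for v in w)   # minimal weight per covered edge
w1 = {v: eps * deg[v] for v in w}     # proportional to potential coverage

b = len(edges) * eps  # lower bound: a cover's degrees sum to >= |E|
ub = 2 * b            # upper bound: total w1 = eps * (degree sum) = 2*eps*|E|

def is_cover(s):
    return all(u in s or v in s for u, v in edges)

# Every feasible solution's w1-cost lands in [b, 2b]: 2-effectiveness.
for size in range(len(w) + 1):
    for c in combinations(w, size):
        s = set(c)
        if is_cover(s):
            assert b <= sum(w1[v] for v in s) <= ub
```

On this instance ε = 1 (attained by the degree-3 vertex of weight 3), so every cover pays between 4 and 8 under w1.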

  24. Partial Vertex Cover Input: a VC instance and a fixed number k Goal: identify a minimum-cost subset of vertices that hits at least k edges [figure: the example graph with weights 41, 62, 13, 14, 25, 26, 17] • Examples: • if k = 1 then OPT = 13 • if k = 3 then OPT = 14 • if k = 5 then OPT = 25 • if k = 6 then OPT = 14+13

  25. Partial Vertex Cover Weight functions: w = [41, 62, 13, 14, 25, 26, 17]; w1 = [0, 0, 0, 14, 14, 0, 0]; w2 = [41, 62, 13, 0, 11, 26, 17]; w = w1 + w2. Assume k < |E| (the number of edges). Note: a feasible solution is NOT necessarily a 2-approximate solution for weight function w1. In VC every edge must be hit by a vertex; in partial VC, hitting k edges is sufficient. So the optimum for w1 is 0 (for k ≤ 5), whereas the solution that takes, for example, vertex 4 pays 14 under w1 and is thus infinitely many times larger than the optimum.

  26. Positive Weight Function • We do not know of any single subset that must contribute to all solutions. • To prevent OPT from being equal to 0, we can assign a positive weight to every element.

  27. Positive Weight Function Weight functions: w = [41, 62, 13, 14, 25, 26, 17]; w1 = [0, 0, 0, 14, 14, 0, 0]; w2 = [41, 62, 13, 0, 11, 26, 17]; w = w1 + w2. Observe that 14 is NOT a lower bound on the optimal value! For example, for k = 1 the optimal value is 13.

  28. Positive Weight Function Let d(x) be the degree of vertex x. What is the amortized cost of hitting one edge by using x? And what is the minimal amortized cost of hitting any edge? [figure: a vertex x and its incident edges]

  29. Positive Weight Function W1 w1(x) = ε · min{d(x), k}, where ε is the minimal amortized cost of hitting an edge. For k = 3, ε = 14/3. Weight functions (k = 3): w = [41, 62, 13, 14, 25, 26, 17]; w1 = [14, 14, 28/3, 14, 14, 14, 14]; w2 = [27, 48, 11/3, 0, 11, 12, 3]; w = w1 + w2
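The slide's decomposition can be reproduced exactly with rational arithmetic, taking ε = min over x of w(x)/min{d(x), k} (the minimal amortized cost from the previous slide). The degrees below are an assumption: they are inferred from the w1 values shown, namely d(x) = 3 everywhere except the third vertex, which has d = 2:

```python
from fractions import Fraction

k = 3
w = [41, 62, 13, 14, 25, 26, 17]
d = [3, 3, 2, 3, 3, 3, 3]  # assumed degrees, inferred from the slide's w1

# Minimal amortized cost of hitting an edge (slide 28's question):
eps = min(Fraction(w[i], min(d[i], k)) for i in range(len(w)))
assert eps == Fraction(14, 3)  # attained by the vertex of weight 14

w1 = [eps * min(d[i], k) for i in range(len(w))]
w2 = [w[i] - w1[i] for i in range(len(w))]

# Matches the slide: w1 = [14, 14, 28/3, 14, 14, 14, 14],
#                    w2 = [27, 48, 11/3,  0, 11, 12,  3].
assert w1 == [14, 14, Fraction(28, 3), 14, 14, 14, 14]
assert w2 == [27, 48, Fraction(11, 3), 0, 11, 12, 3]
```

The fact that both assertions hold supports the inferred degree vector, but since the edge set is not shown, it remains a guess.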

  30. Function W1 • [Lower Bound] Every feasible solution costs at least ε·k = 14 under w1 • [Upper Bound] There are feasible solutions whose value can be arbitrarily larger than ε·k (e.g., take all the vertices) • But if you take all the vertices, then not all of them are strictly necessary!! • We can focus on Minimal Solutions!!!

  31. Minimal Solutions • By minimal solution we mean a feasible solution that is minimal with respect to set inclusion, that is, a feasible solution whose proper subsets are all infeasible. • Minimal solutions are meaningful mainly in the context of covering problems (covering problems are problems for which feasible solutions are monotone inclusion-wise, that is, if a set X is a feasible solution, then so is every superset of X; MST is not a covering problem).

  32. Minimal Solutions: r-effective weights w1 is r-effective if there exists a number b such that b ≤ w1 · x ≤ r · b for all minimal feasible solutions x

  33. The creative part…again find r-effective weights • If we can show that our algorithm uses an r-effective w1 and returns minimal solutions, we will have essentially proved that it is an r-approximation algorithm. • Designing an algorithm to return minimal solutions is quite easy. • Most of the creative effort is therefore expended in finding an r-effective weight function (for a small r).

  34. 2-effective weight function • In terms of w1, every feasible solution costs at least ε·k • In terms of w1, every minimal feasible solution costs at most 2·ε·k • Minimal solution = no proper subset is a feasible solution
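Both bounds can be checked exhaustively on a small hypothetical partial-VC instance: under w1(x) = ε·min{d(x), k}, every feasible solution pays at least ε·k, and every minimal feasible solution pays at most 2·ε·k:

```python
from itertools import combinations
from fractions import Fraction

# Hypothetical instance (not the slides' graph).
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
w = [4, 6, 3, 5]
k = 3
deg = [sum(v in e for e in edges) for v in range(len(w))]

eps = min(Fraction(w[v], min(deg[v], k)) for v in range(len(w)))
w1 = [eps * min(deg[v], k) for v in range(len(w))]

def hits(s):
    return sum(1 for e in edges if s & set(e))

def feasible(s):
    return hits(s) >= k  # hits at least k edges

def minimal(s):
    return feasible(s) and all(not feasible(s - {v}) for v in s)

# 2-effectiveness of w1 over minimal solutions:
for size in range(len(w) + 1):
    for c in combinations(range(len(w)), size):
        s = set(c)
        if feasible(s):
            assert sum(w1[v] for v in s) >= eps * k
        if minimal(s):
            assert sum(w1[v] for v in s) <= 2 * eps * k
```

Note the upper bound is asserted only for minimal solutions; the full vertex set is feasible but can cost far more than 2·ε·k, which is exactly why the framework restricts attention to minimal solutions.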

  35. Proof of 2. (= costs at most 2·ε·k) [figure: proof illustration]

  36. Proof of 2. (cont.) [figure: proof illustration showing a vertex x with d1(x) = 2 and d2(x) = 3]

  37. The approximation algorithm [figure: pseudocode of the algorithm] Algorithm from Bar-Yehuda et al., "Local Ratio: A Unified Framework for Approximation Algorithms", ACM Computing Surveys, 2004.

  38. Algorithm Framework • If a zero-cost minimal solution can be found, return it: it is optimal. • Otherwise, if the problem contains a zero-cost element, perform a problem-size reduction. • Otherwise, perform a weight decomposition.

  39. Thanks, end.
