
Approximation via Doubling (Part II)



Presentation Transcript


  1. Approximation via Doubling (Part II). Marek Chrobak, University of California, Riverside. Joint work with Claire Kenyon-Mathieu.

  2. Doubling method (for a minimization problem): Choose d1 < d2 < d3 < … (typically powers of 2). For j = 1, 2, 3, …: assume that the optimum is ≤ dj, and use this bound to construct a solution of cost ≤ C·dj. • Simple and effective (works for many problems, offline and online) • Typically does not give the best possible ratios

  3. Online Bidding - Reminder. Item for sale of value u (unknown to the bidder). The buyer bids d1, d2, d3, … until some dj ≥ u. Cost: d1 + d2 + … + dj. Optimum = u. Competitive ratio = (d1 + d2 + … + dj) / u.

  4. Deterministic Bidding - Upper Bound. Doubling strategy: bid 1, 2, 4, …, 2^i, … If 2^(j-1) < u ≤ 2^j, the ratio is (1 + 2 + … + 2^j) / u < 2^(j+1) / 2^(j-1) = 4.
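The doubling strategy above can be checked in a few lines of Python (a minimal sketch; the function name and sample values of u are mine):

```python
def doubling_bid(u):
    """Bid 1, 2, 4, ... until a bid reaches the unknown value u.
    Returns the total cost paid (the sum of all bids made)."""
    cost, bid = 0, 1
    while True:
        cost += bid
        if bid >= u:
            return cost
        bid *= 2

# The competitive ratio cost/u stays below 4 for every u >= 1;
# it approaches 4 when u is just above a power of 2.
worst = max(doubling_bid(u) / u for u in [1, 1.01, 3, 2**10 + 1, 10**6])
assert worst < 4
```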

  5. Online Bidding. • Theorem: The optimal competitive ratio for online bidding is: • 4 in the deterministic case • e ≈ 2.72 in the randomized case. • Randomized e-ing strategy: choose uniformly random x ∈ [0,1), and bid e^x, e^(x+1), e^(x+2), e^(x+3), … • [folklore] [Chrobak, Kenyon, Noga, Young '06]

  6. Cow-Path Problem -- Reminder. For dj-1 < u ≤ dj+1 (j odd): the excursions contribute 2 × (bidding ratio) and the final walk to u contributes an extra ratio of 1. So the ratio = 2 × (bidding ratio) + 1 = 9 for dj = 2^j. [figure: cow at 0, target u on a line, with turn points d1, d2, d3, …, dj-1, dj, dj+1 alternating sides]
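The cow-path doubling search can be simulated directly; the sketch below (my own naming and sample distances) alternates excursions of length 2^i and confirms that the total distance walked never exceeds 9 times the distance u to the target:

```python
def cowpath_cost(u, side):
    """Walk 1, 2, 4, ... alternating right (+1) and left (-1) until the
    target, at distance u on the given side, is found.
    Each failed excursion costs twice its length (out and back)."""
    total, i = 0.0, 0
    while True:
        d = 2 ** i
        direction = 1 if i % 2 == 0 else -1
        if direction == side and d >= u:
            return total + u      # walk straight to the target
        total += 2 * d            # out to distance d and back to the origin
        i += 1

# The ratio approaches 9 when u is just past a turn point on the wrong side.
ratios = [cowpath_cost(u, s) / u
          for u in [1.0, 1.5, 2.01, 100.0, 2**10 + 1]
          for s in (+1, -1)]
assert max(ratios) <= 9
```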

  7. Theorem: The optimal competitive ratio for the cow-path problem is • 9 in the deterministic case • ≈ 4.59 in the randomized case (the solution of (r-1)·ln(r-1) = r). The connection to online bidding does not work in the randomized case -- it would only give 2e+1 ≈ 6.44 -- why? [Gal '80] [Baeza-Yates, Culberson, Rawlins '93] [Papadimitriou, Yannakakis '91] [Kao, Reif, Tate '94] …

  8. Outline: Online bidding, Cow-path, Incremental medians (size approximation), Incremental medians (cost approximation), List scheduling on related machines, Minimum latency tours, Incremental clustering

  9. List Scheduling. Given a list of jobs (each with a specified processing time), assign them to processors to minimize the makespan (maximum load). Online algorithm: the assignment of a job must not depend on future jobs. Goal: small competitive ratio. [figure: seven jobs assigned to processors along a time axis]

  10. Greedy: Assign each job to the machine with the lightest load. [figure: greedy schedule of the seven jobs on three processors, with the resulting makespan marked]

  11. [figure: a better schedule of the same jobs on three processors, with a smaller makespan]

  12. Analysis of Greedy: let x = min load before placing the last job, and y = length of the last job. • total load ≥ m·x, so optimum makespan ≥ x • optimum makespan ≥ y • so greedy's makespan = x + y ≤ 2·(optimum makespan). [figure: m machines, last job of length y placed on a machine with load x]
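The greedy rule and the 2·OPT bound from this analysis can be sketched as follows (the job lengths are made up for illustration):

```python
def greedy_schedule(jobs, m):
    """Assign each job, in list order, to the currently least-loaded
    of m identical machines; return the resulting machine loads."""
    loads = [0] * m
    for length in jobs:
        loads[loads.index(min(loads))] += length
    return loads

jobs, m = [2, 3, 4, 6, 2, 2], 3
makespan = max(greedy_schedule(jobs, m))
# Two lower bounds on the optimal makespan, as in the analysis:
# the longest job, and the average load per machine.
opt_lb = max(max(jobs), sum(jobs) / m)
assert makespan <= 2 * opt_lb
```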

  13. List Scheduling. • Greedy is (2-1/m)-competitive [Graham '66] • Lower bound ≈ 1.88 [Rudin III, Chandrasekaran '03] • Best known ratio ≈ 1.92 [Albers '99] [Fleischer, Wahl '00] • Lots of work on randomized algorithms, preemptive scheduling, …

  14. List Scheduling on Related Machines. Related machines: machines may have different speeds. [figure: three processors with speeds 1, 0.5, and 0.25 processing unit jobs]

  15. Algorithm 2PACK(L): schedule each job on the slowest machine whose load will not exceed 2L. (A little birdie says: "Hey, the opt makespan is at most L.") [figure: jobs packed on three processors of speeds 1, 0.5, 0.25, up to height 2L]
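A minimal sketch of 2PACK in Python (the speeds, job lengths and function names are my own illustration): machines are tried from slowest to fastest, and a job goes on the first machine where the resulting load stays within 2L.

```python
def two_pack(jobs, speeds, L):
    """Try to schedule jobs on related machines so that no machine's
    load (total assigned length / speed) exceeds 2L.
    speeds must be sorted from slowest to fastest.  Returns the final
    per-machine loads on success, or None if some job fits nowhere."""
    assigned = [0.0] * len(speeds)   # total job length on each machine
    for length in jobs:
        for i, s in enumerate(speeds):           # slowest machine first
            if (assigned[i] + length) / s <= 2 * L:
                assigned[i] += length
                break
        else:
            return None                          # 2PACK fails on this job
    return [assigned[i] / s for i, s in enumerate(speeds)]

loads = two_pack([4, 1, 2, 3, 2], speeds=[0.25, 0.5, 1.0], L=6)
assert loads is not None and max(loads) <= 12    # all loads within 2L
```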

  16. Lemma: If the little birdie is right (opt makespan ≤ L) then 2PACK will succeed. Proof: Suppose 2PACK fails on job h. • h's length on processor 1 is ≤ L, so the load of processor 1 must be > L (otherwise h would fit within 2L). • Let r = the first processor with load ≤ L (or m+1, if there is no such processor). • Claim: if opt executes a job k on a machine in {r, r+1, …, m} then so does 2PACK. [figure: opt's and 2PACK's schedules on machines 1, …, m side by side, with levels L and 2L marked]

  17. Lemma: If the little birdie is right (opt makespan ≤ L) then 2PACK will succeed. Proof (continued). Claim: if opt executes a job k on a machine in {r, r+1, …, m} then so does 2PACK. Indeed, suppose opt executes k on such a machine: then k's length on processor r is ≤ L, so k fits on r (whose load is ≤ L), and 2PACK schedules k on r or on a slower machine. [figure: job k in opt's schedule, and the slot it would fit into on processor r in 2PACK's schedule]

  18. Lemma: If the little birdie is right (opt makespan ≤ L) then 2PACK will succeed. Proof (continued): • In other words: if 2PACK executes k on a machine in {1, 2, …, r-1} then so does opt. • So opt's (speed-weighted) total load on processors {1, 2, …, r-1} is > (r-1)·L. • So some processor of opt has load > L -- contradiction. [figure: loads exceeding L on processors 1, …, r-1 in both schedules]

  19. Algorithm: 1. Choose d1 < d2 < d3 < … (makespan estimates). Let Bj = 2·(d1 + d2 + … + dj); "bucket" j is the time interval [Bj-1, Bj]. 2. j = 1; while there are unassigned jobs: apply 2PACK with L = dj in bucket j; if 2PACK fails on job k, let j = j+1 and continue (starting with job k).
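The bucket-doubling loop can be sketched as below (self-contained, with 2PACK re-stated; the instance data and names are illustrative). Each bucket j runs 2PACK with L = dj = 2^j, and on failure the failing job starts the next bucket:

```python
def two_pack(jobs, speeds, L):
    """Schedule jobs on related machines (speeds slowest first) keeping all
    loads <= 2L; return (assigned lengths per machine, failing job index)."""
    lengths = [0.0] * len(speeds)
    for idx, length in enumerate(jobs):
        for i, s in enumerate(speeds):
            if (lengths[i] + length) / s <= 2 * L:
                lengths[i] += length
                break
        else:
            return lengths, idx            # 2PACK fails on job idx
    return lengths, None

def doubling_schedule(jobs, speeds):
    """Run 2PACK in buckets with dj = 2^j; bucket j occupies the time
    interval [B_{j-1}, B_j] where B_j = 2*(d_1 + ... + d_j).
    Returns the overall makespan."""
    start, j, B = 0, 1, 0.0                # B tracks B_{j-1}
    makespan = 0.0
    while start < len(jobs):
        d = 2 ** j
        lengths, fail = two_pack(jobs[start:], speeds, d)
        loads = [lengths[i] / s for i, s in enumerate(speeds)]
        makespan = B + max(loads)          # this bucket ends by B_{j-1} + 2d
        B += 2 * d
        if fail is None:
            return makespan
        start += fail                      # restart with the failing job
        j += 1

speeds = [0.25, 0.5, 1.0]
jobs = [3, 5, 2, 6, 4, 1]
ms = doubling_schedule(jobs, speeds)
assert ms <= 2 * (2 + 4 + 8)               # makespan <= B_j for the final j
```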

  20. [figure: the time axis divided into buckets [B0,B1], [B1,B2], …; job k fails in bucket j and becomes the first job of bucket j+1]

  21. Analysis: • Suppose the optimal makespan is u. • Choose j such that dj-1 < u ≤ dj. • Then 2PACK will succeed in the j'th bucket (L = dj), • so the algorithm's makespan is ≤ 2·(d1 + d2 + … + dj), • i.e., the competitive ratio is 2 × (bidding ratio). We get ratio 8 for dj = 2^j.

  22. List Scheduling on Related Machines. • Theorem: There is an 8-competitive online algorithm for list scheduling on related machines (to minimize makespan). With randomization the ratio can be improved to 2e. [Aspnes, Azar, Fiat, Plotkin, Waarts '06] • World records: upper bound ≈ 5.828 (4.311 randomized), lower bound ≈ 2.438 (2 randomized) [Berman, Charikar, Karpinski '97] [Epstein, Sgall '00]

  23. Outline: Online bidding, Cow-path, Incremental medians (size approximation), Incremental medians (cost approximation), List scheduling on related machines, Minimum latency tours, Incremental clustering

  24. Minimum Latency Tour. X = metric space. P = v1 v2 … vh : a path in X. Latency of vi on P: latencyP(vi) = d(v1,v2) + … + d(vi-1,vi). (Total) latency of P = Σi latencyP(vi). Minimum Latency Tour Problem: Given X, find a tour (a path visiting all vertices) of minimum total latency. Goal: a polynomial-time approximation algorithm.
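The latency definitions translate directly into code (a small sketch; the example uses points on a line, chosen so the vertex latencies match the example that follows):

```python
def total_latency(path, dist):
    """Total latency of a path: the latency of each vertex is the
    distance travelled along the path to reach it, and we sum these."""
    latency, total = 0.0, 0.0
    for a, b in zip(path, path[1:]):
        latency += dist(a, b)     # latency of b = latency of a + d(a, b)
        total += latency
    return total                  # the starting vertex has latency 0

# Points on a line; distance = absolute difference.
# Successive vertex latencies are 2, 4, 8, 11, 15.
path = [0, 2, 4, 8, 11, 15]
assert total_latency(path, lambda a, b: abs(a - b)) == 2 + 4 + 8 + 11 + 15
```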

  25. [figure: a path through six vertices A-F whose successive vertices have latencies 2, 4, 8, 11, 15] Total latency = 2 + 4 + 8 + 11 + 15 = 40

  26. Minimum k-Tour Problem: find a shortest k-tour (a path that starts and ends at v1 and visits ≥ k different vertices). [figure: a 2-tour and a 4-tour starting from v1 in the example space]

  27. Algorithm: 1. Choose d1 < d2 < d3 < … 2. For each k compute the optimal k-tour Tk. 3. Choose p(1) < … < p(m) = n s.t. length(Tp(i)) = di. (For simplicity assume they exist.) 4. Output Q = Tp(1) Tp(2) … Tp(m) (concatenation). Denote Q = q1 q2 … qn (qi = the first point on Q different from q1, q2, …, qi-1).
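Step 4 (concatenation, and reading off first visits) might look like this in Python; the k-tours here are supplied by hand rather than computed, and all names are my own illustration:

```python
def concatenate_tours(tours):
    """Concatenate tours (each starting and ending at the same root) and
    return the order in which distinct vertices are first visited."""
    seen, order = set(), []
    for tour in tours:
        for v in tour:
            if v not in seen:       # q_i = first point not seen before
                seen.add(v)
                order.append(v)
    return order

# Hypothetical 2-tour, 4-tour, and 6-tour from the root 'v1':
tours = [['v1', 'a', 'v1'],
         ['v1', 'b', 'a', 'c', 'v1'],
         ['v1', 'd', 'c', 'b', 'a', 'e', 'v1']]
assert concatenate_tours(tours) == ['v1', 'a', 'b', 'c', 'd', 'e']
```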

  28. [figure: tours Tp(1), Tp(2), Tp(3) around v1, concatenated into Q]

  29. Lemma: Let S = s1 s2 … sn be a tour with optimum latency. Then latencyS(sk) ≥ (1/2)·length(Tk). Proof: Traversing the prefix s1 … sk of S forward and back gives a k-tour T, so 2·latencyS(sk) = length(T) ≥ length(Tk). [figure: the prefix of S up to sk, doubled into a k-tour T]

  30. Analysis: For p(j-1) < k ≤ p(j): • latencyS(sk) ≥ (1/2)·length(Tk) ≥ (1/2)·length(Tp(j-1)) = dj-1/2 • qk will be visited in Tp(j) (or earlier), so latencyQ(qk) ≤ d1 + d2 + … + dj • i.e., the ratio is 2 × (bidding ratio). We get ratio 8 for dj = 2^j … if we can compute k-tours efficiently !!!

  31. If X is a weighted tree, optimal k-tours can be computed in polynomial time… • Dynamic programming: W.l.o.g. assume X is a rooted binary tree. For a node u with children v and t reached by edges of lengths x and y: optk(u) = minimum of 2x + optk-1(v), 2y + optk-1(t), and minj { 2x + optj(v) + 2y + optk-1-j(t) }. Theorem: There is a polynomial-time 8-approximation algorithm for minimum latency tours on weighted trees. [Blum, Chalasani, Coppersmith, Pulleyblank, Raghavan, Sudan '94] [figure: a weighted tree rooted at u, with subtrees below children v and t]
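The dynamic program can be sketched as follows (my own formulation: best[k] is the minimum length of a closed walk that starts and ends at the subtree's root and visits k of its vertices; children are merged one at a time, which also handles non-binary nodes):

```python
def k_tours(tree, root):
    """tree: dict mapping node -> list of (child, edge_length).
    Returns best, where best[k] = min length of a closed walk from root
    visiting k vertices of root's subtree (the root itself counts)."""
    best = {1: 0}                          # visit only the root: length 0
    for child, w in tree.get(root, []):
        child_best = k_tours(tree, child)
        merged = dict(best)                # option: skip this child entirely
        for k0, c0 in best.items():
            for a, ca in child_best.items():
                k = k0 + a                 # take a vertices from this child
                cost = c0 + 2 * w + ca     # pay the edge w in both directions
                if k not in merged or cost < merged[k]:
                    merged[k] = cost
        best = merged
    return best

tree = {'u': [('v', 1), ('t', 2)]}         # a root with two leaf children
best = k_tours(tree, 'u')
assert best == {1: 0, 2: 2, 3: 6}
```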

  32. Can we do better? Choose a random direction (clockwise or counter-clockwise) and traverse each Tp(j) in this direction. Expected latency of u = d1 + d2 + … + dj-1 + dj/2. We get ratio 6 for dj = 2^j. [figure: tour Tp(j) around v1 with a point u on it, traversed in a random direction]

  33. Can we do even better? Instead of dj = 2^j choose dj = c^(j+x), where c is the constant from the cow-path problem and x is random in [0,1). • We don't really need randomization: • choose the better direction (clockwise or counter-clockwise) • there are only O(n) x's that matter, so try them all. Theorem: There is a polynomial-time 3.591-approximation algorithm for minimum latency tours on weighted trees [Goemans, Kleinberg '98]. Can be extended to arbitrary spaces, with ratio 3.591 [Chaudhuri, Godfrey, Rao, Talwar '03].

  34. Outline: Online bidding, Cow-path, Incremental medians (size approximation), Incremental medians (cost approximation), List scheduling on related machines, Minimum latency tours, Incremental clustering

  35. k-Clustering. • X = metric space. • For C ⊆ X, diameter(C) = maximum distance between points in C. • k-Clustering Problem: Given k, partition X into k disjoint clusters C1, …, Ck to minimize the maximum diameter(Cj). • Offline: • approximable with ratio 2 [Gonzalez '85] [Hochbaum, Shmoys '85] • lower bound of 2 for polynomial algorithms (unless P = NP) [Feder, Greene '88] [Bern, Eppstein '96]
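The offline factor-2 result uses farthest-point clustering in the style of [Gonzalez '85]; a minimal sketch (the point set is illustrative, with points on a line):

```python
def farthest_point_clusters(points, k, dist):
    """Pick k centers greedily (each new center is the point farthest
    from the centers chosen so far), then assign every point to its
    nearest center.  The max cluster diameter is at most 2 * optimum."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(dist(p, c) for c in centers)))
    clusters = {c: [] for c in centers}
    for p in points:
        clusters[min(centers, key=lambda c: dist(c, p))].append(p)
    return clusters

points = [0, 1, 2, 10, 11, 25]
clusters = farthest_point_clusters(points, 3, lambda a, b: abs(a - b))
assert sorted(map(sorted, clusters.values())) == [[0, 1, 2], [10, 11], [25]]
```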

  36. [figure: eight points A-H, k=3] 3-Clustering with maximum diameter 5

  37. [figure: the same eight points A-H, k=3] 3-Clustering with maximum diameter 3

  38. Incremental k-Clustering (a different model than incremental medians !!!). • Problem: Maintain a k-clustering when points of X arrive online. • Allowed operations: • add a point to a cluster • merge clusters • create a new singleton cluster. • Goal: an online competitive algorithm (polynomial-time).

  39. [figure: points A, C, D, G have arrived; k=3] diameter = 0

  40. [figure: point E arrives; k=3] diameter = 2

  41. [figure: point H arrives; k=3] diameter = 3

  42. [figure: the clustering after merging; k=3] diameter = 3

  43. Notation and terminology: • the algorithm's clusters are C1, C2, …, Ck' with k' ≤ k • in each Ci fix a center oi • radius of Ci = max distance between x ∈ Ci and oi • diameter of Ci ≤ 2·(radius of Ci). [figure: cluster Ci with center oi and its radius]

  44. Procedure CleanUp(z). Goal: merge some of the clusters C1, C2, …, Ck' so that afterwards all inter-center distances are > z. 1. Find a maximal set J of clusters with all inter-center distances > z. 2. For each cluster Ca ∉ J choose Cb ∈ J with d(oa,ob) ≤ z and merge Ca into Cb (with center ob).
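CleanUp translates almost line for line (a sketch with my own representation: each cluster is a (center, members) pair, and a greedy scan finds the maximal set J):

```python
def clean_up(clusters, z, dist):
    """clusters: list of (center, members).  Merge clusters so that all
    inter-center distances afterwards are > z."""
    # Step 1: greedily build a maximal set J with pairwise distances > z.
    J = []
    for c, members in clusters:
        if all(dist(c, c2) > z for c2, _ in J):
            J.append((c, list(members)))
    # Step 2: merge every other cluster into some cluster of J whose
    # center is within z (one exists, since J is maximal).
    for c, members in clusters:
        if not any(c == c2 for c2, _ in J):
            c2, m2 = next(t for t in J if dist(c, t[0]) <= z)
            m2.extend(members)
    return J

clusters = [(0, [0, 1]), (3, [3]), (10, [10, 12]), (12, [13])]
merged = clean_up(clusters, z=4, dist=lambda a, b: abs(a - b))
assert [c for c, _ in merged] == [0, 10]   # surviving centers are > z apart
```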

  45. Lemma: If the max radius before CleanUp is h then after CleanUp it is ≤ h+z. Proof: follows from the ∆ inequality.

  46. Lemma: If the max radius before CleanUp is h then after CleanUp it is ≤ h+z. Proof: follows from the ∆ inequality: a point v of a merged cluster Ca is within h of oa, and oa is within z of the new center ob. [figure: a merged cluster, with old radius h and center gap z]

  47. Algorithm: 1. Choose d1 < d2 < d3 < … 2. Initially C1, C2, …, Ck are singletons (the first k points). (Assume the min distance between these points is > 1.) 3. j ← 1. 4. Repeat: when a new point x arrives: if d(x,oi) ≤ dj for some i, add x to Ci; else if k' < k, set k' ← k'+1 and Ck' ← {x}; else create a temporary cluster Ck+1 ← {x}, and while there are k+1 clusters: j ← j+1 and do CleanUp with z = dj (merge clusters). • Invariant at checkpoint j: all inter-center distances are > dj.
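Putting the pieces together, the whole incremental algorithm might be sketched like this (self-contained, with its own condensed CleanUp that merges in the same pass; the 1-D point stream and all names are illustrative):

```python
def incremental_clustering(points, k):
    """Maintain at most k clusters (center, members) over a stream of
    1-D points, doubling the merge distance dj = 2^j when needed."""
    dist = lambda a, b: abs(a - b)

    def clean_up(clusters, z):
        # Greedy maximal set J with pairwise center distances > z;
        # a cluster that fails the test is merged into an earlier
        # member of J within distance z (which must exist).
        J = []
        for c, members in clusters:
            if all(dist(c, c2) > z for c2, _ in J):
                J.append((c, list(members)))
            else:
                c2, m2 = next(t for t in J if dist(c, t[0]) <= z)
                m2.extend(members)
        return J

    clusters, j = [], 1
    for x in points:
        for c, members in clusters:
            if dist(x, c) <= 2 ** j:          # dj = 2^j
                members.append(x)
                break
        else:
            clusters.append((x, [x]))         # new (possibly temporary) cluster
            while len(clusters) > k:          # too many: raise j and clean up
                j += 1
                clusters = clean_up(clusters, 2 ** j)
    return clusters

clusters = incremental_clustering([0, 1, 40, 41, 100, 3, 60], k=3)
assert len(clusters) <= 3
```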

  48. Example: k = 4

  49. Example: k = 4 (continued). [figure: clusters being merged at distances dj+1, dj+2]

  50. Analysis: At checkpoint j: • Before clean-up: k+1 clusters with inter-center distances > dj-1, so the opt diameter > dj-1. • After clean-up: max radius ≤ d1 + d2 + … + dj, so max diameter ≤ 2·(d1 + d2 + … + dj) • i.e., the ratio is 2 × (bidding ratio). We get ratio 8 for dj = 2^j.
