Randomization in Graph Optimization Problems

  1. Randomization in Graph Optimization Problems David Karger MIT http://theory.lcs.mit.edu/~karger

  2. Randomized Algorithms • Flip coins to decide what to do next • Avoid hard work of making “right” choice • Often faster and simpler than deterministic algorithms • Different from average-case analysis • Input is worst case • Algorithm adds randomness

  3. Methods • Random selection • if most candidate choices are “good”, then a random choice is probably good • Random sampling • generate a small random subproblem • solve, extrapolate to whole problem • Monte Carlo simulation • simulations estimate event likelihoods • Randomized Rounding for approximation

  4. Cuts in Graphs • Focus on undirected graphs • A cut is a partition of the vertices into two sets • Its value is the number (or total weight) of crossing edges

  5. Optimization with Cuts • Cut values determine solution of many graph optimization problems: • min-cut / max-flow • multicommodity flow (sort-of) • bisection / separator • network reliability • network design • Randomization helps solve these problems

  6. Presentation Assumption • For entire presentation, we consider unweighted graphs (all edges have weight/capacity one) • All results apply unchanged to arbitrarily weighted graphs • Integer weights = parallel edges • Rational weights scale to integers • Analysis unaffected; only some implementation details change

  7. Basic Probability • Conditional probability • Pr[A ∩ B] = Pr[A] × Pr[B | A] • Independent events multiply: • Pr[A ∩ B] = Pr[A] × Pr[B] • Union bound • Pr[X ∪ Y] ≤ Pr[X] + Pr[Y] • Linearity of expectation: • E[X + Y] = E[X] + E[Y]

  8. Random Selection for Minimum Cuts Random choices are good when problems are rare

  9. Minimum Cut • Smallest cut of graph • Cheapest way to separate into 2 parts • Various applications: • network reliability (small cuts are weakest) • subtour elimination constraints for TSP • separation oracle for network design • Not s-t min-cut

  10. Max-flow/Min-cut • s-t flow: edge-disjoint packing of s-t paths • s-t cut: a cut separating s and t • [FF] (Ford-Fulkerson): s-t max-flow = s-t min-cut • max-flow saturates all s-t min-cuts • most efficient way to find s-t min-cuts • [GH] (Gomory-Hu): min-cut is “all-pairs” s-t min-cut • find using n flow computations

  11. Flow Algorithms • Push-relabel [GT]: • push “excess” around graph till it’s gone • max-flow in O*(mn) (note: O* hides log factors) • recent O*(m^{3/2}) [GR] • min-cut in O*(mn²) --- “harder” than flow • Pipelining [HO]: • save push/relabel data between flows • min-cut in O*(mn) --- “as easy” as flow

  12. Contraction • Find edge that doesn’t cross min-cut • Contract (merge) endpoints to 1 vertex

  13. Contraction Algorithm • Repeat n - 2 times: • find non-min-cut edge • contract it (keep parallel edges) • Each contraction decrements #vertices • At end, 2 vertices left • unique cut • corresponds to min-cut of starting graph

  14. Picking an Edge • Must contract non-min-cut edges • [NI]: O(m)-time algorithm to pick edge • n contractions: O(mn) time for min-cut • slightly faster than flows • If only we could find an edge faster… • Idea: min-cut edges are few

  15. Randomize • Repeat until 2 vertices remain: • pick a random edge • contract it • (keep fingers crossed)
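
  To make the procedure concrete, here is a minimal Python sketch of one contraction trial; the function name contract_min_cut_once and the edge-list representation (vertices labeled 0..n-1, graph assumed connected) are illustrative choices, not from the talk:

    import random

    def contract_min_cut_once(n, edges):
        """One trial of randomized contraction on an n-vertex multigraph.

        edges: list of (u, v) pairs over vertices 0..n-1; the graph is
        assumed connected. Returns the value of the cut found, which is
        a minimum cut with probability >= 2/(n(n-1)).
        """
        parent = list(range(n))

        def find(x):  # union-find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        pool = list(edges)  # surviving edges (parallel edges kept)
        remaining = n
        while remaining > 2:
            i = random.randrange(len(pool))
            u, v = pool[i]
            ru, rv = find(u), find(v)
            if ru == rv:  # edge became a self-loop earlier: discard it
                pool[i] = pool[-1]
                pool.pop()
                continue
            parent[ru] = rv  # contract: merge the edge's endpoints
            remaining -= 1
        # cut value: original edges whose endpoints sit in different supervertices
        return sum(1 for u, v in edges if find(u) != find(v))

  Each trial runs in O(m) time with just an edge array, matching slide 20.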

  16. Analysis I • Min-cut is small --- few edges • Suppose graph has min-cut c • Then minimum degree at least c • Thus at least nc/2 edges • Random edge is probably safe • Pr[min-cut edge] ≤ c/(nc/2) = 2/n • (easy generalization to capacitated case)

  17. Analysis II • Algorithm succeeds if it never accidentally contracts a min-cut edge • Contracts #vertices from n down to 2 • When k vertices remain, chance of error is 2/k • thus, chance of being right is 1 - 2/k • Pr[always right] is product of probabilities of being right each time

  18. Analysis III • Pr[always right] = ∏_{k=3}^{n} (1 - 2/k) = (n-2)/n × (n-3)/(n-1) × ··· × 1/3 = 2/(n(n-1)) ≈ 2/n² • …not too good!

  19. Repetition • Repetition amplifies success probability • basic failure probability 1 - 2/n² • so repeat 7n² times • Pr[all 7n² trials fail] ≤ (1 - 2/n²)^{7n²} < e^{-14} < 10^{-6}
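
  A sketch of the amplification step, reusing the hypothetical contract_min_cut_once from the sketch above:

    def min_cut_whp(n, edges):
        """Run 7n^2 independent contraction trials and keep the smallest
        cut; by the bound above, it is a minimum cut with failure
        probability below 10^-6."""
        return min(contract_min_cut_once(n, edges) for _ in range(7 * n * n))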

  20. How fast? • Easy to perform 1 trial in O(m) time • just use array of edges, no data structures • But need n² trials: O(mn²) time • Simpler than flows, but slower

  21. An improvement [KS] • When k vertices remain, error probability is 2/k • big when k is small • Idea: once k is small, change the algorithm • algorithm needs to be safer • but can afford to be slower • Amplify by repetition! • Repeat base algorithm many times

  22. Recursive Algorithm • Algorithm RCA(G, n): {G has n vertices} • repeat twice: • randomly contract G to n/√2 vertices (50-50 chance of avoiding the min-cut) • RCA(G, n/√2)
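
  A compact sketch of RCA in the same illustrative edge-list setting as before; the helper contract_to and the small brute-force base case are assumptions of this sketch:

    import math
    import random

    def contract_to(n, edges, t):
        """Randomly contract an n-vertex multigraph (assumed connected)
        down to t supervertices; returns (t, edges) relabeled 0..t-1."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        pool = list(edges)
        remaining = n
        while remaining > t:
            i = random.randrange(len(pool))
            u, v = pool[i]
            ru, rv = find(u), find(v)
            if ru == rv:  # self-loop: discard
                pool[i] = pool[-1]
                pool.pop()
                continue
            parent[ru] = rv
            remaining -= 1
        label, new_edges = {}, []
        for u, v in pool:
            ru, rv = find(u), find(v)
            if ru != rv:  # keep surviving crossing edges, relabeled
                new_edges.append((label.setdefault(ru, len(label)),
                                  label.setdefault(rv, len(label))))
        return remaining, new_edges

    def rca(n, edges):
        """Recursive contraction: returns a cut value that is the
        minimum cut with probability Omega(1/log n)."""
        if n <= 6:  # tiny base case: fall back on repeated plain trials
            return min(contract_min_cut_once(n, edges) for _ in range(36))
        t = max(2, math.ceil(n / math.sqrt(2)))
        return min(rca(*contract_to(n, edges, t)) for _ in range(2))

  Solving the recurrence T(n) = 2T(n/√2) + O(n²) gives the running time claimed on the next slide.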

  23. Main Theorem • On any capacitated, undirected graph, Algorithm RCA • runs in O*(n²) time with simple structures • finds min-cut with probability ≥ 1/log n • Thus, O(log n) repetitions suffice to find the minimum cut (failure probability 10^{-6}) in O(n² log² n) time.

  24. Proof Outline • Graph has O(n²) (capacitated) edges • So O(n²) work to contract, then two subproblems of size n/√2 • T(n) = 2T(n/√2) + O(n²) = O(n² log n) • Algorithm fails if both iterations fail • Iteration succeeds if contractions and recursion succeed • P(n) = 1 - [1 - ½P(n/√2)]² = Ω(1/log n)

  25. Failure Modes • Monte Carlo algorithms always run fast and probably give you the right answer • Las Vegas algorithms probably run fast and always give you the right answer • To make a Monte Carlo algorithm Las Vegas, need a way to check answer • repeat till answer is right • No fast min-cut check known (flow slow!)

  26. How do we verify a minimum cut?

  27. Enumerating Cuts The probabilistic method, backwards

  28. Cut Counting • Original CA finds any given min-cut with probability at least 2/(n(n-1)) • Only one cut found per run • “find cut C” events are disjoint, so probabilities add • So at most n(n-1)/2 min-cuts • otherwise probabilities would sum to more than one • Tight • Cycle has exactly this many min-cuts

  29. Enumeration • RCA as stated has constant probability of finding any given min-cut • If run O(log n) times, probability of missing a given min-cut drops to 1/n³ • But only n² min-cuts • So, by the union bound, probability of missing any is at most 1/n • So, with probability 1 - 1/n, find all • O(n² log³ n) time

  30. Generalization • If G has min-cut c, a cut of value ≤ αc is an α-mincut • Lemma: contraction algorithm finds any given α-mincut with probability Ω(n^{-2α}) • Proof: just add a factor to basic analysis • Corollary: O(n^{2α}) α-mincuts • Corollary: can find all in O*(n^{2α}) time • Just change contraction factor in RCA

  31. Summary • A simple fast min-cut algorithm • Random selection avoids rare problems • Generalization to near-minimum cuts • Bound on number of small cuts • Probabilistic method, backwards

  32. Random Sampling

  33. Random Sampling • General tool for faster algorithms: • pick a small, representative sample • analyze it quickly (small) • extrapolate to original (representative) • Speed-accuracy tradeoff • smaller sample means less time • but also less accuracy

  34. A Polling Problem • Population of size m • Subset of c red members • Goal: estimate c • Naïve method: check whole population • Faster method: sampling • Choose random subset of population • Use relative frequency in sample as estimate for frequency in population

  35. Analysis: Chernoff Bound • Random variables Xᵢ ∈ [0,1] • Sum X = Σ Xᵢ • Bound deviation from expectation: Pr[|X - E[X]| ≥ ε E[X]] < exp(-ε² E[X] / 4) • “Probably, X ∈ (1±ε) E[X]” • If E[X] ≥ 4(ln n)/ε², “tight concentration” • deviation by ε has probability < 1/n
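
  One way to see the bound at work is a small Monte Carlo check; the function name and default parameters below are illustrative. It compares the empirical deviation frequency for a sum of fair coin flips with the bound above:

    import math
    import random

    def chernoff_check(n_vars=200, eps=0.2, trials=5000):
        """Empirical Pr[|X - E[X]| >= eps*E[X]] for X = a sum of fair
        coin flips, next to the bound exp(-eps^2 * E[X] / 4)."""
        mean = n_vars / 2.0
        bad = sum(
            abs(sum(random.random() < 0.5 for _ in range(n_vars)) - mean) >= eps * mean
            for _ in range(trials)
        )
        return bad / trials, math.exp(-eps * eps * mean / 4)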

  36. Application to Polling • Choose each member with probability p • Let X be total number of reds seen • Then E[X] = pc • So estimate c by ĉ = X/p • Note ĉ is accurate to within 1±ε iff X is within 1±ε of its expectation: ĉ = X/p ∈ (1±ε) E[X]/p = (1±ε) c
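
  In code the estimator is one line; a minimal sketch, assuming the population is given as a list of booleans (True = red):

    import random

    def estimate_reds(is_red, p):
        """Polling: sample each member independently with probability p;
        X/p is an unbiased estimate of the number of red members c."""
        x = sum(1 for red in is_red if random.random() < p and red)
        return x / p

  By the analysis on the next slide, the estimate lands within (1±ε)c with probability 1 - 1/n once pc > 4(ln n)/ε².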

  37. Analysis • Let Xᵢ = 1 if i-th red item chosen, else 0 • Then X = Σ Xᵢ • Chernoff bound applies • Pr[deviation by ε] < exp(-ε² pc / 4) • < 1/n if pc > 4(ln n)/ε² • Pretty tight • if pc < 1, likely no red samples • so no meaningful estimate

  38. Sampling for Min-Cuts

  39. Min-cut Duality • [Edmonds]: min-cut = max tree packing • convert to directed graph • “source” vertex s (doesn’t matter which) • spanning trees directed away from s • [Gabow] “augmenting trees” • add a tree in O*(m) time • min-cut c (via max packing) in O*(mc) • great if m and c are small…

  40. Example • [Figure: a small graph with min-cut 2; its directed version packs 2 directed spanning trees, and the directed min-cut is 2]

  41. Random Sampling • Gabow’s algorithm great if m, c small • Random sampling • reduces m, c • scales cut values (in expectation) • if we pick half the edges, each cut keeps about half its edges • So find tree packings, cuts in samples • Problem: maybe some large deviations

  42. Sampling Theorem • Given graph G, build a sample G(p) by including each edge with probability p • Cut of value v in G has expected value pv in G(p) • Definition: “constant” ρ = 8(ln n)/ε² • Theorem: With high probability, all exponentially many cuts in G(ρ/c) have (1±ε) times their expected values.
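
  Building the skeleton is the easy part; a minimal sketch with illustrative names:

    import math
    import random

    def rho(n, eps):
        """The oversampling constant from the theorem: 8 ln(n) / eps^2."""
        return 8.0 * math.log(n) / (eps * eps)

    def sample_graph(edges, p):
        """Skeleton G(p): keep each edge independently with probability p."""
        return [e for e in edges if random.random() < p]

  With p = ρ/c the skeleton has only about ρm/c edges, yet by the theorem every cut stays within (1±ε) of p times its original value.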

  43. A Simple Application • [Gabow] packs trees in O*(mc) time • Build G(ρ/c) • minimum expected cut ρ • by theorem, min-cut probably near ρ • find min-cut in O*(ρm) time using [Gabow] • corresponds to near-min-cut in G • Result: (1+ε) times min-cut in O*(m/ε²) time

  44. Proof of Sampling: Idea • Chernoff bound says probability of large deviation in cut value is small • Problem: exponentially many cuts. Perhaps some deviate a great deal • Solution: we showed there are few small cuts • only small cuts are likely to deviate much • but they are few, so Chernoff plus a union bound suffices

  45. Proof of Sampling • Sampled with probability ρ/c, a cut of value αc has mean αρ • [Chernoff]: deviates from expected size by more than ε with probability at most n^{-3α} • At most n^{2α} cuts have value αc • Pr[any cut of value αc deviates] = O(n^{-α}) • Sum over all α ≥ 1

  46. Las Vegas Algorithms Finding Good Certificates

  47. Approximate Tree Packing • Break edges into c/ρ random groups • Each looks like a sample at rate ρ/c • O*(ρm/c) edges • each has min expected cut ρ • so theorem says min-cut ≥ (1 - ε)ρ • So each has a packing of size (1 - ε)ρ • [Gabow] finds it in time O*(ρ²m/c) per group • so overall time is (c/ρ) × O*(ρ²m/c) = O*(ρm)
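
  The random grouping itself is simple; a minimal sketch with an illustrative name:

    import random

    def random_groups(edges, k):
        """Split the edges into k random groups; each group on its own is
        distributed like a sample keeping each edge with probability 1/k."""
        groups = [[] for _ in range(k)]
        for e in edges:
            groups[random.randrange(k)].append(e)
        return groups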

  48. Las Vegas Algorithm • Packing algorithm is Monte Carlo • Previously found approximate cut (faster) • If close, each “certifies” the other • any cut value is ≥ the optimum • any packing value is ≤ the optimum • If not, re-run both • Result: Las Vegas, expected time O*(ρm)

  49. Exact Algorithm • Randomly partition edges in two groups • each like a ½-sample: ε = O*(c^{-1/2}) • Recursively pack trees in each half • c/2 - O*(√c) trees each • Merge packings • gives packing of size c - O*(√c) • augment to maximum packing: O*(m√c) • T(m,c) = 2T(m/2, c/2) + O*(m√c) = O*(m√c)
