
The Submodular Welfare Problem



  1. The Submodular Welfare Problem Lecturer: Moran Feldman Based on “Optimal Approximation for the Submodular Welfare Problem in the Value Oracle Model” By Jan Vondrák

  2. Talk Outline • Preliminaries and the problems • Reducing to continuous problems • Approximating the continuous problems • Constructing an integral solution • Summary

  3. Combinatorial Auctions Instance • A set P of n players • A set Q of m items • A utility function wj: 2Q → ℝ+ for each player. Objective • Let Qj ⊆ Q denote the set of items the jth player gets. • The utility of the jth player is wj(Qj). • Distribute the items among the players, maximizing the sum of utilities Σj wj(Qj).

  4. Combinatorial Auction - Example • Items: classrooms, markers, multimedia keys • Players: TAs (a TA with a presentation, a TA without a presentation)

  5. Oracles Obstacle The utility functions wj have exponential-size representations in the number of items. Solutions • Considering a special class of utility functions with a polynomial-size representation. • Accessing the utility functions through oracles (the solution we assume): • Value oracle – Given a set Qj of items, returns the value wj(Qj). • Demand oracle – Given an assignment p: Q → ℝ+ of prices to items, finds a set Qj maximizing wj(Qj) – Σq∈Qj p(q). (A more powerful oracle.)
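
To make the two oracle types concrete, here is a small Python sketch added for illustration (not part of the original talk): a value oracle is just a set-to-number query, while a demand oracle answers a price query. The brute-force demand oracle below is only meant for tiny instances and does not reflect how demand oracles are actually realized.

```python
# Sketch added for illustration: the two oracle types for a tiny instance,
# where brute force over all subsets is still feasible.
from itertools import chain, combinations

def value_oracle(w, S):
    """Value oracle: given a set S of items, return w(S)."""
    return w(frozenset(S))

def demand_oracle(w, items, prices):
    """Demand oracle: return a set S maximizing w(S) - total price of S.
    Brute force over all subsets, so only usable for very small item sets."""
    best, best_val = frozenset(), w(frozenset())
    all_subsets = chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))
    for T in all_subsets:
        T = frozenset(T)
        val = w(T) - sum(prices[q] for q in T)
        if val > best_val:
            best, best_val = T, val
    return best
```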

  6. Utility Functions Assumptions about the utility functions, and the approximation achievable with a polynomial number of oracle queries: • Assuming nothing – none possible. • Assuming monotone submodular* – a (1 – 1/e)-approximation at best. (* Under the value oracle model.)

  7. Set Function Properties Given a set function f: 2S → ℝ+, we say that f is: • Monotone – if f(A) ≤ f(B) for any A ⊆ B ⊆ S. • Submodular – if f(A ∪ B) + f(A ∩ B) ≤ f(A) + f(B) for any A, B ⊆ S. • Say that wj is submodular, so what? • Consider a player j receiving a set Qj of items. • Let v be the additional value j would get if we assigned item q to j. Formally: v = wj(Qj ∪ {q}) – wj(Qj) • Assign to j an additional set of items Q’j. • If we assigned item q to j now, j would get no more additional value than before. Formally: v ≥ wj(Qj ∪ Q’j ∪ {q}) – wj(Qj ∪ Q’j) The utility of an item diminishes as the player gets more items!
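
As an added illustration (with a made-up coverage function), the following sketch checks this diminishing-returns property numerically: the marginal value of an item can only drop once the player holds more items.

```python
# Illustration (not from the talk): marginal values diminish for a
# submodular function, here a simple coverage function.
coverage = {"q1": {1, 2}, "q2": {2, 3}, "q3": {3, 4, 5}}

def w(items):
    """Coverage value: number of points covered by the given items."""
    covered = set()
    for q in items:
        covered |= coverage[q]
    return len(covered)

Qj = {"q1"}                      # items the player already holds
extra = {"q2"}                   # additional items assigned later
q = "q3"                         # the item whose marginal value we track

v_before = w(Qj | {q}) - w(Qj)                  # marginal value of q given Qj
v_after = w(Qj | extra | {q}) - w(Qj | extra)   # marginal value given more items
assert v_before >= v_after                      # submodularity: 3 >= 2 here
```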

  8. The Submodular Welfare Problem (SWP) Instance • A set P of n players • A set Q of m items • A monotone submodular utility function wj: 2Q → ℝ+ for each player, accessed via value oracles. Objective • Find a partition Q1, Q2, …, Qn of Q maximizing the total utility Σj wj(Qj).

  9. Matroid Definition An ordered pair (X, I), with I ⊆ 2X (the sets in I are called “independent sets”), such that: • There is an independent set: ∅ ∈ I • Monotonicity: A ⊆ B, B ∈ I ⇒ A ∈ I • Augmentation: If A, B ∈ I and |A| < |B|, then there exists b ∈ B – A such that A ∪ {b} ∈ I. Motivation: Generalization of many well known concepts: • Given a vector space, the sets of independent vectors form a matroid. • Given a graph, the forest sub-graphs form a matroid.

  10. Example – Forest Sub-Graphs Matroid (figure: a graph on the vertices a, b, c, d, e, f) X = {ab, bd, df, ef, ce, ca, bc, be, de}, I = {S ⊆ X | S is a forest} Example of an independent set: S = {ab, bc, df, ef} Property 1: ∅ ∈ I

  11. Example – Forest Sub-Graphs Matroid (cont.) Property 2: Monotonicity – Removing edges from a forest leaves it a forest. Property 3: Augmentation – Given two forests A, B with |A| < |B|, there is an edge e ∈ B – A such that A ∪ {e} is also a forest. Why? No forest can have more than |A| edges inside the connected components of A, so some edge of B must cross between two different components of A.
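
To make the forest example concrete, here is a small independence-oracle sketch for the graphic matroid, added as an illustration; it uses union-find to detect whether an edge set contains a cycle.

```python
# Illustration (not from the talk): independence oracle for the graphic
# matroid -- a set of edges is independent iff it forms a forest.
def is_forest(edges):
    """Return True iff the given edge set contains no cycle."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:          # both endpoints already connected -> cycle
            return False
        parent[ru] = rv       # union the two components
    return True

print(is_forest([("a", "b"), ("b", "c"), ("d", "f")]))   # True  (a forest)
print(is_forest([("a", "b"), ("b", "c"), ("c", "a")]))   # False (a triangle)
```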

  12. Submodular Maximization Subject to a Matroid Constraint (SMSMC) Instance • A ground set X. • A monotone submodular function f: 2X → ℝ+, accessed via a value oracle. • A matroid M = (X, I), accessed via a membership oracle. Objective • Find an independent set S ∈ I maximizing f(S). Motivation Generalizes known problems, for example: • SWP – (will be proven in a moment) • Max k-Cover – Given sets S1, S2, …, Sn, find k of them whose union is maximal. • Multiple Knapsack (exponential reduction) – Same as Knapsack, but multiple knapsacks are available.

  13. SWP → SMSMC Reduction Theorem SWP can be reduced to SMSMC. The Reduction • Ground set: X = P × Q • Given a set S ⊆ X, let Sj = {q ∈ Q | (j, q) ∈ S}, and define f: 2X → ℝ+ as f(S) = Σj wj(Sj). • The matroid M = (X, I) enforces the restriction that an item may be assigned to at most one player: I = {S ⊆ X | every item q ∈ Q appears in at most one pair of S}. Corollary We can focus on SMSMC from now on.
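
A minimal sketch of this reduction, added for illustration (`utilities[i]` stands for the value oracle wi of player i):

```python
# Sketch added for illustration: the SMSMC instance produced by the
# SWP -> SMSMC reduction.
def make_reduction(players, items, utilities):
    # Ground set: all (player, item) pairs.
    ground_set = [(i, q) for i in players for q in items]

    def f(S):
        # f(S) = sum over players of w_i(S_i), where S_i is the set of items
        # paired with player i in S.
        return sum(utilities[i]({q for (p, q) in S if p == i}) for i in players)

    def is_independent(S):
        # A set is independent iff each item appears in at most one pair.
        assigned = [q for (_, q) in S]
        return len(assigned) == len(set(assigned))

    return ground_set, f, is_independent
```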

  14. Switching to the Continuous World

  15. Continuous Set Functions Motivation • Let X be a ground set, and consider y ∈ [0, 1]X. • Intuitively, we can think of y as selecting each item j ∈ X to the extent yj. • We want to extend the properties of set functions to the continuous case. Notation • (x ∨ y)i = max {xi, yi} – x ∨ y is a generalization of union • (x ∧ y)i = min {xi, yi} – x ∧ y is a generalization of intersection Example: x = (0.4, 0, 0.3, 0.6, 1), y = (0.6, 1, 0.2, 0.2, 0.7), x ∨ y = (0.6, 1, 0.3, 0.6, 1), x ∧ y = (0.4, 0, 0.2, 0.2, 0.7) Definitions Let F: [0, 1]X → ℝ. F is: • monotone if x ≤ y ⇒ F(x) ≤ F(y) • submodular if F(x ∨ y) + F(x ∧ y) ≤ F(x) + F(y)

  16. Smooth Monotone Submodularity Definition A function F: [0, 1]X → ℝ is smooth monotone submodular if: • F has second partial derivatives throughout its domain. • For each j ∈ X, ∂F/∂yj ≥ 0 everywhere. ⇒ F is monotone. • For any i, j ∈ X (possibly i = j), ∂2F/∂yi∂yj ≤ 0 everywhere. ⇒ F is submodular (∂F/∂yj is non-increasing with respect to yi). ⇒ F is concave in all non-negative directions.

  17. Extension by Expectation Objective Given a set function f we want to extend it into a continuous function F. • Extension: F should coincide with f for integral values. • Properties Preservation: F should be smooth monotone submodular, assuming f is monotone and submodular. Extension • For y ∈ [0, 1]X, let us denote by ỹ the random set obtained from y by selecting each item j ∈ X independently with probability yj. • Let f: 2X → ℝ+ be a monotone submodular function. • The canonical extension of f into a continuous function is: F(y) = E[f(ỹ)] = ΣR⊆X f(R) Πj∈R yj Πj∉R (1 – yj) • Extension: If y is integral, ỹ contains exactly the items selected by y. • Properties Preservation: We need to prove that F is a smooth monotone submodular function.
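
Since F(y) is defined as an expectation, it can be estimated by sampling ỹ; a rough Python sketch added for illustration (the sample count is arbitrary):

```python
# Illustration (not from the talk): estimating the extension-by-expectation
# F(y) = E[f(y~)] by sampling the random set y~.
import random

def estimate_F(f, y, samples=1000):
    """Estimate F(y), where y maps each item to its fractional value."""
    total = 0.0
    for _ in range(samples):
        # Draw y~: include each item j independently with probability y[j].
        R = {j for j, yj in y.items() if random.random() < yj}
        total += f(R)
    return total / samples
```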

  18. Properties of F • The monotonicity requirement: ∂F/∂yj = E[f(ỹ ∪ {j}) – f(ỹ – {j})] ≥ 0 – follows from the monotonicity of f. • The submodularity requirement for i ≠ j: ∂2F/∂yi∂yj = E[f(ỹ ∪ {i, j}) – f(ỹ ∪ {i} – {j}) – f(ỹ ∪ {j} – {i}) + f(ỹ – {i, j})] ≤ 0 – follows from the submodularity of f. • The submodularity requirement for i = j: ∂2F/∂yj2 = 0, since F is linear in each coordinate separately.

  19. Matroid Polytopes Definition • For any set S ⊆ X we let 1S represent the characteristic vector of S in [0, 1]X. • Given a matroid M = (X, I), its matroid polytope P(M) is the set of convex combinations of vectors from {1S | S ∈ I}. Example X = {a, b, c}, I = {∅, {a}, {b}, {c}, {a, b}, {b, c}} Characteristic vectors: {a, b} → (1, 1, 0), {c} → (0, 0, 1), ∅ → (0, 0, 0) (figure: the polytope P(M) spanned by these characteristic vectors)

  20. Matroid Polytopes - Properties Definition A polytope P ⊆ ℝ+X is called down-monotone if for any 0 ≤ x ≤ y, y ∈ P ⇒ x ∈ P. Lemma For any matroid M, P(M) is down-monotone. Proof • Given 0 ≤ x ≤ y with y ∈ P(M), the following procedure gets us from y to x without leaving P(M). • Procedure: • While some yj > xj needs to be decreased: • Find a set S ∈ I in the convex combination of y containing j. • Since M is a matroid, S’ = S – {j} ∈ I. • Replace S by S’ in the convex combination continuously, until yj = xj or S is completely removed.

  21. Approximating the Continuous Case

  22. E [ỹ(2)] is high E [ỹ(1)] is low Marginal Value Definition • Let : 2X  be a set function. • Given a set S of items, the marginal value of item j is S(j) = (S j) – (S). •  is submodular  the marginal value of every item j diminishes as more items are added to S. What is it good for? • Continuous optimization algorithms often use F as guidance. • Implicitly, the derivative F/yj is used to estimate the importance of increasing yj. • Analogously, we use E [ỹ(j)] to estimate the importance of increasing yj.

  23. The Continuous Greedy Algorithm Input Matroid M = (X, I), monotone submodular function f: 2X → ℝ+. Pseudo Code • Set δ ← 1/n2 (n = |X|), t ← 0 and y(0) ← 0. • While t < 1 do • For each j, let ωj(t) be an estimation of E[fỹ(t)(j)] to an error of no more than OPT/n2. • Let s(t) be a maximum-weight independent set in M according to the weights ωj(t). • Set y(t + δ) ← y(t) + δ ∙ 1s(t), t ← t + δ • Return y(1)
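
The following schematic Python sketch (added for illustration) mirrors this pseudo code; it uses a far smaller sample count than the n^5 samples of the analysis, and finds s(t) with the matroid greedy algorithm described later in the talk.

```python
# Schematic sketch of the Continuous Greedy Algorithm (illustration only).
import random

def continuous_greedy(f, is_independent, ground_set, samples=100):
    """f: value oracle on sets; is_independent: membership oracle of the matroid."""
    items = list(ground_set)
    n = len(items)
    delta = 1.0 / (n * n)                       # step size 1/n^2
    y = {j: 0.0 for j in items}
    t = 0.0
    while t < 1.0:
        # omega_j: sampled estimate of the expected marginal value E[f_y~(j)]
        omega = {j: 0.0 for j in items}
        for _ in range(samples):
            R = {k for k in items if random.random() < y[k]}
            for j in items:
                omega[j] += (f(R | {j}) - f(R - {j})) / samples
        # s(t): maximum-weight independent set under the weights omega,
        # found by the matroid greedy algorithm described later in the talk
        s = set()
        for j in sorted(items, key=lambda e: -omega[e]):
            if is_independent(s | {j}):
                s.add(j)
        # y(t + delta) = y(t) + delta * 1_{s(t)}
        for j in s:
            y[j] = min(1.0, y[j] + delta)
        t += delta
    return y
```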

  24. Continuous Greedy Algorithm - Demonstration (figure: the fractional solution growing step by step: y(0), y(0.01), y(0.02), y(0.03), y(0.04), …) Next steps • Analyzing the algorithm’s performance. • Surveying the implementation of the algorithm.

  25. Lemma 1 Lemma 1 Let OPT = maxS∈I f(S). Consider any y ∈ [0, 1]X; then Σj∈O E[fỹ(j)] ≥ OPT – F(y) for an optimal solution O. Proof • Consider a specific optimal solution O ∈ I, and any given value of ỹ. • By the monotonicity and submodularity of f we know that: f(O) ≤ f(ỹ ∪ O) ≤ f(ỹ) + Σj∈O fỹ(j). • By taking the expectation over ỹ we get: OPT ≤ F(y) + Σj∈O E[fỹ(j)].

  26. Lemma 2 Lemma 2 Let y be the fractional solution found by the Continuous Greedy Algorithm; then F(y) ≥ (1 – 1/e – o(1)) ∙ OPT. Proof • Consider a specific time t, and let Δ(t) = y(t + δ) – y(t) = δ ∙ 1s(t). • Let D(t) be a random set containing each item j independently with probability Δj(t). Then F(y(t + δ)) ≥ E[f(ỹ(t) ∪ D(t))], by monotonicity, since: • Pr[j ∈ ỹ(t + δ)] = yj(t) + Δj(t) • Pr[j ∈ ỹ(t) ∪ D(t)] = 1 – (1 – yj(t))(1 – Δj(t)) ≤ yj(t) + Δj(t) Roadmap • We want to lower bound the increase in F at time t. • Taking a small enough δ, we can ignore the contribution from D(t)’s which are not singletons.

  27. Lemma 2 (cont.) Lower bounding the increase in F at time t • Considering only the events in which D(t) is a singleton set, and using the inequality (1 – δ)k ≥ 1 – kδ, the increase F(y(t + δ)) – F(y(t)) is at least (1 – O(nδ)) ∙ δ ∙ Σj∈s(t) E[fỹ(t)(j)]. Taking advantage of s(t) properties • We chose s(t) as the independent set maximizing Σj∈s(t) ωj(t). • However, ωj(t) is only an estimation of E[fỹ(t)(j)], up to an error of OPT/n2. Therefore Σj∈s(t) E[fỹ(t)(j)] ≥ Σj∈O E[fỹ(t)(j)] – 2 ∙ OPT/n.

  28. Lemma 2 (cont.) Continuing the bound derivation • By Lemma 1, Σj∈O E[fỹ(t)(j)] ≥ OPT – F(y(t)). • Defining OPT' = (1 – o(1)) ∙ OPT to absorb the error terms, the bound becomes F(y(t + δ)) – F(y(t)) ≥ δ ∙ (OPT' – F(y(t))). Corollary The distance to OPT' diminishes by a factor of (1 – δ) at each step.

  29. Lemma 2 (cont.) Wrapping up the proof • After all 1/δ steps have been performed we get: OPT' – F(y(1)) ≤ (1 – δ)1/δ ∙ (OPT' – F(y(0))) ≤ OPT'/e. • F is always non-negative, therefore F(y(1)) ≥ (1 – 1/e) ∙ OPT' = (1 – 1/e – o(1)) ∙ OPT. Algorithm Analysis - Summary Let y be the fractional solution returned by the Continuous Greedy Algorithm. • y is within P(M), since it is a convex combination of 1/δ = n2 characteristic vectors of independent sets from M. • The value of F at y is: F(y) ≥ (1 – 1/e – o(1)) ∙ OPT

  30. Continuous Greedy Algorithm - Implementation Problem Two steps of the Continuous Greedy Algorithm are not straightforward to implement: • Calculating ωj(t), the estimation of E[fỹ(t)(j)], to an error of no more than OPT/n2. • Finding a maximum-weight independent set in M according to the weights ωj(t). Implementing the first step w.h.p. • Each time E[fỹ(t)(j)] has to be estimated: • Perform n5 independent samples. • Use the average as the estimation ωj(t). • Notice that fỹ(t)(j) ≤ f({j}) ≤ OPT (unless j ∉ S for every S ∈ I). • By a Chernoff bound, the probability that |ωj(t) – E[fỹ(t)(j)]| > OPT/n2 is exponentially small. • By a union bound over the n3 estimations, w.h.p. all estimations are off by no more than OPT/n2.

  31. Finding a Maximum-Weight Independent Set Instance • A matroid M = (X, I) • A weight function w on the elements of X Objective Find an independent set S ∈ I maximizing w(S). The Greedy Algorithm • Sort the elements of X in non-increasing weight order: j1, j2, …, jn. • Start with S = ∅ • For k = 1 to n do: • If S ∪ {jk} ∈ I, add jk to S
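
A direct Python sketch of this greedy algorithm, added as an illustration (the independence oracle is passed in as a function):

```python
# Illustration: greedy algorithm for a maximum-weight independent set.
def greedy_max_weight(elements, weight, is_independent):
    """elements: ground set X; weight: dict of element weights;
    is_independent: membership oracle of the matroid."""
    S = set()
    # Consider elements in non-increasing weight order.
    for j in sorted(elements, key=lambda e: -weight[e]):
        if is_independent(S | {j}):
            S.add(j)
    return S

# Usage with the forest matroid from the earlier example:
# greedy_max_weight(edges, edge_weights, is_forest) recovers Kruskal's
# algorithm for a maximum-weight spanning forest.
```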

  32. An old fellow • Reconsider the forest sub-graphs matroid: • The edges of the graph are the elements of the matroid. • The independent sets are forests. • Rewriting the greedy algorithm in terms of this matroid yields: Translated Greedy Algorithm • Sort the edges of the graph in non-increasing weight order: e1, e2, …, em. • Start with F = ∅ • For k = 1 to m do: • If F ∪ {ek} does not contain cycles, add ek to F This is Kruskal’s algorithm (for a maximum-weight spanning forest).

  33. Greedy Algorithm - Correctness Notation Sk – the set S after the kth element is considered. S0 – the set S before the first element is considered, i.e. ∅. Lemma 3 For every 0 ≤ k ≤ n, Sk ⊆ Ok for some optimal set Ok, and j1, j2, …, jk ∉ Ok – Sk. Corollary For k = n, Lemma 3 implies: • Sn ⊆ On • j1, j2, …, jn ∉ On – Sn ⇒ On ⊆ Sn Therefore the result of the algorithm (Sn) is the optimal set On. Now it all boils down to proving Lemma 3.

  34. Lemma 3 – Proof Proof Overview • The proof is by induction on k. • Induction base - for k = 0: • ∅ = S0 ⊆ O0 for any optimal set O0 • The condition on O0 – S0 holds vacuously, since no element has been considered yet. Induction step • Prove the lemma for k, assuming it holds for k – 1: • If jk is not inserted into S, set Ok = Ok-1; then: • Sk = Sk-1 ⊆ Ok-1 • j1, j2, …, jk-1 ∉ Ok-1 – Sk-1 • jk ∉ Ok-1 – Sk-1 (because if jk ∈ Ok-1, then Sk-1 ∪ {jk} ⊆ Ok-1 ∈ I, so Sk-1 ∪ {jk} ∈ I, contradicting the fact that jk was rejected) • If jk is inserted into S, then we need to construct a matching Ok.

  35. Lemma 3 – Proof (cont.) Construction of Ok • Initially Ok = Sk • While |Ok| < |Ok-1| do • Find j ∈ Ok-1 – Ok such that Ok ∪ {j} ∈ I • Set Ok ← Ok ∪ {j} Construction Correctness • Line 3 can be implemented because M is a matroid (augmentation). • Ok = Ok-1 ∪ {jk} – {jh} for some jh • Ok is an optimal set: • jh ∈ Ok-1 – Sk-1 ⇒ h > k ⇒ w(jh) ≤ w(jk) ⇒ w(Ok-1) ≤ w(Ok) • However, since we know that Ok-1 is a maximum-weight independent set: w(Ok-1) = w(Ok). • j1, j2, …, jk ∉ Ok – Sk: • Holds for j1, j2, …, jk-1 because Ok – Ok-1 = {jk} • Holds for jk because jk ∈ Sk

  36. Milestone Achievement We showed how to find a fractional solution y such that: • F(y) ≥ (1 – 1/e – o(1)) ∙ OPT • y ∈ P(M) What’s next? • We need to convert y into an integral solution. • The integral solution must be the characteristic vector of an independent set of M. • The conversion should not decrease the value of the solution significantly (in expectation).

  37. Back To Integral Solutions

  38. Rounding in the Submodular Welfare Case SWP to SMSMC Reduction - Reminder • Ground set: X = P × Q • Given a set S ⊆ X, Sj is the set of items allocated to player j. The function f: 2X → ℝ+ is defined as f(S) = Σj wj(Sj). • The matroid M = (X, I) allows only solutions assigning each item to at most one player. Notation • Let yij be the extent to which the jth item is allocated to the ith player in y, i.e. the value of the pair (i, j) in y. • Since y ∈ P(M), we know that Σi yij ≤ 1 for every item j: each item is allocated to an extent ≤ 1.

  39. Rounding Procedure Procedure • For each item j, randomly allocate it to a player; the probability of assigning it to the ith player is yij. • This procedure is guaranteed to generate a valid integral solution, because each item is allocated to at most one player. Value Preservation • Let Ri be the set of items allocated to the ith player. • Notice that Ri has the same distribution as the set of items allocated to the ith player by ỹ. • The expected utility of the ith player is therefore E[wi(Ri)] = E[wi(ỹi)]. • This is also the contribution of the ith player to F(y) = Σi E[wi(ỹi)], so the expected value of the rounded solution equals F(y).
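
The rounding procedure itself is only a few lines of code; a sketch added for illustration:

```python
# Illustration: randomized rounding of a fractional SWP solution.
import random

def round_swp(y, players, items):
    """y[(i, j)] is the fractional extent to which item j is allocated to
    player i (summing to at most 1 over the players for each item).
    Returns a dict mapping each player to the set of items it receives."""
    allocation = {i: set() for i in players}
    for j in items:
        r = random.random()
        acc = 0.0
        for i in players:
            acc += y.get((i, j), 0.0)
            if r < acc:
                allocation[i].add(j)
                break
        # If r falls beyond the total allocated mass, item j stays unassigned.
    return allocation
```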

  40. Rounding in the General Case • The rounding procedure presented crucially depends on: • The structure of the specific matroid. • The linearity of F (in terms of the utility functions). • In the general case we need to use a stronger rounding method: pipage rounding. • Unlike randomized rounding, pipage rounding preserves constraints. • Pipage rounding was first proposed by Ageev and Sviridenko in 2004.

  41. Rank Functions and Tight Sets Definition • Consider a matroid M = (X, I). • The rank function rM: 2X → ℕ induced by M is: rM(A) = max {|S| : S ⊆ A, S ∈ I} • Informally, rM(A) maps each set to the size of a maximum independent set contained in it. Lemma The rank function induced by a matroid M is submodular. Proof • Consider two sets A, B ⊆ X. • Let SA∩B be a maximum independent set in A ∩ B. • Extend SA∩B to maximum independent sets SA and SB in A and B, respectively. • Extend SA to a maximum independent set SA∪B in A ∪ B.

  42. SAB SA SB SAB Rank Functions and Tight Sets (cont.) Proof - Continuation • SAB = SA  SB • |SAB|  |SA  SB|, otherwise: • |SAB – (SA - SAB)| > |SB| • |SAB – (SA - SAB)|  B, otherwise SA is not maximal. • Contradiction, because SB is maximal. • |SAB| + |SAB|  |SA  SB| + |SA  SB| = |SA| + |SB| Observation Given a vector y  P(M), and a set A  X: Why? It holds for every independent set, therefore it must also hold for convex combination of independent sets. Definition Given a vector y  P(M), a set A  X is tight if:

  43. Rank Functions and Tight Sets (cont.) Lemma Let A and B be two tight sets; then: • The intersection A ∩ B is a tight set. • The union A ∪ B is a tight set. Proof • By the submodularity of rM: rM(A ∩ B) + rM(A ∪ B) ≤ rM(A) + rM(B). • It can be easily checked that: Σj∈A∩B yj + Σj∈A∪B yj = Σj∈A yj + Σj∈B yj. • Implying (since A and B are tight): rM(A ∩ B) + rM(A ∪ B) ≤ Σj∈A∩B yj + Σj∈A∪B yj. • Due to the observation, Σj∈A∩B yj ≤ rM(A ∩ B) and Σj∈A∪B yj ≤ rM(A ∪ B), so both inequalities hold with equality and A ∩ B, A ∪ B are tight.

  44. Ready. Set. Algorithm Preliminaries Assumption We assume X is a tight set under y. Otherwise: • y ∈ P(M) is a convex combination of characteristic vectors of independent sets S1, S2, …, Sp. • Replace each set Sℓ with a maximum-cardinality independent set containing it (its size must be rM(X)). • y remains in P(M), and X becomes tight. • Since F is monotone, F(y) does not decrease. Notation • yij(ε) - the vector y with yi increased by ε and yj decreased by ε. • yij+ = yij(εij+(y)), where εij+(y) = max {ε ≥ 0 | yij(ε) ∈ P(M)} • yij- = yij(εij-(y)), where εij-(y) = min {ε ≤ 0 | yij(ε) ∈ P(M)}

  45. The Pipage Rounding Algorithm Input • A matroid M = (X, I) • A vector y ∈ P(M) such that X is tight. Pseudo Code • While y is not integral do • Let A be a minimal tight set containing two elements i, j ∈ A such that yi, yj are fractional. • If F(yij+) ≥ F(yij-) • then y ← yij+ • else y ← yij- • Output y Next steps • Analyzing the algorithm’s performance. • Describing how to implement the algorithm (sketch).
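
A schematic sketch of the main loop, added for illustration; `F`, `find_minimal_tight_set`, `eps_plus` and `eps_minus` are assumed helpers standing in for the sampled estimate of the extension and for the polyhedral subroutines mentioned on the implementation slide.

```python
# Schematic sketch of the pipage rounding loop (illustration only).
def pipage_round(y, F, find_minimal_tight_set, eps_plus, eps_minus):
    def fractional(v):
        return 1e-9 < v < 1 - 1e-9

    while any(fractional(v) for v in y.values()):
        # A minimal tight set containing two fractional coordinates i, j.
        A = find_minimal_tight_set(y)
        i, j = [k for k in A if fractional(y[k])][:2]

        y_plus = dict(y)                  # y_ij(eps+): raise y_i, lower y_j
        e = eps_plus(y, i, j)
        y_plus[i] += e
        y_plus[j] -= e

        y_minus = dict(y)                 # y_ij(eps-): eps <= 0, so y_i drops
        e = eps_minus(y, i, j)
        y_minus[i] += e
        y_minus[j] -= e

        y = y_plus if F(y_plus) >= F(y_minus) else y_minus
    return y
```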

  46. Pipage Rounding is Well Defined Aim We need to show that there is always a valid set A, as long as y is fractional. Observation If a tight set A contains a non-integral value, then it contains at least two fractional values, because Σj∈A yj = rM(A) is an integer. Corollary • All we need to show is that there is a tight set containing a fractional value. • Consider the set X: • X is tight: the sum Σj∈X yj does not change throughout the algorithm. • X contains all items ⇒ it must include a fractional value.

  47. Pipage Rounding is a Rounding Lemma The algorithm converges to an integral solution within O(n2) iterations. Proof • For simplicity, assume that the algorithm chooses a minimal tight set of minimal cardinality. • Let A1, A2, …, An be n sets chosen by the algorithm in n consecutive iterations. • Assume no value of y becomes integral in the iterations corresponding to A1, A2, …, An-1; then: • We will prove |A1| > |A2| > … > |An|. • |An| ≥ 2 • |A1| ≤ n • This is a contradiction: a strictly decreasing chain of n set sizes between n and 2 is impossible. • Therefore at least one additional value of y must become integral after every n – 1 iterations.

  48. Pipage Rounding is a Rounding (cont.) Aim • Consider the iteration of Ai for some 1 ≤ i ≤ n – 1. • We want to show |Ai+1| < |Ai|. Proof • The matroid polytope can be equivalently defined as: P(M) = {y ≥ 0 | Σj∈A yj ≤ rM(A) for every A ⊆ X}. • Since no value of y becomes integral, there must be a set B ⊆ X which becomes tight and prevents us from going further. • B contains exactly one of the two modified coordinates; otherwise Σk∈B yk does not change. • Consider Ai ∩ B: • It is the intersection of two tight sets, therefore it is also tight. • It contains a fractional value. • |Ai+1| ≤ |Ai ∩ B| < |Ai|

  49. Pipage Rounding Preserves the Value Lemma The value of F(y) does not decrease during the rounding. Proof • We need to show that at least one of the following holds: • F(yij+) ≥ F(y) • F(yij-) ≥ F(y) • To do that we only need to show that F is convex along the direction ei – ej (a convex function attains its maximum over an interval at an endpoint). • Let us replace yi by yi + t and yj by yj – t in the definition of F. • By the submodularity of f we get d2F/dt2 = –2 ∙ ∂2F/∂yi∂yj ≥ 0.

  50. Pipage Rounding - Implementation • The set A and the values εij+(y), εij-(y) can be computed in polynomial time using known methods. • The values F(yij+) and F(yij-) can be approximated to an error polynomially small in n by averaging over a polynomial number of samples. • With such estimates, the pipage rounding loses only a negligible factor in each iteration w.h.p. • Using enough samples, the complete pipage rounding loses only a factor of (1 – o(1)) w.h.p., because it makes only O(n2) iterations. Rounding - Summary • Using the pipage rounding algorithm we get a valid integral solution for SMSMC. • The approximation ratio is (1 – 1/e – o(1)).
