Games in Networks: Routing, Network Design, Potential Games, and Equilibria and Inefficiency Éva Tardos Cornell University
Part I • what is a game? • Pure and randomized equilibria • Load balancing and routing as games
Why care about Games? • Users with a multitude of diverse economic interests share a network (the Internet): browsers, routers, servers • Selfishness: parties deviate from their protocol if it is in their interest • Model the resulting issues as games on networks
A simple game: load balancing • Each job wants to be on a lightly loaded machine • With coordination we can arrange the jobs to minimize the load • Example: jobs of sizes 1, 2, 3, 2 on two machines can be arranged with maximum load 4 (e.g., 1+3 on machine 1 and 2+2 on machine 2)
A simple game: load balancing • Each job wants to be on a lightly loaded machine • Without coordination? • Stable arrangement: no job has an incentive to switch • Example: the same jobs can reach a stable arrangement with maximum load 5 (e.g., 2+3 on one machine and 1+2 on the other); see the sketch below
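To make the "stable arrangement" concrete, here is a minimal Python sketch (not from the slides; the job sizes echo the 1, 2, 3, 2 example) of best-response dynamics: a job switches whenever another machine would give it a strictly lower load, and the process stops at a pure Nash equilibrium.

```python
# Minimal sketch (not from the slides): best-response dynamics for the
# load-balancing game. Job sizes 1, 2, 3, 2 and two machines mirror the
# example above; the loop terminates because this is a potential game.
def best_response_dynamics(job_sizes, num_machines):
    assignment = [0] * len(job_sizes)          # start with every job on machine 0
    loads = [0.0] * num_machines
    loads[0] = float(sum(job_sizes))
    improved = True
    while improved:
        improved = False
        for j, size in enumerate(job_sizes):
            current = assignment[j]
            # Load job j would experience on machine m if it moved there.
            def seen(m):
                return loads[m] if m == current else loads[m] + size
            best = min(range(num_machines), key=seen)
            if best != current and seen(best) < seen(current):
                loads[current] -= size
                loads[best] += size
                assignment[j] = best
                improved = True
    return assignment, loads

print(best_response_dynamics([1, 2, 3, 2], 2))   # reaches a stable arrangement
```

Starting from all jobs on one machine, this run ends at loads 5 and 3, the stable but suboptimal arrangement from the slide.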
Games: setup • A set of players (in the example: jobs) • For each player, a set of strategies (which machine to choose) • Game: each player picks a strategy • Each strategy profile (a strategy for each player) gives a payoff to each player (the load on the selected machine) • Nash equilibrium: a stable strategy profile, where no player can improve their payoff by changing strategy
Games: setup • Deterministic (pure) or randomized (mixed) strategies? • Pure: each player selects a single strategy; simple and natural, but a stable solution may not exist • Mixed: each player chooses a probability distribution over strategies; an equilibrium always exists (Nash), but pure strategies often make more sense
Pure versus mixed strategies in load balancing • Two unit jobs on two machines • Pure equilibrium: one job on each machine, load of 1 • A mixed equilibrium: each job picks each machine with probability 50%; expected load of 3/2 for both jobs
Quality of Outcome: Goals of the Game • Personal objective for player i: minimize its load Li, or its expected load E(Li) • Overall objective? • Social welfare: Σi Li, or its expected value E(Σi Li) • Makespan: maxi Li, or the maximum expected load maxi E(Li), or the expected makespan E(maxi Li)
Example: simple load balancing • n identical unit jobs and n machines • All pure equilibria: load of 1 on every machine (also the optimum) • A mixed equilibrium: each job picks each machine with probability 1/n • Expected load seen by a job: E(Li) = 1 + (n-1)/n < 2 for each i • Expected makespan E(maxi Li): balls and bins, Θ(log n / log log n)
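A quick Monte Carlo sketch of the balls-and-bins makespan above (not from the slides; the trial counts and values of n are arbitrary): each of n unit jobs independently picks one of n machines uniformly at random, and we estimate E(maxi Li).

```python
# Sketch: estimate the expected makespan of the fully mixed equilibrium
# (n unit jobs, n machines, uniform random choices).  The slow
# log n / log log n growth is visible already for moderate n.
import random

def expected_makespan(n, trials=2000):
    total = 0
    for _ in range(trials):
        loads = [0] * n
        for _ in range(n):
            loads[random.randrange(n)] += 1
        total += max(loads)
    return total / trials

for n in (10, 100, 1000):
    print(n, expected_makespan(n))
```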
Results on load balancing: theorem for E(maxi Li): • with uniform speeds, the price of anarchy is ≤ log m / log log m • with general speeds, the worst-case price of anarchy is Θ(log m / log log log m) • Proof idea: balls and bins is the worst case? • Sequence of results by [Koutsoupias/Papadimitriou 99], [Mavronicolas/Spirakis 01], [Koutsoupias/Mavronicolas/Spirakis 02], [Czumaj/Vöcking 02]
Today: focus on pure equilibria • Does a pure equilibrium exist? • Does a high-quality equilibrium exist? • Are all equilibria high quality? • Some of the results extend to the sum/max of E(Li)
Load balancing and routing • Load balancing (jobs and machines): delay as a function of load; x units of load cause delay ℓe(x), e.g. ℓe(x) = x • Routing: allow more complex networks, with a delay function ℓe(x) on each edge of a network from s to t
Atomic vs. non-atomic game • Non-atomic game: users control an infinitesimally small amount of flow; equilibrium: all paths carrying flow have minimum total delay • Atomic game: each user controls a unit of flow and selects a single path or machine • Both are congestion games: the cost on edge e depends on the congestion (the number of users on e)
Example of non-atomic flow on two links (delays x and 1) • One unit of flow sent from s to t • Split .5/.5: traffic on the constant-delay edge is envious • An envy-free solution: flow = 1 on the x edge, flow = 0 on the constant edge; no one is better off switching • An infinite number of players will make the analysis cleaner via continuous math
Braess’s Paradox • Original network: two s-t paths, each with one x-delay edge and one constant-delay-1 edge; Nash splits the flow .5/.5 and the cost of the Nash flow = 1.5 • Added edge: a zero-delay edge connecting the two paths • Effect?
Braess’s Paradox (continued) • Original network: cost of the Nash flow = 1.5 • With the added zero-delay edge: all flow uses the path through the new edge, and the cost of the Nash flow = 2 • All the flow has increased delay!
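A small worked check of the Braess example (a sketch; the intermediate node names v and w are introduced here for illustration and are not named on the slides). It evaluates the three s-t path delays for a given flow split and confirms the two equilibrium costs quoted above.

```python
# Sketch of the Braess network: edges s->v and w->t have delay x (their load),
# edges v->t and s->w have constant delay 1, the added edge v->w has delay 0.
def path_delays(f_top, f_zig, f_bottom):
    """Delays of the paths top = s->v->t, zig = s->v->w->t, bottom = s->w->t."""
    load_sv = f_top + f_zig          # load on the s->v edge
    load_wt = f_zig + f_bottom       # load on the w->t edge
    return (load_sv + 1, load_sv + 0 + load_wt, 1 + load_wt)

# Original network (zig path unavailable): .5/.5 split, both used paths cost 1.5.
print(path_delays(0.5, 0.0, 0.5))    # (1.5, 1.0, 1.5) -- the zig path would be cheaper!
# With the added edge, everyone takes the zig path; all three paths now cost 2.
print(path_delays(0.0, 1.0, 0.0))    # (2.0, 2.0, 2.0) -- a Nash with higher delay
```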
Model of Routing Game • A directed graph G = (V,E) • Source–sink pairs si, ti for i = 1,…,k • A rate ri ≥ 0 of traffic between si and ti for each i = 1,…,k • Load-balancing jobs wanted minimum load; here we want minimum delay • Delay adds up along a path • The edge delay is a function ℓe(•) of the load on edge e
Delay Functions • Assume ℓe(x) is continuous and monotone increasing in the load x on the edge • Edges have no capacities for now • Example to model a capacity u: ℓe(x) = a/(u−x), which blows up as the load x approaches u
Goals of the Game • Personal objective: minimize ℓP(f) = sum of the latencies of the edges along path P (with respect to the flow f) • No need for mixed strategies • Overall objective: C(f) = total latency of a flow f = ΣP fP·ℓP(f) = social welfare
Routing Game?? Flow represents • cars on highways • packets on the Internet (individual packets, or many small packets in a continuous model) • User goal: find a path that selfishly minimizes the user's delay • True for cars • Packets? Users do not choose paths on the Internet: routers do! • But with delay as the primary metric, router protocols choose shortest (minimum-delay) paths!
Connecting Nash and Opt • Min-latency flow (one s-t pair for simplicity): minimize C(f) = Σe fe·ℓe(fe) subject to f being an s-t flow carrying r units • Here the cost is summed over edges rather than paths, where fe = amount of flow on edge e
Characterizing the Optimal Flow • Optimality condition: all flow travels along minimum-gradient paths • The gradient (marginal cost) is (x·ℓ(x))' = ℓ(x) + x·ℓ'(x)
Characterizing the Optimal Flow (continued) • Optimality condition: all flow travels along minimum-gradient paths, where the gradient is (x·ℓ(x))' = ℓ(x) + x·ℓ'(x) • Recall: a flow f is at Nash equilibrium iff all flow travels along minimum-latency paths
Nash and Min-Cost • Corollary 1: the min-cost flow is a “Nash” flow with respect to the delay ℓ(x) + x·ℓ'(x) • Corollary 2: the Nash flow is a “min-cost” flow with respect to the cost Ф(f) = Σe ∫0^fe ℓe(x) dx • Why? The gradient of ∫0^fe ℓe(x) dx is ℓe(fe)
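To illustrate Corollary 2, here is a sketch on the standard two-link (Pigou-style) instance with delays x and 1 and one unit of flow (an assumed example, not taken from the slides): minimizing the potential Ф recovers the Nash flow, while minimizing the cost C recovers the optimum.

```python
# Sketch: two parallel links with delays l1(x) = x and l2(x) = 1, one unit of
# flow; x below is the amount of flow on the x-delay link.
def cost(x):       # C(f) = sum_e f_e * l_e(f_e)
    return x * x + (1 - x) * 1

def potential(x):  # Phi(f) = sum_e integral_0^{f_e} l_e(t) dt
    return x * x / 2 + (1 - x) * 1

grid = [i / 10000 for i in range(10001)]
x_nash = min(grid, key=potential)   # -> 1.0 (all flow on the x link)
x_opt = min(grid, key=cost)         # -> 0.5
print("Nash:", x_nash, "cost", cost(x_nash))   # cost 1
print("Opt: ", x_opt,  "cost", cost(x_opt))    # cost 0.75  (ratio 4/3)
```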
Using the function Ф • Nash is the solution minimizing Ф • Theorem (Beckmann '56): in a network with latency functions ℓe(x) that are monotone increasing and continuous, a deterministic (pure) Nash equilibrium exists, and is essentially unique
Using the function Ф (cont'd) • Nash is the solution minimizing the value of Ф, hence Ф(Nash) ≤ Ф(OPT) • Suppose we also know that for any solution Ф ≤ cost ≤ A·Ф • Then cost(Nash) ≤ A·Ф(Nash) ≤ A·Ф(OPT) ≤ A·cost(OPT) • There exists a good Nash!
Example: Ф ≤ cost ≤ A·Ф • Example: ℓe(x) = x, then the total delay on the edge is x·ℓe(x) = x² and the potential is ∫0^x ℓe(ξ) dξ = x²/2 • More generally, for linear delay ℓe(x) = ae·x + be: delay on the edge x·ℓe(x) = ae·x² + be·x, potential on the edge ∫0^x ℓe(ξ) dξ = ae·x²/2 + be·x, so the ratio is at most 2 • Degree-d polynomials: ratio at most d+1
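A numeric sanity check of the per-edge bound cost ≤ (d+1)·potential for polynomial delays with nonnegative coefficients (a sketch; the coefficients and sample loads are made up).

```python
# Sketch: compare x*l(x) with (d+1) * integral_0^x l(t) dt for a polynomial l.
def edge_cost(coeffs, x):        # x * l(x), with l(x) = sum_k coeffs[k] * x^k
    return x * sum(a * x**k for k, a in enumerate(coeffs))

def edge_potential(coeffs, x):   # integral_0^x l(t) dt
    return sum(a * x**(k + 1) / (k + 1) for k, a in enumerate(coeffs))

coeffs = [1.0, 2.0, 0.0, 3.0]    # l(x) = 1 + 2x + 3x^3, degree d = 3
d = len(coeffs) - 1
for x in (0.1, 1.0, 5.0):
    c, p = edge_cost(coeffs, x), edge_potential(coeffs, x)
    print(x, c, p, c <= (d + 1) * p)   # the bound holds for every load x >= 0
```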
Sharper results for non-atomic games • Theorem 1 (Roughgarden-Tardos '00): in a network with linear latency functions, i.e., of the form ℓe(x) = ae·x + be, the cost of a Nash flow is at most 4/3 times that of the minimum-latency flow
Sharper results for non-atomic games: examples for Theorem 1 • Two-link network with delays x and 1, rate r = 1: Nash cost 1, optimum 3/4 • Braess network with the added edge: Nash cost 2, optimum 1.5 • Both examples attain the 4/3 bound
Braess paradox in springs (aside) Cutting middle string makes the weight rise and decreases power flow along springs Flow=power; delay=distance
Bounds for the spring paradox • Theorem 1' (Roughgarden-Tardos '00): in a network of springs and strings, cutting some strings can increase the height by at most a factor of 4/3
General Latency Functions • Question: what about more general edge latency functions? • Bad example (r = 1, d large): two links with delays x^d and 1; Nash sends all flow on the x^d link (cost 1), while the optimum sends 1−ε on it and ε on the constant link • A Nash flow can cost arbitrarily more than the optimal (min-cost) flow
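A short sketch of how the price of anarchy on this two-link x^d example grows with d (the grid resolution is arbitrary): the Nash cost is always 1, while the optimal cost tends to 0.

```python
# Sketch: price of anarchy on the x^d two-link example grows without bound.
# Nash routes all flow on the x^d link, so its cost is 1; the optimum splits.
def opt_cost(d, steps=100000):
    return min((i / steps) ** (d + 1) + (1 - i / steps)   # x*l(x) + (1-x)*1
               for i in range(steps + 1))

for d in (1, 2, 10, 100):
    print("d =", d, "  price of anarchy >=", 1.0 / opt_cost(d))
```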
Sharper results for non-atomic games • Theorem 2 (Roughgarden '02): in any network with any class of convex continuous latency functions, the worst price of anarchy is always attained on a two-edge network • Corollary: the price of anarchy for degree-d polynomials is O(d/log d)
Another proof idea: modify the network • For each edge e with Nash flow fe, add a new parallel edge with fixed delay ℓe(fe) • The Nash flow is not affected • The optimum can only improve
Modified network • The new parallel edge has fixed cost ℓe(fe) • The optimum on the modified network splits the flow between the original edge and the new fixed-delay edge so that marginal costs are equalized • The common marginal cost equals ℓe(fe)
Proof of the better bound • Theorem 2: the worst price of anarchy is always attained on a two-edge network • Proof: the price of anarchy on G is the mediant (ratio of sums) of the per-edge ratios, hence at most the worst per-edge ratio, which is the price of anarchy of a two-edge network
More results for non-atomic games • Theorem 3 (Roughgarden-Tardos '00): in any network with continuous, nondecreasing latency functions, the cost of Nash with rates ri for all i is at most the cost of opt with rates 2ri for all i • Proof …
Proof of the bicriteria bound • In the modified network, the common marginal cost on the two copies of edge e in opt equals ℓe(fe) • Opt may cost very little, but its marginal cost is as high as the latency in Nash • Augmenting opt to double the rate therefore costs at least as much as Nash
More results for non-atomic games (continued) • Theorem 3 (Roughgarden-Tardos '00): in any network with continuous, nondecreasing latency functions, the cost of Nash with rates ri is at most the cost of opt with rates 2ri • Moral for the Internet: build for double the flow rate
Moral for IP versus ATM? • Corollary: with M/M/1 delay functions ℓ(x) = 1/(u−x), where u = capacity, the cost of Nash with capacity 2u is at most the cost of opt with capacity u • Doubling capacity is more effective than optimized routing (IP versus ATM)
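An illustrative numeric check of this corollary (a sketch; the two-link instance and the capacity values are made up): with M/M/1 delays, the Nash cost after doubling both capacities stays below the optimal cost at the original capacities.

```python
# Sketch: two parallel links with M/M/1 delays 1/(u1-x) and 1/(u2-x), rate r.
def total_cost(x1, r, u1, u2):                 # C = sum_e x_e * l_e(x_e)
    x2 = r - x1
    if x1 >= u1 or x2 >= u2:
        return float("inf")
    return x1 / (u1 - x1) + x2 / (u2 - x2)

def nash_cost(r, u1, u2, steps=100000, tol=1e-4):
    for i in range(steps + 1):
        x1, x2 = r * i / steps, r - r * i / steps
        if x1 >= u1 or x2 >= u2:
            continue
        d1, d2 = 1 / (u1 - x1), 1 / (u2 - x2)
        # Equilibrium condition: no flow can switch to a strictly faster link.
        if (x1 == 0 or d1 <= d2 + tol) and (x2 == 0 or d2 <= d1 + tol):
            return total_cost(x1, r, u1, u2)
    return None

def opt_cost(r, u1, u2, steps=100000):
    return min(total_cost(r * i / steps, r, u1, u2) for i in range(steps + 1))

r, u1, u2 = 1.0, 1.5, 1.2
print("Nash cost, capacities doubled  :", nash_cost(r, 2 * u1, 2 * u2))
print("Optimal cost, original capacities:", opt_cost(r, u1, u2))
```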
Part II • Discrete potential games: network design • Price of anarchy and price of stability
Continuous Potential Games • Continuous potential game: there is a function Ф(f) so that the Nash equilibria are exactly the local minima of Ф (also known as Walrasian equilibrium) • If Ф is convex, then the Nash equilibria are exactly the minima • For example, Ф(f) = Σe ∫0^fe ℓe(x) dx
Discrete Analog: Atomic Game • Each user controls one unit of flow and selects a single path • Theorem [Rosenthal '73, Monderer-Shapley '96]: the change in the potential Ф(f) = Σe (ℓe(1)+…+ℓe(fe)) equals the cost change perceived by the moving user, even though the moving player ignores all other users
Potential: Tracking Happiness • Ф(f) = Σe (ℓe(1)+…+ℓe(fe)) • Consider a player moving from edge e to edge e' • Potential before the move (terms for e and e'): ℓe(1)+…+ℓe(fe−1) + ℓe(fe) + ℓe'(1)+…+ℓe'(fe')
Potential: Tracking Happiness (continued) • Potential after the move: ℓe(1)+…+ℓe(fe−1) + ℓe'(1)+…+ℓe'(fe') + ℓe'(fe'+1) • The change in Ф is −ℓe(fe) + ℓe'(fe'+1), the same as the cost change for the moving player
What are Potential Games? • Discrete potential game: there is a function Ф(f) so that the change in potential equals the cost change perceived by one user • Theorem [Monderer-Shapley '96]: a game is a discrete potential game if and only if it is a congestion game (the cost of using an element depends on the number of users) • Proof of the “if” direction: Ф(f) = Σe (ℓe(1)+…+ℓe(fe)) • Corollary: Nash equilibria are the local minima of Ф(f)
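A toy sketch of the "if" direction (the three-player instance, resources, and delay functions below are made up for illustration): best-response dynamics on an atomic congestion game, asserting at every improving move that the drop in the Rosenthal potential Ф(f) = Σe (ℓe(1)+…+ℓe(fe)) equals the mover's cost drop, so the dynamics terminates at a pure Nash equilibrium.

```python
# Toy atomic congestion game: resources a, b, c with delays l_a(k)=k,
# l_b(k)=2k, l_c(k)=k+1; every player may pick any of the strategy sets below.
delays = {"a": lambda k: k, "b": lambda k: 2 * k, "c": lambda k: k + 1}
strategies = [{"a"}, {"b"}, {"a", "c"}, {"b", "c"}]
choice = [0, 0, 0]                         # three players, all start on {"a"}

def loads(choice):
    counts = dict.fromkeys(delays, 0)
    for s in choice:
        for e in strategies[s]:
            counts[e] += 1
    return counts

def player_cost(i, choice):
    counts = loads(choice)
    return sum(delays[e](counts[e]) for e in strategies[choice[i]])

def potential(choice):                     # Rosenthal: sum_e sum_{k<=f_e} l_e(k)
    counts = loads(choice)
    return sum(sum(delays[e](k) for k in range(1, counts[e] + 1)) for e in delays)

improving = True
while improving:
    improving = False
    for i in range(len(choice)):
        for s in range(len(strategies)):
            trial = choice[:]
            trial[i] = s
            if player_cost(i, trial) < player_cost(i, choice):
                # Potential drop equals the mover's cost drop (the "if" direction).
                assert (potential(choice) - potential(trial)
                        == player_cost(i, choice) - player_cost(i, trial))
                choice, improving = trial, True
print("pure Nash reached:", [sorted(strategies[s]) for s in choice])
```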
Why care about the best Nash/Opt ratio? • Koutsoupias-Papadimitriou '99: Nash = outcome of selfish behavior; the worst Nash/Opt ratio is the Price of Anarchy • Non-atomic game: the Nash equilibrium is (essentially) unique… • Atomic game: Nash is not unique!
Best Nash is good quality… • Price of Stability = (cost of best selfish outcome) / (“socially optimum” cost) • Price of Anarchy = (cost of worst selfish outcome) / (“socially optimum” cost) • Potential argument gives a low price of stability • But do we care?
Atomic Game: Routing with Delay • Theorems 1 & 2 hold for the Nash equilibrium minimizing the potential function, assuming all players carry the same amount of flow • The worst case is again on a two-edge network