
Delay Reduction via Lagrange Multipliers in Stochastic Network Optimization


Presentation Transcript


  1. Delay Reduction via Lagrange Multipliers in Stochastic Network Optimization Longbo Huang Michael J. Neely EE@USC WiOpt 2009 *Sponsored in part by NSF Career CCF and DARPA IT-MANET Program

  2. Outline
  • Problem formulation
  • Backlog behavior under the Quadratic Lyapunov function based Algorithm (QLA): an example
  • General backlog behavior of QLA for general stochastic network optimization (SNO) problems
  • The Fast-QLA algorithm (FQLA)
  • Simulation results
  • Summary

  3. Problem Description: A Network of r Queues
  • Slotted time, t = 0, 1, 2, …
  • S(t) = network state, time-varying, i.i.d. over slots (e.g. channel conditions, random arrivals, etc.)
  • x(t) = control action, chosen in some abstract set X(S(t)) (e.g. power/bandwidth allocation, routing)
  • Each slot, the pair (S(t), x(t)):
  costs: f(t) = f(S(t), x(t)),
  generates: Aj(t) = gj(S(t), x(t)) packets to queue j,
  serves: μj(t) = bj(S(t), x(t)) packets in queue j.
  [f(), g(), b() are only assumed to be non-negative, continuous, and bounded.]
  The stochastic problem: minimize the time average cost subject to queue stability.
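
In symbols, one standard way to write this stochastic problem is the following; the max-plus queueing dynamics below are the usual ones for this type of model and are an assumption, since the slide states the problem only in words:

```latex
\begin{align*}
\text{minimize:}\quad & f_{av} \;=\; \limsup_{T\to\infty}\ \frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big[f(S(t),x(t))\big] \\
\text{subject to:}\quad & \text{every queue } j \text{ is stable, where} \\
& U_j(t+1) \;=\; \max\big[\,U_j(t)-b_j(S(t),x(t)),\,0\,\big] + g_j(S(t),x(t)), \\
& x(t)\in \mathcal{X}(S(t)) \quad \text{for all } t .
\end{align*}
```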

  4. Problem Description: A Network of r Queues (continued)
  Same setup as slide 3. For this class of problems, QLA achieves [G-N-T FnT 06]:
  Avg. cost: f_av <= f*_av + O(1/V)
  Avg. backlog: U_av <= O(V)
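
The per-slot decision behind these guarantees is the drift-plus-penalty (QLA) rule: greedily minimize V times the cost plus the backlog-weighted net arrivals. Here is a minimal Python sketch, assuming a finite action set and callable f, g, b with the signatures shown (these interface details are mine, not the paper's notation):

```python
def qla_action(S_t, actions, U, V, f, g, b):
    """One slot of QLA (drift-plus-penalty).

    Observe the network state S_t and the current backlogs U[0..r-1], then
    pick the feasible action x that minimizes
        V * f(S_t, x) + sum_j U[j] * (arrivals_j - service_j).
    """
    def dpp_weight(x):
        penalty = V * f(S_t, x)
        drift = sum(U[j] * (g(S_t, x, j) - b(S_t, x, j)) for j in range(len(U)))
        return penalty + drift

    # assumes X(S_t) can be enumerated; otherwise this min becomes an
    # optimization subproblem (e.g. a max-weight scheduling problem)
    return min(actions, key=dpp_weight)
```

Larger V weights the cost term more heavily, which is exactly the O(1/V) cost versus O(V) backlog trade-off quoted above.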

  5. An Energy Minimization Example: The QLA Algorithm
  [Figure: a line network of 5 queues U1, …, U5; exogenous arrivals R(t) enter U1, and link i -> i+1 has channel state Si(t) and transmission rate μi(t).]
  Goal: allocate power to support the flow with minimum average energy expenditure, i.e.:
  Min: avg. Σi Pi  s.t. queue stability.
  The QLA algorithm (built on Backpressure):
  1. Compute the differential backlog Wi,i+1(t) = max[Ui(t) - Ui+1(t), 0].
  2. Choose (P1(t), …, P5(t)) to maximize: Σi [Wi,i+1(t) μi(Pi(t)) - V Pi(t)] = Σi [Wi,i+1(t) Si(t) - V] Pi(t).
  E.g., if S2(t) = 2, then whenever W23(t)*2 > V we set P2(t) = 1.
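
A small Python sketch of this rule for the 5-queue example, assuming the simple on/off power model suggested by the slide (Pi in {0, 1} and μi = Si * Pi); the function name and data layout are my own:

```python
def qla_power_allocation(U, S, V):
    """Per-slot QLA decision for the 5-queue line network sketch.
    U: current backlogs [U1..U5], S: channel states [S1..S5], V: control parameter.
    Assumed on/off model: P_i in {0, 1} and mu_i = S_i * P_i.
    Link i -> i+1 transmits iff its term (W_i * S_i - V) is positive."""
    backlog = list(U) + [0]                      # treat the sink as backlog 0
    P = []
    for i in range(5):
        W = max(backlog[i] - backlog[i + 1], 0)  # differential backlog W_{i,i+1}
        P.append(1 if W * S[i] > V else 0)       # maximize (W*S_i - V)*P_i
    return P

# e.g. V = 100, W23(t) = 60, S2(t) = 2: 60*2 > 100, so link 2->3 transmits
print(qla_power_allocation([300, 260, 200, 150, 80], [1, 2, 1, 2, 1], 100))
# -> [0, 1, 0, 1, 0]
```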

  6. An Energy Minimization Example: Backlog under QLA
  Goal: Min: avg. Σi Pi  s.t. queue stability.
  [Figure: snapshot of the queue sizes U1, …, U5 vs. time under QLA with V = 100, first 100 slots.]

  7. An Energy Minimization Example: Backlog under QLA
  Goal: Min: avg. Σi Pi  s.t. queue stability.
  [Figure: snapshot of the queue sizes U1, …, U5 vs. time under QLA with V = 100, first 500 slots.]

  8. An Energy Minimization Example: Backlog under QLA
  Goal: Min: avg. Σi Pi  s.t. queue stability.
  [Figure: snapshot of the queue sizes U1, …, U5 vs. time under QLA with V = 100, first 1000 slots.]

  9. An Energy Minimization Example: Backlog under QLA
  Goal: Min: avg. Σi Pi  s.t. queue stability.
  [Figure: snapshot of the queue sizes U1, …, U5 vs. time under QLA with V = 100, first 5000 slots.]

  10. An Energy Minimization Example: Backlog under QLA
  Goal: Min: avg. Σi Pi  s.t. queue stability.
  [Figure: scatter plot of (U1(t), U2(t)) under QLA with V = 100, over t = 1:500k; the points concentrate around (500, 400).]

  11. An Energy Minimization Example: Backlog under QLA
  Goal: Min: avg. Σi Pi  s.t. queue stability.
  [Figure: the same (U1(t), U2(t)) scatter plot, showing t = 1:500k and t = 5k:500k; once the initial transient is excluded, the points stay tightly clustered near (500, 400).]

  12. General Result: Backlog under QLA
  Theorem 1: If the dual function q(U) satisfies condition C1 for some constant L > 0 independent of V, then under QLA, in steady state, U(t) is mostly within O(log(V)) distance of UV* = Θ(V).
  Implications: (1) Delay under QLA is Θ(V), not just O(V); (2) the network effectively stores a backlog vector ≈ UV*.

  13. General Result: Backlog under QLA (continued)
  Since U(t) concentrates around UV* (Theorem 1), let's "subtract out" UV* from the network: replace most of the UV* worth of data with place-holder bits.
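
To make the "attractor" behavior concrete, here is a toy single-queue power-minimization experiment (my own illustration, not the 5-queue example from the slides; the arrival rate, channel model, and all numbers are made up): under the drift-plus-penalty rule the backlog settles near a Θ(V) value (here roughly V/2) instead of wandering over the whole O(V) range.

```python
import random

def toy_qla(V, T=200_000, lam=0.3, seed=1):
    """Toy single-queue QLA: minimize average power subject to stability.
    Each slot: one packet arrives w.p. lam; channel S(t) is uniform in {1, 2};
    transmitting (p = 1) costs 1 unit of power and serves S(t) packets.
    Drift-plus-penalty picks p minimizing V*p - U(t)*S(t)*p,
    i.e. transmit iff U(t)*S(t) > V."""
    rng = random.Random(seed)
    U, energy, backlog_sum = 0, 0, 0
    for _ in range(T):
        S = rng.choice((1, 2))
        p = 1 if U * S > V else 0          # drift-plus-penalty decision
        A = 1 if rng.random() < lam else 0
        U = max(U - p * S, 0) + A          # queue update
        energy += p
        backlog_sum += U
    return energy / T, backlog_sum / T

for V in (50, 100, 200):
    power, backlog = toy_qla(V)
    print(f"V={V}: avg power ~ {power:.3f}, avg backlog ~ {backlog:.1f} (V/2 = {V/2})")
```

The intuition: the cheap transmission opportunity (S = 2) is only used once U exceeds V/2, so the queue hovers just around that threshold, which plays the role of the Lagrange-multiplier attractor UV* in this toy setting.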

  14. Fast-QLA (FQLA): Using place-holder bits. A single-queue example.
  First idea: (1) choose the number of place-holder bits Q such that if U(t0) >= Q, then U(t) >= Q for all t >= t0; (2) set U(0) = Q and run QLA.

  15. Fast-QLA (FQLA): Using place-holder bits. A single-queue example.
  [Figure: the actual backlog trajectory when starting from U(0) = Q.]
  Advantage of the first idea: the delay seen by real data is reduced by Q, with the same utility performance.

  16. Fast-QLA (FQLA): Using place-holder bits. A single-queue example.
  Problem with the first idea: to guarantee U(t) never drops below Q, we can only take Q ≈ UV* - Θ(V), so the actual backlog is still ≈ Θ(V) and the delay remains Θ(V).

  17. Fast-QLA (FQLA): Using place-holder bits. A single-queue example.
  FQLA idea: choose the number of place-holder bits Q such that the backlog under QLA rarely goes below Q.
  Problems: (1) U(t) will eventually get below Q; what do we do then? (2) How do we ensure the utility performance?

  18. Fast-QLA (FQLA): Using place-holder bits. A single-queue example (continued).
  Answer to both problems: use a virtual backlog process W(t) together with careful packet dropping.

  19. Fast-QLA (FQLA): Using place-holder bits. A single-queue example.
  FQLA: (1) Choose the number of place-holder bits Q such that the backlog under QLA rarely goes below Q. (2) Use a virtual backlog process W(t) with W(0) = Q to track the backlog that QLA would have generated. (3) Obtain the action by running QLA based on W(t), then modify the action carefully.

  20. Fast-QLA (FQLA): Modifying the action. A single-queue example.
  If W(t) >= Q: act exactly as QLA does, admitting A(t) and serving μ(t); i.e., FQLA = QLA.
  If W(t) < Q: serve μ(t), but only admit A'(t) = max[A(t) - (Q - W(t)), 0].
  This modification ensures U(t) ≈ max[W(t) - Q, 0].

  26. Fast-QLA (FQLA): Using place-holder bits. A single-queue example (continued).
  Now choose Q = max[UV* - log^2(V), 0]. This (1) ensures low delay: average U ≈ log^2(V); and (2) ensures W(t) rarely falls below Q, which implies good utility and few packets dropped (very few action modifications). A sketch of the resulting single-queue step is given below.
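
A minimal Python sketch of this single-queue FQLA step, assuming the per-slot admission A and offered service mu have already been decided by the underlying QLA run on W(t); the variable names and exact bookkeeping are mine, not the paper's:

```python
import math

def fqla_step(W, U, Q, A, mu):
    """One slot of the single-queue FQLA sketch.
    W: virtual backlog (evolves exactly as QLA would drive it, W(0) = Q),
    U: actual backlog, Q: number of place-holder bits,
    A: arrivals the QLA action would admit this slot, mu: offered service.
    If W >= Q, admit everything (FQLA = QLA); otherwise only admit
    A' = max(A - (Q - W), 0) and drop the rest, so that U tracks max(W - Q, 0)."""
    admitted = A if W >= Q else max(A - (Q - W), 0)
    dropped = A - admitted
    W_next = max(W - mu, 0) + A          # virtual queue follows QLA unmodified
    U_next = max(U - mu, 0) + admitted   # actual queue holds only admitted data
    return W_next, U_next, dropped

def placeholder_bits(U_star_V, V):
    """Q = max[UV* - log^2(V), 0], as chosen on this slide (natural log).
    E.g. V = 1000, UV* = 5V: Q = 5000 - ln(1000)^2 ≈ 4952, matching slide 29."""
    return max(U_star_V - math.log(V) ** 2, 0.0)
```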

  27. Fast-QLA (FQLA): Performance
  Theorem 2: If condition C1 in Theorem 1 holds, then under FQLA-Ideal the average cost is within O(1/V) of f*_av, the average backlog is only O(log^2(V)), and only a tiny fraction of packets is dropped.
  Recall that under QLA the average cost is also within O(1/V) of f*_av, but the average backlog is O(V).

  28. Simulation
  Simulation parameters: V = 50, 100, 200, 500, 1000, 2000; each run lasts 5x10^6 slots; UV* = (5V, 4V, 3V, 2V, V)^T.
  [Figures: average backlog and percentage of packets dropped vs. V, for the 5-queue line network.]

  29. Simulation
  Simulation parameters: V = 50, 100, 200, 500, 1000, 2000; each run lasts 5x10^6 slots; UV* = (5V, 4V, 3V, 2V, V)^T.
  [Figure: sample (W1(t), W2(t)) process for V = 1000, t = 10000:110000. Note that W1(t) > Q1 = 4952 and W2(t) > Q2 = 3952 over this window.]

  30. Simulation
  Quick comparison at V = 1000: total average backlog under QLA ≈ 15V = 15000 (since Σi UV*,i = 15V), while under FQLA it is ≈ 5 log^2(V) ≈ 250, about 60 times better.

  31. Summary
  • Under QLA, the backlog vector usually stays close to an "attractor": the optimal Lagrange multiplier UV*.
  • FQLA "subtracts out" this Lagrange multiplier from the QLA-induced backlog by using place-holder bits, thereby reducing delay.

  32. Summary (continued)
  Notes: (1) Theorem 1 also holds when S(t) is Markovian; (2) FQLA-General handles the case where UV* is not known, with performance similar to FQLA-Ideal; (3) when q0(U) is "smooth", we prove an O(sqrt(V)) deviation bound; (4) the Lagrange multiplier also plays a "network gravity" role. For details, see arXiv report 0904.3795.

  33. Thank you! Questions or comments?
