
Congestion Control Algorithms of TCP in Emerging Networks




  1. Congestion Control Algorithms of TCP in Emerging Networks Ph.D. Defense by Sumitha Bhandarkar Under the Guidance of Dr. A. L. Narasimha Reddy May 04, 2006

  2. Motivation Why TCP Congestion Control? • Designed in the early ’80s • Still the most predominant protocol on the net • Continuously evolves • IETF developing an RFC to keep track of TCP changes! • Has “issues” in emerging networks • Identify problems • Propose solutions

  3. Outline • Robustness to non-congestion events • TCP-DCR • Efficiency in high-speed networks • LTCP with Fixed β • LTCP with Variable β • Proactive congestion avoidance • PERT • Interactions with each other

  4. Where We are ... • Robustness to non-congestion events • TCP-DCR • Efficiency in high-speed networks • LTCP with Fixed β • LTCP with Variable β • Proactive congestion avoidance • PERT • Interactions with each other

  5. Robustness to Non-Congestion Events : An Overview • TCP behavior : If three dupacks • retransmit the packet • reduce cwnd by half. • Problem : Not all 3-dupack events are due to congestion • channel errors in wireless networks • reordering etc. • Result : Sub-optimal performance
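The standard behavior described on this slide can be sketched as follows (a simplified model, not the actual ns-2 or kernel code; the class and names are illustrative):

```python
# Standard TCP loss inference: three duplicate ACKs trigger fast
# retransmit and a window halving, regardless of whether the dupacks
# were caused by congestion, reordering, or wireless channel errors.
DUPACK_THRESHOLD = 3

class TcpSender:
    def __init__(self, cwnd=10.0):
        self.cwnd = cwnd
        self.dupacks = 0

    def on_ack(self, is_duplicate):
        """Return the action taken; 'retransmit' also halves cwnd."""
        if not is_duplicate:
            self.dupacks = 0
            return "new_ack"
        self.dupacks += 1
        if self.dupacks == DUPACK_THRESHOLD:
            self.cwnd = max(self.cwnd / 2, 1.0)  # multiplicative decrease
            return "retransmit"
        return "dupack"

# Three dupacks caused by mere reordering still halve the window:
s = TcpSender(cwnd=64.0)
for _ in range(3):
    action = s.on_ack(is_duplicate=True)
assert action == "retransmit" and s.cwnd == 32.0
```

This is exactly the sub-optimal case the slide points at: the sender cannot tell a reordering-induced dupack burst from a congestion-induced one.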

  6. Robustness to Non-Congestion Events : An Overview • TCP-DCR : The Proposed Solution • Delay the time to infer congestion by τ • Essentially a tradeoff between wrongly inferring congestion and promptness of response to congestion • τ chosen to be one RTT to allow maximum time while avoiding an RTO
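A minimal sketch of the TCP-DCR idea, assuming a simplified event-driven sender (the structure and method names are illustrative, not the actual implementation):

```python
# TCP-DCR: on the third dupack, defer the congestion response by tau
# (one RTT). If a cumulative ACK arrives within tau (the packet was
# only reordered, or a link-layer retransmission repaired it), the
# pending response is cancelled and cwnd is untouched.
class DcrSender:
    def __init__(self, cwnd, srtt):
        self.cwnd = cwnd
        self.tau = srtt            # delay congestion inference by one RTT
        self.pending_until = None  # time at which we will act, if any

    def on_third_dupack(self, now):
        self.pending_until = now + self.tau   # defer, don't react yet

    def on_cumulative_ack(self):
        self.pending_until = None             # hole filled: false alarm

    def on_timer(self, now):
        if self.pending_until is not None and now >= self.pending_until:
            self.cwnd /= 2                    # congestion confirmed
            self.pending_until = None
            return "retransmit"
        return None

s = DcrSender(cwnd=64.0, srtt=0.1)
s.on_third_dupack(now=1.0)
s.on_cumulative_ack()          # packet was only reordered
assert s.on_timer(now=1.2) is None and s.cwnd == 64.0
```

Choosing τ = one RTT is the largest deferral that still acts before the retransmission timer would fire, which is the tradeoff the slide describes.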

  7. Robustness to Non-Congestion Events : An Overview • Extensive Evaluation • Analyses • NS-2 Simulation/Linux Emulation • only packet reordering, channel errors • only congestion • both packet reordering and congestion • Evaluation at multiple levels • Flow level (Throughput, relative fairness, response to dynamic changes in traffic etc.) • Protocol level (Packet delivery time, RTT estimates etc.) • Network level (Bottleneck link droprate, queue length etc.)

  8. Robustness to Non-Congestion Events : An Overview • Results • Significant improvement in link utilization in the presence of non-congestion events. • Minimal impact in the absence

  9. Robustness to Non-Congestion Events : An Overview • Publications • Peer-Reviewed Journal Sumitha Bhandarkar, Nauzad Sadry, A. L. Narasimha Reddy and Nitin Vaidya, “TCP-DCR: A Novel Protocol for Tolerating Wireless Channel Errors”, accepted for publication in IEEE Transactions on Mobile Computing (Vol. 4, No. 5), September/October 2005 • Peer-Reviewed Conference Sumitha Bhandarkar and A. L. Narasimha Reddy, "TCP-DCR: Making TCP Robust to Non-Congestion Events", Proceedings of Networking 2004, May 2004. • Standardization Sumitha Bhandarkar, A. L. Narasimha Reddy, Mark Allman and Ethan Blanton, "Improving the robustness of TCP to Non-Congestion Events", IETF Draft, work in progress, September 2005, http://tools.ietf.org/html/draft-ietf-tcpm-tcp-dcr-07.txt. Current Status: In RFC-Editor queue, waiting for publication

  10. Where We are ... • Robustness to non-congestion events • TCP-DCR • Efficiency in high-speed networks • LTCP with Fixed β • LTCP with Variable β • Proactive congestion avoidance • PERT • Interactions with each other

  11. TCP in High-speed Networks Motivation • Historically, high-speed links present only at the core • High levels of multiplexing (low per-flow rates) • Now, high-speed links are available for transfer between two endpoints • E.g., SLAC@Stanford has 500Mbps+ connections to Chicago, Switzerland etc.* • Low levels of multiplexing (high per-flow rates) *Source : http://www-iepm.slac.stanford.edu/bw/tcp-eval/

  12. TCP in High-speed Networks Motivation (Cont.) • TCP’s one-per-RTT increase does not scale well • Multiplicative decrease by 0.5 too aggressive • For RTT = 100 ms, packet size = 1500 bytes: recovering the window after a single loss takes on the order of 10 min to 1.5 hours, depending on link speed* *Source : RFC 3649
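The order of magnitude on this slide can be checked with a back-of-the-envelope calculation, assuming recovery proceeds at one packet per RTT after the window is halved (RFC 3649's accounting differs slightly, so the figures are approximate):

```python
# Window needed to fill the link, and time to regain the lost half of
# it at one packet per RTT, for the slide's scenario (100 ms RTT,
# 1500-byte packets).
rtt, pkt_bits = 0.100, 1500 * 8        # seconds, bits per packet
for gbps in (1, 10):
    w = gbps * 1e9 * rtt / pkt_bits    # window (packets) to fill the link
    recovery_s = (w / 2) * rtt         # one packet per RTT after halving
    print(f"{gbps} Gbps: W = {w:.0f} pkts, recovery ~ {recovery_s / 60:.0f} min")
```

At 1 Gbps the window is roughly 8,300 packets and recovery takes minutes; at 10 Gbps it is over an hour, which is why a one-per-RTT increase does not scale.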

  13. TCP in High-speed Networks • Layered TCP (LTCP) : The Proposed Solution • Congestion control emulates “virtual flows” • Apply the concept of “layering” • When losses not observed for a period of time, increase layers • At higher layers, emulate larger number of virtual flows • Dynamic in nature, varies based on network conditions

  14. TCP in High-speed Networks LTCP Concepts • Layering • Start layering when window > WT • Associate each layer with a step size K • When the window has increased by K since the previous addition of a layer, increment the number of layers • At layer K, increase the window by K packets per RTT • Number of layers determined dynamically based on current network conditions

  15. TCP in High-speed Networks LTCP Concepts (Cont.) [Diagram: layers K−1, K, K+1, each with a minimum window WK−1, WK, WK+1 and step sizes dK−1, dK between successive layers; number of layers = K when WK ≤ W < WK+1]

  16. TCP in High-speed Networks Framework • Constraint 1 : Rate of increase for a flow at a higher layer should be lower than for a flow at a lower layer, for all layers K1 > K2 ≥ 2 [Diagram: layers K−1 through K+2 with minimum windows WK−1 through WK+2 and step sizes dK−1 through dK+1; number of layers = K when WK ≤ W < WK+1]

  17. TCP in High-speed Networks Framework • Constraint 2 : After a loss, the recovery time for a larger flow should be more than for a smaller flow, for all layers K1 > K2 ≥ 2 [Diagram: window vs. time for two flows; Flow 1 recovers with slope K1 over time T1, Flow 2 with slope K2 over time T2]

  18. TCP in High-speed Networks: Overview of LTCP with Fixed β Design Choice • Increase behavior : • Additive increase with additive factor = layer number: W = W + K/W per ACK • Decrease behavior : • Multiplicative decrease with fixed β • At most one layer reduction after a decrease
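The increase/decrease rules on this slide can be sketched as follows (a simplified model; the threshold table and layer bookkeeping are illustrative assumptions, not the paper's exact parameterization):

```python
# LTCP with fixed beta: at layer K the window grows by K packets per
# RTT (K/W per ACK); a loss multiplies the window by (1 - beta) with
# beta = 0.15, and drops at most one layer.
BETA = 0.15

class Ltcp:
    def __init__(self, cwnd, layer, thresholds):
        self.w = cwnd
        self.k = layer
        self.wt = thresholds     # wt[K] = min window for layer K (assumed values)

    def on_ack(self):
        self.w += self.k / self.w                     # K packets per RTT
        if self.k + 1 < len(self.wt) and self.w >= self.wt[self.k + 1]:
            self.k += 1                               # add a layer

    def on_loss(self):
        self.w *= (1 - BETA)                          # fixed multiplicative decrease
        if self.k > 1 and self.w < self.wt[self.k]:
            self.k -= 1                               # at most one layer reduction

# Illustrative thresholds: a loss at layer 3 cuts the window by 15%,
# which here stays above wt[3], so the layer is retained.
c = Ltcp(cwnd=120.0, layer=3, thresholds=[0, 0, 50, 100, 200])
c.on_loss()
assert c.k == 3 and abs(c.w - 102.0) < 1e-9
```

Emulating K virtual flows this way gives faster growth at large windows while degenerating toward standard TCP at small ones.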

  19. TCP in High-speed Networks: Overview of LTCP with Fixed β Choice of Parameters • Based on analysis to satisfy framework convergence constraints • Constraint on window thresholds • Constraint on decrease factor

  20. TCP in High-speed Networks: Overview of LTCP with Fixed β Choice of Parameters (cont.) • We choose • For window increase • For window decrease: β = 0.15 (corresponds to K = 19) for a flow with 100 ms RTT to fully utilize a 1 Gbps link using 1500-byte packets • Can we remove this constraint to make LTCP more scalable?

  21. TCP in High-speed Networks: LTCP with Variable β Design Choice • Increase behavior (same as before) : • Additive increase with additive factor = layer number: W = W + K/W per ACK • Decrease behavior : • Multiplicative decrease with variable β • At most one layer reduction after a decrease

  22. TCP in High-speed Networks: LTCP with Variable β Choice of Parameters • Based on analysis to satisfy framework convergence constraints • Constraint on window thresholds • Constraint on decrease factor

  23. TCP in High-speed Networks: LTCP with Variable β Choice of Parameters (cont.) • We choose • For window increase (to obtain WK similar to before) • For window decrease, a parameter of 1/3 (to obtain β = 0.5 when K = 1) for a flow with 100 ms RTT to fully utilize a 1 Gbps link using 1500-byte packets

  24. TCP in High-speed Networks Other Analyses • Time to claim bandwidth • Same for both designs • For TCP, T(1) + (W - WT) RTTs (Assuming slow start ends when window = WT) • Packet recovery time is different for different designs • Other analyses presented during prelims
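For concreteness, the TCP time-to-claim-bandwidth expression on this slide, T(1) + (W − WT) RTTs, can be evaluated for the 1 Gbps / 100 ms scenario used throughout; the slow-start exit window WT here is an assumed example value:

```python
# Time for standard TCP to reach the full window W: a slow-start phase
# that doubles from 1 to WT, then (W - WT) RTTs of linear growth at
# one packet per RTT.
import math

rtt, pkt_bits = 0.100, 1500 * 8
W = 1e9 * rtt / pkt_bits                     # ~8333 packets for 1 Gbps, 100 ms RTT
WT = 100                                     # assumed slow-start exit window
slow_start_rtts = math.ceil(math.log2(WT))   # T(1): doublings from 1 to WT
total_rtts = slow_start_rtts + (W - WT)
print(f"~{total_rtts * rtt / 60:.0f} minutes to reach full rate")
```

The linear term dominates: slow start contributes only a handful of RTTs, so the total is on the order of a quarter-hour regardless of the exact WT.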

  25. TCP in High-speed Networks Improvement compared to TCP For flow with 100 ms RTT to fully utilize 1Gbps link using 1500 byte packets

  26. TCP in High-speed Networks • Summary of Results • Significant improvement in performance compared to TCP • Good fairness, convergence properties • RTT unfairness similar to TCP • Compared to other high-speed proposals, performance is similar or better

  27. TCP in High-speed Networks • Publications • Peer-Reviewed Journal Sumitha Bhandarkar, Saurabh Jain and A. L. Narasimha Reddy, "LTCP: Improving the Performance of TCP in Highspeed Networks", ACM SIGCOMM Computer Communication Review, Volume 36, Issue 1, January 2006. • Peer-Reviewed Workshop Sumitha Bhandarkar, Saurabh Jain and A. L. Narasimha Reddy, "Improving TCP Performance in High Bandwidth High RTT Links Using Layered Congestion Control", in the proceedings of PFLDNet 2005 Workshop, February 2005.

  28. Where We are ... • Robustness to non-congestion events • TCP-DCR • Efficiency in high-speed networks • LTCP with Fixed β • LTCP with Variable β • Proactive congestion avoidance • PERT • Interactions with each other

  29. Proactive Congestion Avoidance Motivation • TCP behavior : • Additive increase until packet loss is observed • Reduce cwnd by half after a packet loss • Problem : • Bottleneck link buffers fill up • Result : • Self induced packet losses • Wasted resources for retransmission of lost packets

  30. Proactive Congestion Avoidance Background • Well understood problem • Explicit congestion notification by router • RED, REM, AVQ, BLUE etc. with ECN • Explicit rate control by router • XCP, VCP, EMKC, JetMax etc. • End-host based solutions • CARD, TRI-S, DUAL, VEGAS, CIM etc.

  31. Proactive Congestion Avoidance Background (cont.) • Router based solutions • Easier to determine the onset of congestion • Difficult to deploy • End-host based Solutions • Easier to deploy • Difficult to determine the onset of congestion • We propose an end-host based solution that emulates router-based solution

  32. Proactive Congestion Avoidance Background (cont.) End-host based prediction • Monitor throughput • Before link is full, throughput increases linearly with load • After link is full, throughput is constant at link capacity • Monitor Delay • Before link is full, delay is low • After link is full, delay increases *Source: [CARD]
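The delay signal described on this slide can be sketched as follows, assuming queueing delay is estimated as the gap between a smoothed RTT and the minimum observed RTT (the EWMA weight of 0.99 matches the value PERT uses later in the talk; the class itself is illustrative):

```python
# End-host congestion prediction from delay: once the bottleneck
# buffer starts filling, RTT samples rise above the propagation delay
# even though throughput has flattened at link capacity.
class DelayMonitor:
    def __init__(self):
        self.min_rtt = float("inf")   # proxy for propagation delay
        self.srtt = None              # EWMA-smoothed RTT

    def sample(self, rtt):
        self.min_rtt = min(self.min_rtt, rtt)
        self.srtt = rtt if self.srtt is None else 0.99 * self.srtt + 0.01 * rtt

    def queueing_delay(self):
        return self.srtt - self.min_rtt

m = DelayMonitor()
for r in [0.100] * 50:        # empty queue: RTT = propagation delay
    m.sample(r)
assert m.queueing_delay() < 0.001
for r in [0.110] * 500:       # queue building: +10 ms of queueing delay
    m.sample(r)
assert m.queueing_delay() > 0.005
```

The heavy smoothing makes the estimator slow but robust, which matters because individual RTT samples are noisy.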

  33. Proactive Congestion Avoidance State Transition Diagram [Diagram: State A (Low Congestion), State B (High Congestion), State C (Loss), with numbered transitions 1–6 between them]

  34. Proactive Congestion Avoidance • Measurement based studies claim… • End-host congestion avoidance not possible • Low correlation between RTT increase and loss • But loss measured at flow level • Absence of loss does NOT indicate absence of network congestion

  35. Proactive Congestion Avoidance Correlation between RTT and Loss

  36. Proactive Congestion Avoidance Correlation between RTT and Loss (Cont.)

  37. Proactive Congestion Avoidance Prediction Efficiency, False Positives & Negatives

  38. Proactive Congestion Avoidance • Measurement based studies claim… • End-host congestion avoidance not efficient • Responding to uncertain signal can cause more harm than good • Assume response is multiplicative decrease with factor 0.5 • Alternate response can be designed to provide robustness to uncertainties

  39. Proactive Congestion Avoidance • Designing the Response • False positives cannot be entirely eliminated • Response should be chosen such that impact of false positives can be reduced • When to respond ? • How to respond ? • How much response ?

  40. Proactive Congestion Avoidance State Transition Diagram with Response [Diagram: State A (Low Congestion), State B1 (Probabilistic Response), State B (High Congestion), State C (Loss), with numbered transitions 1–8 between them]

  41. Proactive Congestion Avoidance Designing the Probabilistic Response

  42. Proactive Congestion Avoidance • Designing the Probabilistic Response • False positives decrease as queue length increases • Make probabilistic response a function of queue length • At lower queue length, probability of response is low • Probability of response increases as queue length increases • Conceptually, similar to RED/ECN • Emulate the RED probability curve • Use smoothed RTT for tracking queue length • Possible to emulate other AQM mechanisms also

  43. Proactive Congestion Avoidance Probabilistic Response Curve for PERT

  44. Proactive Congestion Avoidance • Probabilistic Early Response TCP (PERT) • Determine the appropriate prediction signal • improve the reliability of the prediction • Determine the appropriate response function • uncertainties in prediction are unavoidable • offset this uncertainty by making the response probabilistic • different AQM schemes can be emulated in the response - we choose RED/ECN

  45. Proactive Congestion Avoidance • Prediction signal used in PERT • RTT sample collected for every packet • Timestamp option used with high clock resolution • EWMA smoothing (weight 0.99 for history) to eliminate noise • High prediction efficiency • Low false positives • Low false negatives

  46. Proactive Congestion Avoidance • Response function used in PERT • Probabilistic - emulates RED/ECN • Fixed values used for parameters • minthresh_ = 5ms • maxthresh_ = 10ms • maxP_ = 0.05 • Adaptive values possible (similar to adaptive RED) • At most once response per RTT
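With the fixed parameters above, PERT's RED-style response probability can be sketched as follows; the talk only specifies the linear region between the thresholds, so the gentle-RED-style ramp beyond maxthresh_ is an assumption here:

```python
# RED-emulating response probability as a function of smoothed
# queueing delay, with the slide's parameters: 0 below 5 ms, rising
# linearly to maxP = 0.05 at 10 ms.
MIN_THRESH, MAX_THRESH, MAX_P = 0.005, 0.010, 0.05

def response_probability(queueing_delay):
    if queueing_delay < MIN_THRESH:
        return 0.0
    if queueing_delay < MAX_THRESH:
        return MAX_P * (queueing_delay - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
    # Beyond maxthresh_, ramp from maxP toward 1 (gentle-RED style; an
    # assumption, not stated on the slide).
    return min(1.0, MAX_P + (1 - MAX_P) * (queueing_delay - MAX_THRESH) / MAX_THRESH)

assert response_probability(0.004) == 0.0
assert abs(response_probability(0.0075) - 0.025) < 1e-9
```

Because the response is probabilistic and at most once per RTT, occasional false positives trigger only occasional small reductions rather than a systematic rate penalty.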

  47. Proactive Congestion Avoidance • Response function used in PERT (cont.) • Multiplicative decrease with fixed window reduction factor f = 0.35 • Buffer size B related to window reduction factor f as B ≤ (f / (1 − f)) × BDP for the queue to drain after a reduction • Generally buffer size is set to one BDP • If buffers do not exceed 0.5 BDP, f = 0.35 is sufficient • If packet loss occurs, response similar to TCP • Window reduction by 0.5, fast retransmit/recovery.
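The sufficiency claim can be checked numerically, assuming the standard buffer-sizing relation: an early reduction by factor f can fully drain a queue of size B only if f ≥ B / (B + BDP), i.e. B ≤ f/(1−f) × BDP:

```python
# Largest buffer (in units of BDP) that a window reduction of f = 0.35
# can drain, under the relation B <= f/(1-f) * BDP.
f = 0.35
max_buffer_in_bdp = f / (1 - f)
print(f"f = {f} drains buffers up to {max_buffer_in_bdp:.2f} BDP")
assert max_buffer_in_bdp > 0.5   # so buffers of up to 0.5 BDP are covered
```

Since 0.35/0.65 ≈ 0.54 BDP exceeds the 0.5 BDP bound on the slide, f = 0.35 indeed suffices for the stated buffer sizes.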

  48. Proactive Congestion Avoidance • Experimental Evaluation • Extensive evaluation based on ns-2 simulations • Single bottleneck link • Bandwidth varied in the range [1Mbps, 1Gbps] • RTT varied in the range [10ms, 1s] • # long term flows varied in the range [1,1000] • # web sessions varied in the range [10,1000] • Multiple Bottleneck Link • Flows with different RTTs • Impact of sudden changes in traffic load

  49. PERT : Experimental Evaluation Varying the Bottleneck Link Bandwidth

  50. PERT : Experimental Evaluation Varying the RTT
