TCP Nice: A Mechanism for Background Transfers

  1. TCP Nice: A Mechanism for Background Transfers • Arun Venkataramani, Ravi Kokku, Mike Dahlin • Laboratory of Advanced Systems Research (LASR), Department of Computer Sciences, University of Texas at Austin

  2. What are background transfers? • Data that humans are not waiting for • Non-deadline-critical • Unlimited demand • Examples • Prefetched traffic on the Web • File system backup • Large-scale data distribution services • Background software updates • Media file sharing

  3. Desired Properties • Utilization of spare network capacity • No interference with regular transfers • Self-interference • applications hurt their own performance • Cross-interference • applications hurt other applications’ performance

  4. Goal: Self-tuning protocol • Many systems use magic numbers • Prefetch threshold [DuChamp99, Mogul96, …] • Rate limiting [Tivoli01, Crovella98, …] • Off-peak hours [Dykes01, many backup systems, …] • Limitations of manual tuning • Difficult to balance concerns • Proper balance varies over time • Burdens application developers • Complicates application design • Increases risk of deployment • Reduces benefits of deployment

  5. TCP Nice • Goal: abstraction of free infinite bandwidth • Applications say what they want • OS manages resources and scheduling • Self-tuning transport layer • Reduces risk of interference with FG traffic • Significant utilization of spare capacity by BG • Simplifies application design

  6. Outline • Motivation • How Nice works • Evaluation • Case studies • Conclusions

  7. Why change TCP? • TCP does network resource management • Need flow prioritization • Alternative: router prioritization • + more responsive, simple one-bit priority • − hard to deploy • Question: Can end-to-end congestion control achieve non-interference and utilization?

  8. Traditional TCP Reno • Adjusts congestion window (cwnd) • Competes for “fair share” of BW (= cwnd/RTT) • Packet losses signal congestion • Additive increase: no losses → increase cwnd by one packet per RTT • Multiplicative decrease: one packet loss (triple duplicate ack) → halve cwnd; multi-packet loss → set cwnd = 1 • Problem: the signal comes after the damage is done
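
To make the increase/decrease rules concrete, here is a minimal C sketch of the Reno window updates described above (function names and the simplified loss handling are illustrative; real TCP stacks also implement slow start and fast recovery):

    #include <stdio.h>

    /* Reno congestion window, in packets (simplified sketch). */
    static double cwnd = 1.0;

    void on_rtt_without_loss(void) { cwnd += 1.0; }  /* additive increase: +1 pkt/RTT */

    void on_triple_dupack(void) {                    /* single-packet loss signal */
        cwnd /= 2.0;                                 /* multiplicative decrease */
        if (cwnd < 1.0) cwnd = 1.0;                  /* Reno never goes below 1 */
    }

    void on_timeout(void) { cwnd = 1.0; }            /* multi-packet loss */

    int main(void) {
        for (int i = 0; i < 5; i++) on_rtt_without_loss();
        printf("cwnd after 5 loss-free RTTs: %.1f\n", cwnd); /* 6.0 */
        on_triple_dupack();
        printf("cwnd after a loss: %.1f\n", cwnd);           /* 3.0 */
        return 0;
    }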

  9. TCP Nice • Proactively detects congestion • Uses increasing RTT as the congestion signal • Congestion → longer queues → higher RTT • Aggressive responsiveness to congestion • Only modifies sender-side congestion control • Receiver and network unchanged • TCP friendly

  10. TCP Vegas • Early congestion detection using RTTs • Additive decrease on early congestion • Restrict number of packets maintained by each flow at bottleneck router • Small queues, yet good utilization

  11. TCP Nice • Basic algorithm:
    1. Early detection → a threshold on RTT increase signals queue buildup
    2. Multiplicative decrease on early congestion
    3. Maintain small queues, like Vegas
    4. Allow cwnd < 1.0
  • per-ack operation:
    if (curRTT > minRTT + threshold * (maxRTT − minRTT)) numCong++;
  • per-round operation:
    if (numCong > f·W) W ← W/2
    else { … Vegas congestion control }
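
A compilable C sketch of the per-ack and per-round operations above; THRESHOLD, FRAC (the fraction f), and the RTT bookkeeping in main() are assumptions filled in for illustration, not the paper's exact constants:

    #include <stdio.h>

    #define THRESHOLD 0.2  /* fraction of (maxRTT - minRTT) that signals congestion */
    #define FRAC      0.5  /* f: back off if more than f*W acks saw congestion */

    static double cwnd = 4.0;  /* W, in packets; may drop below 1.0 */
    static int numCong = 0;

    /* per-ack: does this RTT sample indicate queue buildup? */
    void nice_on_ack(double curRTT, double minRTT, double maxRTT) {
        if (curRTT > minRTT + THRESHOLD * (maxRTT - minRTT))
            numCong++;
    }

    /* per-round (once per RTT): multiplicative decrease on early congestion */
    void nice_on_round(void) {
        if (numCong > FRAC * cwnd) {
            cwnd /= 2.0;  /* back off before any packet is lost; cwnd < 1.0 allowed */
        } else {
            /* ... otherwise fall through to Vegas congestion control (slide 10) ... */
        }
        numCong = 0;
    }

    int main(void) {
        /* minRTT = 100 ms, maxRTT = 200 ms; samples above 120 ms count as congested */
        double samples[] = { 0.110, 0.150, 0.180, 0.190 };
        for (int i = 0; i < 4; i++) nice_on_ack(samples[i], 0.100, 0.200);
        nice_on_round();
        printf("cwnd after a congested round: %.2f\n", cwnd); /* halved to 2.00 */
        return 0;
    }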

  12. Nice: the works • [Figure: bottleneck queue of capacity B packets draining at rate m, so minRTT = t and maxRTT = t + B/m; increase/decrease rules marked for Reno and Nice] • Non-interference → getting out of the way in time • Utilization → maintaining a small queue

  13. Evaluation of Nice • Analysis • Simulation micro-benchmarks • WAN measurements • Case studies

  14. Theoretical Analysis • Prove small bound on interference • Main result – interference decreases exponentially with bottleneck queue capacity, independent of the number of Nice flows • Guided algorithmic design • All 4 aspects of protocol essential • Unrealistic model • Fluid approximation • Synchronous packet drop
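
In symbols, the main result has the shape shown later on slide 29; this is a reconstruction from the slides (not the paper's exact theorem statement), with B the bottleneck queue capacity, m the bottleneck service rate from slide 12, and t the Nice threshold:

    % interference bound, as drawn on slide 29
    I \approx O\!\left(e^{-\frac{B}{m}(1-t)}\right)

Since the exponent grows with B and no term counts the background flows, interference falls off exponentially in queue capacity, independent of the number of Nice flows.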

  15. TCP Nice Evaluation • Micro-benchmarks (ns simulation) • test interference under more realistic assumptions • Near optimal interference • 10-100X less than standard TCP • determine if Nice can get useful throughput • 50-80% of spare capacity • test under stressful network conditions • test under restricted workloads, topologies

  16. TCP Nice Evaluation • Measure prototype on WAN • + test interference in the real world • + determine if throughput is available to Nice • − limited ability to control the system • Results • Nice causes minimal interference • standard TCP – significant interference • Significant spare BW throughout the day

  17. NS Simulations • Background traffic: • Long-lived flows • Nice, rate limiting, Reno, Vegas, Vegas-0 • Ideal: Router prioritization • Foreground traffic: Squid proxy trace • Reno, Vegas, RED • On/Off UDP • Single bottleneck topology • Parameters • spare capacity, number of Nice flows, threshold • Metric • average document transfer latency

  18. Network Conditions • [Figure: document latency (sec, log scale) vs. spare capacity for Reno, Vegas, V0, Nice, and router prioritization] • Nice causes low interference even when there isn’t much spare capacity.

  19. Scalability • [Figure: document latency (sec, log scale) vs. number of BG flows for Vegas, Reno, V0, Nice, and router prioritization] • W < 1 allows Nice to scale to any number of background flows

  20. Utilization • [Figure: BG throughput (KB) vs. number of BG flows for Vegas, Reno, router prioritization, V0, and Nice] • Nice utilizes 50-80% of spare capacity w/o stealing any bandwidth from FG

  21. Case study 1: Data Distribution • Tivoli Data Exchange system • Complexity • 1000s of clients • different kinds of networks: phone line, cable modem, T1 • Problem • tuning the data sending rate to reduce interference and maximize utilization • Current solution • manual setting for each set of clients

  22. Case study 1: Data Distribution • [Figure: Austin to London; ping latency (ms) vs. completion time (s) under manual rate tuning, with the Nice operating point marked]

  23. Case study 2: Prefetch-Nice • Novel non-intrusive prefetching system [USITS’03] • Eliminates network interference using TCP Nice • Eliminates server interference using a simple monitor • Readily deployable • no modifications to HTTP or the network • JavaScript based • Self-tuning architecture, end-to-end interference elimination

  24. Case study 2: Prefetch-Nice • [Figure: response time vs. aggressiveness for a hand-tuned threshold, an aggressive threshold, and the self-tuning protocol] • A self-tuning architecture gives optimal performance for any setting

  25. Prefetching Case Study

  26. Prefetching Case Study

  27. Conclusions • An end-to-end strategy can implement simple priorities • There is enough usable spare bandwidth out there, and it can be Nicely harnessed • Nice makes application design easy

  28. TCP Nice Evaluation Key Results • Analytical • interference bounded by a factor that falls exponentially with bottleneck queue size, independent of the no. of Nice flows • all 3 aspects of the algorithm fundamentally important • Microbenchmarks (ns simulation) • near-optimal interference • interference up to 10x less than standard TCP • TCP Nice reaps a substantial fraction of spare BW • Measure prototype on WAN • TCP Nice: minimal interference • standard TCP: significant interference • significant spare BW available throughout the day

  29. Freedom from thresholds • [Figure: document latency (sec) vs. threshold t ∈ (0, 1), annotated with I ≈ O(e^(−(B/m)(1−t)))] • Dependence on the threshold is weak: small threshold ⇒ low interference

  30. Manual tuning • Prefetching – attractive, but restraint necessary • prefetch bandwidth limit [DuChamp99, Mogul96] • threshold probability of access [Venk’01] • pace packets by guessing the user’s idle time [Cro’98] • prefetching during off-peak hours [Dykes’01] • Data distribution / mirroring • tune data sending rates on a per-user basis • Windows XP Background Intelligent Transfer Service (BITS) • rate-throttling approach • different rates → different priority levels

  31. Goal: Self-tuning protocol • Hand-tuning may not work • Difficult to balance concerns • Proper balance varies over time • Burdens application developers • Complicates application design • Increases risk of deployment • Reduces benefits of deployment

  32. Nice: the works • [Figure: bottleneck queue of capacity B packets draining at rate m, so minRTT = t and maxRTT = t + B/m; increase/decrease rules marked for Reno, Vegas, and Nice] • Nested congestion triggers: Nice < Vegas < Reno

  33. TCP Vegas
    Expected throughput: E = W / minRTT
    Actual throughput:   A = W / observedRTT
    Diff = E − A
    if (Diff < α / minRTT)      W ← W + 1
    else if (Diff > β / minRTT) W ← W − 1
  • Additive decrease when packets queue
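
A C sketch of the Vegas update above; ALPHA and BETA are Vegas' lower/upper bounds on packets queued at the bottleneck, and the specific values are illustrative:

    /* Vegas congestion avoidance step (sketch). W in packets, RTTs in seconds. */
    #define ALPHA 1.0
    #define BETA  3.0

    double vegas_update(double W, double minRTT, double observedRTT) {
        double expected = W / minRTT;        /* E: rate if the queue were empty */
        double actual   = W / observedRTT;   /* A: rate actually achieved */
        double diff     = expected - actual; /* grows as packets queue up */

        if (diff < ALPHA / minRTT)
            W += 1.0;                        /* too little queued: additive increase */
        else if (diff > BETA / minRTT)
            W -= 1.0;                        /* queue building: additive decrease */
        return W;
    }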

  34. Nice features • Small α, β are not good enough; not even zero! • Need to detect absolute queue length • Aggressive backoff is necessary • Protocol must scale to a large number of flows • Solution • Detection → ‘threshold’ queue buildup • Avoidance → multiplicative backoff • Scalability → window is allowed to drop below 1 (sketched below)
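
One plausible realization of a window below one packet, consistent with the scalability bullet above (a sketch; the prototype's exact mechanism may differ): send a single packet only once every ceil(1/W) RTTs.

    #include <math.h>

    /* Packets this flow may send in the current RTT, given window W.
     * With W = 0.25 the flow sends one packet every 4 RTTs, so an
     * arbitrary number of Nice flows can share even a sliver of
     * spare capacity. */
    int packets_this_rtt(double W, long rtt_number) {
        if (W >= 1.0) return (int)W;              /* normal case */
        long interval = (long)ceil(1.0 / W);      /* RTTs between packets */
        return (rtt_number % interval == 0) ? 1 : 0;
    }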

  35. Internet measurements • Long Nice and Reno flows • Interference metric: flow completion time

  36. Internet measurements • Long Nice and Reno flows • Interference metric: flow completion time • Extensive measurements over different kinds of networks: phone line, cable modem, transcontinental link (Austin to London), university network (Abilene: UT Austin to Delaware)

  37. Internet measurements • Long Nice and Reno flows • Interference metric: flow completion time Department of Computer Sciences, UT Austin

  38. Long Nice and Reno flows Interference metric: flow completion time Internet measurements Department of Computer Sciences, UT Austin

  39. Internet measurements • [Figure: Austin to London; response time (sec) vs. time of day, May 10-15, for Reno and Nice] • Useful spare capacity throughout the day

  40. Internet measurements • [Figure: cable modem; response time (sec) vs. time of day, May 13-16, for Reno and Nice] • Useful spare capacity throughout the day

  41. NS Simulations - RED

  43. Analytic Evaluation • All three aspects of protocol essential • Main result – interference decreases exponentially with bottleneck queue capacity, irrespective of the number of Nice flows • fluid approximation model • long-lived Nice and Reno flows • queue capacity >> number of foreground flows
