
Maximizing End-to-End Network Performance



Presentation Transcript


  1. Maximizing End-to-End Network Performance Thomas Hacker, University of Michigan, October 26, 2001

  2. Introduction • Applications experience network performance from an end-customer perspective • Providing end-to-end performance has two aspects • Bandwidth Reservation • Performance Tuning • We have been working to improve actual end-to-end throughput using Performance Tuning • This work allows applications to fully exploit reserved bandwidth

  3. Improve Network Performance • Poor network performance arises from a subtle interaction between many different components at each layer of the OSI network stack • Physical • Data Link • Network • Transport • Application

  4. TCP Bandwidth Limits – Mathis Equation • Based on characteristics from the physical layer up to the transport layer • Hard Limits • TCP Bandwidth, Max Packet Loss
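The equation itself is not reproduced in the transcript. The standard steady-state limit of Mathis et al., which the talk appears to rely on, has the form

\[
  \mathrm{BW} \;\le\; \frac{\mathrm{MSS}}{\mathrm{RTT}} \cdot \frac{C}{\sqrt{p}},
  \qquad C \approx \sqrt{3/2},
\]

where MSS is the maximum segment size, RTT the round-trip time, p the packet loss rate, and C a constant of order one whose exact value depends on the loss model and acknowledgement strategy.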

  5. Packet Loss and MSS • If the minimum link bandwidth between two hosts is OC-12 (622 Mbps), and the average round trip time is 20 msec, the maximum packet loss rate that still allows 66% of the link speed (411 Mbps) is approximately 0.00018%, which represents only about 2 packets lost out of every 1,000,000 packets • If MSS is increased from 1500 bytes to 9000 bytes (Jumbo frames), the limit on TCP bandwidth rises by a factor of 6
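As a check on those numbers (assuming MSS = 1500 bytes and C = 1, neither of which is stated on the slide), inverting the limit above for a target rate of 411 Mbps gives

\[
  p \;\le\; \left(\frac{\mathrm{MSS}\cdot C}{\mathrm{RTT}\cdot \mathrm{BW}}\right)^{2}
  = \left(\frac{12{,}000\ \text{bits}}{0.02\ \text{s}\times 411\times 10^{6}\ \text{b/s}}\right)^{2}
  \approx 2\times 10^{-6},
\]

i.e. on the order of two lost packets per million, consistent with the quoted 0.00018% figure.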

  6. The Results

  7. Web100 Collaboration

  8. Parallel TCP Connections…a clue SOURCE: Harimath Sivakumar, Stuart Bailey, Robert L. Grossman. “PSockets: The Case for Application-level Network Striping for Data Intensive Applications using High Speed Wide Area Networks,” SC2000: High Performance Networking and Computing Conference, Dallas, TX, November 2000

  9. Why Does This Work? • The assumption is that the network gives best-effort throughput to each connection • But end-to-end performance is still poor, even after tuning the host, network, and application • Parallel sockets are being used in GridFTP, Netscape, Gnutella, Atlas, Storage Resource Broker, etc.
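For illustration only, here is a minimal Python sketch of application-level striping over parallel TCP connections in the spirit of PSockets/GridFTP; it is not the API of any of those tools, and the function name, stream count, and round-robin chunking are all hypothetical choices:

```python
import socket

def send_striped(data: bytes, host: str, port: int,
                 n_streams: int = 4, chunk: int = 64 * 1024) -> None:
    """Stripe `data` round-robin across n_streams parallel TCP connections.

    Illustrative sketch: a real receiver must tag and reassemble chunks,
    and real tools negotiate the stream count and buffer sizes.
    """
    socks = [socket.create_connection((host, port)) for _ in range(n_streams)]
    try:
        for i in range(0, len(data), chunk):
            socks[(i // chunk) % n_streams].sendall(data[i:i + chunk])
    finally:
        for s in socks:
            s.close()
```

Each connection runs its own congestion-control loop, so a single random loss halves only one stream's window rather than the whole transfer's rate, which is the effect the following slides quantify.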

  10. Packet Loss • Bolot* found that random losses are not always due to congestion • local system configuration (txqueuelen in Linux) • bad (noisy) cables • Packet losses occur in bursts • TCP throttles its transmission rate on ALL packet losses, regardless of the root cause • Selective Acknowledgement (SACK) helps, but only so much * Jean-Chrysostome Bolot, “Characterizing End-to-End Packet Delay and Loss in the Internet,” Journal of High Speed Networks, 2(3):305–323, 1993
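As a small illustration of the local-configuration point, the Linux transmit queue length mentioned above can be inspected from sysfs; the interface name "eth0" is just a placeholder:

```python
from pathlib import Path

def tx_queue_len(iface: str = "eth0") -> int:
    # Linux-only: reads /sys/class/net/<iface>/tx_queue_len. A transmit queue
    # that is too small for a fast path is one local, non-congestion source
    # of drops of the kind listed above.
    return int(Path(f"/sys/class/net/{iface}/tx_queue_len").read_text())

print(tx_queue_len("eth0"))
```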

  11. Expression for Parallel Socket Bandwidth
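The expression itself is not shown in the transcript. Applying the single-stream limit above independently to each of n connections and summing, which is consistent with the worked example on the next slide, gives a bound of the form

\[
  \mathrm{BW}_{\mathrm{agg}} \;\le\; \frac{\mathrm{MSS}}{\mathrm{RTT}}
  \sum_{i=1}^{n} \frac{c_i}{\sqrt{p_i}}
  \;=\; \frac{n\,\mathrm{MSS}\,C}{\mathrm{RTT}\,\sqrt{p}}
  \quad\text{when every connection sees the same loss rate } p,
\]

with one constant c_i and one loss rate p_i per connection.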

  12. Number of Connections    Σ 1/√pᵢ          Aggregate Bandwidth
      1                        100              50 Mb/sec
      2                        100+100          100 Mb/sec
      3                        100+100+100      150 Mb/sec
      4                        4 (100)          200 Mb/sec
      5                        5 (100)          250 Mb/sec
      Example: MSS = 4418, RTT = 70 msec, p = 1/10000 for all connections
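The aggregate figures in the table can be reproduced from the per-connection Mathis limit; the sketch below assumes c_i = 1 for every connection:

```python
# Reproduce the table above: per-connection Mathis limit times the stream count.
MSS_BITS = 4418 * 8     # bits per segment
RTT = 0.070             # seconds
P = 1 / 10000           # loss rate on every connection

per_conn = MSS_BITS / (RTT * P ** 0.5)            # ~50 Mb/sec per connection
for n in range(1, 6):
    print(n, f"{n * per_conn / 1e6:.0f} Mb/sec")  # ~n x 50 Mb/sec, as in the table
```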

  13. Measurements • To validate the theoretical model, 220 four-minute transmissions were performed from U-M to NASA Ames in San Jose, CA • Bottleneck was OC-12, MTU = 4418 • 7 runs with MSS = 4366, 1 to 20 sockets • 2 runs with MSS = 2948, 1 to 20 sockets • 2 runs with MSS = 1448, 1 to 20 sockets • iperf was used for the transfers; Web100 was used to collect TCP observations on the sender side

  14. Actual: MSS 1448 Bytes

  15. Actual: MSS 2948 Bytes

  16. Actual: MSS 4366 Bytes

  17. Observations • The knee in the curve is the point at which aggregate throughput reaches the delay*bandwidth product of the pipe between sender and receiver • On a long link (e.g. trans-Atlantic), the pipe can hold a lot of data, since the delay (RTT) is so large • Parallel sockets should only help when there are no router (congestion) drops; a single TCP stream will already try to fill the pipe
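Using the OC-12 bottleneck and the 70 msec RTT from the earlier example, the delay*bandwidth product works out to

\[
  \mathrm{BDP} = B \times \mathrm{RTT} \approx 622\ \text{Mb/s} \times 0.07\ \text{s}
  \approx 43.5\ \text{Mb} \approx 5.4\ \text{MB},
\]

so the knee appears once the aggregate congestion windows cover roughly that much in-flight data.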

  18. Sunnyvale–Denver Abilene Link: Initial Tests and Yearly Statistics

  19. Abilene Weather Map

  20. Other stuff • Internet2 measurements on Cleveland Abilene node • Interesting results from ns-2 simulation – is it just a simulation artifact? • Working on a loss model for Abilene that differentiates between router drops and random drops

  21. Conclusion • High-performance network throughput is possible with a combination of host, network, and application tuning, along with parallel TCP connections • Parallel TCP sockets mitigate the negative effects of packet loss in the random-loss (non-congestion) regime • The effects of parallel TCP sockets are similar to using a larger MSS • Using parallel sockets is aggressive, but no more so than using a large MSS
