
Backbone Performance Comparison



Presentation Transcript


  1. Backbone Performance Comparison Jeff Boote (Internet2), Warren Matthews (Georgia Tech), John Moore (MCNC)

  2. Overview • We (in NC) were asked to compare the relative performance of various IP service providers • Interest from both local CIOs and Internet2 • We decided to measure relative end-to-end latency and jitter • Recruited a few other ITECs (Ohio and Texas) and GA Tech to help • Jeff Boote got interested since we were using owamp

  3. Method • Set up an OWAMP machine at each site, with multiple virtual interfaces per NIC • Use host routes to force traffic to a specific destination via a specific provider • Create a mesh of these running continuously and dump results to a database • Add traceroute information to verify paths and look for routing changes
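The host-route trick in the method above can be sketched as follows. This is a hypothetical illustration, not the actual ITEC configuration: the site addresses, provider set, and gateway IPs are invented placeholders, and the generated `ip route` commands would be run on the local measurement host.

```python
# Hypothetical sketch of per-provider host routes: probes sent to a given
# remote address should leave via that provider's next hop.
# All addresses below are invented placeholders, not the real ITEC layout.

SITES = {
    "NC": {"abilene": "10.1.1.1", "qwest": "10.2.1.1", "level3": "10.3.1.1"},
    "OH": {"abilene": "10.1.2.1", "qwest": "10.2.2.1"},
    "GT": {"abilene": "10.1.3.1", "qwest": "10.2.3.1", "cogent": "10.4.3.1"},
}

# Per-provider next-hop gateways at the local (NC) site -- placeholders.
GATEWAYS = {
    "abilene": "192.0.2.1",
    "qwest": "192.0.2.2",
    "level3": "192.0.2.3",
    "cogent": "192.0.2.2",
}

def host_route_commands(local="NC"):
    """Emit one /32 host route per (provider, remote address) pair."""
    cmds = []
    for site, addrs in SITES.items():
        if site == local:
            continue  # no route to ourselves
        for provider, dst in addrs.items():
            gw = GATEWAYS.get(provider)
            if gw is None:
                continue  # provider not reachable from this site
            cmds.append(f"ip route add {dst}/32 via {gw}")
    return cmds

for cmd in host_route_commands():
    print(cmd)
```

With host routes like these in place, ordinary OWAMP traffic to each remote address is pinned onto the intended provider, which is what makes the per-provider comparison possible.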

  4. Path types • Path will vary depending on whether source and destination sites share provider, or not. • Doesn’t take “natural” or policy routing into consideration, but useful for comparative purposes.

  5. As we progressed… • New paths became available… • VPLS (Layer 2 VLAN) between three of the ITECs (NC, OH and TX) • Described in sidebar • NLR PacketNet between NC and GT • Not all that interesting, since both sites attach to the same NLR router in Atlanta • Added NLR to a new interface on the same NIC, and added VPLS to a separate NIC on the same machines • TAMU site set up and running, but no good data available yet • Had to remove host routes due to other routing changes going on locally

  6. Available Data from OWAMP • Latency • Latency variation (jitter: 95th percentile minus minimum) • TTL (number of hops) • Duplicates • Loss • Reordering (not likely at 1 pps)
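The jitter figure in the list above (95th percentile minus minimum delay) can be computed from a batch of one-way delay samples. This is a minimal sketch with made-up sample values, using the nearest-rank percentile; OWAMP's own tooling may compute the percentile slightly differently.

```python
import math

def jitter_95_min(delays_ms):
    """Delay variation as the slide defines it: 95th percentile minus minimum."""
    s = sorted(delays_ms)
    # Nearest-rank 95th percentile: smallest value covering 95% of samples.
    idx = max(0, math.ceil(0.95 * len(s)) - 1)
    return s[idx] - s[0]

# Made-up one-way delays (ms): mostly ~10.2 ms with one 15 ms outlier.
samples = [10.2, 10.4, 10.1, 10.3, 15.0, 10.2, 10.5, 10.3, 10.2, 10.4]
print(round(jitter_95_min(samples), 2))  # -> 4.9
```

Using the 95th percentile rather than the maximum keeps a single extreme outlier from dominating the reported variation on longer windows.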

  7. OWAMP “sender” configuration • Each host has multiple virtual addresses configured (one per “network”) • Continuous stream of packets (1 pps, exponentially distributed) per network address “pair” • Traffic is directed onto a specific network based on destination address [Diagram note: only the last router before the “backbone” is shown]
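The “1 pps, exponentially distributed” schedule above can be sketched as below: exponential inter-packet gaps give a Poisson process, which avoids synchronizing probes with periodic network events. The rate, count, and seed here are arbitrary illustration values, not OWAMP defaults.

```python
import random

def send_schedule(mean_pps=1.0, n=5, seed=42):
    """Probe departure times (seconds) with exponentially distributed
    inter-packet gaps at an average rate of mean_pps packets/second."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(mean_pps)  # mean gap = 1/mean_pps seconds
        times.append(t)
    return times

print([round(t, 2) for t in send_schedule()])
```

One such schedule runs continuously for every (source network, destination network) address pair in the mesh.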

  8. LATAB (traceroute when source is routed through Abilene) [Diagram: NC, OH, GT and TAMU sites connected via Abilene nodes NYCM, IPLS, CHIN, KSCY, WASH, HSTN and ATLA, with Abilene measurement hosts nms4-ipls, nms4-hstn and nms4-wash]

  9. LATQW (traceroute when source is routed through Qwest) [Diagram: NC, OH and GT connected via Qwest nodes CHI-EDGE, CHI-CORE, DCA-CORE, DCA-EDGE, ATLA-CORE and ATLA-EDGE]

  10. LATL3 (traceroute when source is routed through Level3) Asymmetric routing: northbound via Charlotte, southbound via Raleigh. [Diagram: NC, OH and GT connected via Level3 nodes in Raleigh, Charlotte, Washington and Atlanta (ATLAL3), with a Qwest Washington hop and several unknown hops]

  11. LATO3 (traceroute when source is routed through another provider: GT via Cogent) [Diagram: GT, NC and OH; the path traverses a Qwest core node at ATLA]

  12. LATNLR [Diagram: NC and GT connected via the NLR node in Atlanta (ATLA)]

  13. LATVPLS [Diagram: VPLS connecting NC, OH and TAMU]

  14. Preliminary Results • Small amount of data collected so far • Working on how best to visualize combination of pieces (latency, loss, routing changes, etc.) • Looking for “stability” metric (but stability is application dependent) • More analysis needed

  15. Loss overview

  16. NC to GT • NLR is lower latency. This is expected, as GT and NC are connected to the same router; the NC connection is backhauled via NLR L2 service. • Qwest and Abilene go via Washington. The long way… • For the Level3 path, there is an unidentified hop just before the GT campus. Rate limiter? • Expected the NLR and Level3 paths to be closer. [Chart series: Qwest, NLR, Level3, Abilene]

  17. GT to NC • NLR and Level3 paths similar • Cogent hands off to Qwest to get to NC [Chart series: Qwest, NLR, Cogent, Level3, Abilene]

  18. Latency Range: NC to GT [Chart annotations: Level3 via Raleigh; Level3 via Charlotte; input to GT is always longer?]

  19. NC to OH • Marginally quicker across Qwest (via Washington and Chicago). • Abilene goes via New York, Chicago and Indianapolis. [Chart series: Qwest, Level3, Abilene]

  20. OH to NC • OH doesn’t use Level3, so no return path to NC via Level3 [Chart series: Abilene, Qwest]

  21. Latency Range: NC to OH [Chart annotation: no return path for L3_NC_OH]

  22. GT to OH • Abilene is more direct, via Indianapolis • Qwest goes via Chicago • Cogent and Level3 hand off to Qwest [Chart series: Cogent, Abilene, Qwest, Level3]

  23. OH to GT • OH doesn’t use Level3, so no return path to GT via Level3 [Chart series: Qwest, Abilene]

  24. Latency Range: GT to OH [Chart annotation: no return path for L3_GT_OH]

  25. Summary • From a latency perspective, topology is the overriding parameter • So far we’re not seeing huge latency deltas between R&E and commodity paths between the same two endpoints • Loss on commodity networks is pretty good • They’ve improved in the last 10 years • Looking for a quality metric (stability?) to combine the things we can measure
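As one illustration of the combined quality metric the summary asks for, the sketch below folds three OWAMP-measurable quantities into a single score. The linear form and the weights are invented for this example; the talk explicitly leaves the real metric open and notes that stability is application dependent.

```python
# Illustrative only: one candidate "stability" score combining quantities
# OWAMP can measure. The weights and linear form are invented for the sketch.

def stability_score(jitter_ms, loss_pct, route_changes, w=(1.0, 1.0, 5.0)):
    """Lower is 'more stable': a weighted sum of delay variation (ms),
    packet loss (percent), and observed route changes in the window."""
    wj, wl, wr = w
    return wj * jitter_ms + wl * loss_pct + wr * route_changes

# Two hypothetical paths: low-jitter with some loss vs. jittery,
# loss-free, but with one route flap.
print(stability_score(0.5, 1, 0))  # -> 1.5
print(stability_score(4.0, 0, 1))  # -> 9.0
```

A real metric would need application-specific weights (a VoIP user cares far more about jitter and loss than a bulk-transfer user does), which is exactly why the slides stop short of picking one.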

  26. VPLS Sidebar • Virtual Private LAN Service - multipoint Ethernet service over IP/MPLS backbone • Created between ITECs as overlay on Abilene • PE routers sit in GigaPoP address space, interconnected via interdomain LSPs • Abilene T640s are P routers

  27. VPLS Overview • Full Mesh of LSPs • BGP for inter-PE communication • Ethernet encapsulation at PE-CE

  28. View from Ohio [Diagram: Layer-2 view from the OH site toward NC and TX. No routers!]

  29. View from NC [Diagram: the NC PE with the local NC MAC address plus the OH and TX MAC addresses]
