
Split-TCP: State of the Union Address


Presentation Transcript


  1. Split-TCP: State of the Union Address Dan Berger 03/03/03

  2. Agenda • Proxied or “Split” TCP Conceptual Overview • Previous Investigation • Current Work • Results and Analysis • Conclusion

  3. Conceptual Overview • The Problem(s) • TCP can’t distinguish link failure from congestion. • Results in lowered throughput • Effect is magnified on longer (hop-count) connections

  4. Concepts (Cont.) • The Proposal • Designate nodes along the path to be packet-level proxies. • These proxies buffer segments until they’ve been received (and acknowledged) by the next proxy (or destination) • If the packet isn’t ack’d in “reasonable” time – they resend.
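As a concrete illustration of the buffer-and-resend behavior on this slide, here is a minimal Python sketch. It is purely illustrative: the class and method names (`ProxyBuffer`, `forward`, `on_ack`, `check_timeouts`) and the 0.5 s timeout are assumptions, not part of the actual Split-TCP implementation.

```python
import time

class ProxyBuffer:
    """Illustrative sketch of per-proxy buffering (hypothetical names)."""

    def __init__(self, rto=0.5):
        self.rto = rto          # "reasonable" retransmit timeout (assumed value)
        self.unacked = {}       # seq -> (segment, time last sent)

    def forward(self, seq, segment, send_fn):
        # Buffer the segment and pass it toward the next proxy / destination.
        self.unacked[seq] = (segment, time.monotonic())
        send_fn(segment)

    def on_ack(self, seq):
        # The next proxy (or the destination) acknowledged it; drop our copy.
        self.unacked.pop(seq, None)

    def check_timeouts(self, send_fn):
        # Resend anything not acknowledged within the timeout.
        now = time.monotonic()
        for seq, (segment, sent_at) in list(self.unacked.items()):
            if now - sent_at > self.rto:
                send_fn(segment)
                self.unacked[seq] = (segment, now)
```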

  5. Details, Details… • Proxy selection must be distributed and dynamic, to accommodate mobility in the network. • The proxies should be as stateless as possible – specifically, keeping per-flow state is bad.

  6. Previous Investigation • In a simple string topology “Split-TCP is able to provide a better overall throughput than TCP, up to about 15%” • In a complex topology with mobility, “the total throughput improves with the use of proxies by about 5% to about 30%”

  7. Current Work • The initial goal was to implement the protocol in the Linux kernel. • This work was stymied by a lack of detailed understanding of the desired behavior of the protocol. • Work done last quarter found issues with the current simulation model which suggested it needed to be revisited.

  8. Forward, march! • We decided to turn our attention to building a more accurate and usable simulation model in ns. • The specific issues in the existing model we wanted to address were: • Lack of end-to-end window management • Lack of dynamic proxy selection

  9. Proxy-To-Proxy Window Mgmt. • The initial simulation model literally decomposed a long TCP connection into multiple shorter connections. • This had undesired (and unforeseen) consequences.

  10. Static Proxy Selection • Additionally, this decomposition was done using global routing knowledge. • To accommodate mobility, the proxy selection code was re-run periodically. • This meant that new TCP connections were established, and the old ones allowed to “drain.”

  11. The New Model • The new simulation model consists of three components: • SplitTCPSource • SplitTCPWedge • SplitTCPSink • The Source and Sink are simply specializations of TCPSource and TCPSink, respectively.
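The relationship between the three components can be summarized with a skeleton like the one below. This is only an illustrative Python sketch (the actual model is ns code); the stub base classes stand in for ns's TCPSource and TCPSink.

```python
class TCPSource:   # stand-in for ns's TCP sender agent
    pass

class TCPSink:     # stand-in for ns's TCP receiver agent
    pass

class SplitTCPSource(TCPSource):
    """End host that opens the end-to-end session."""

class SplitTCPSink(TCPSink):
    """End host that terminates the session and returns end-to-end ACKs."""

class SplitTCPWedge:
    """Per-node code run by every Split-TCP enabled node; it inspects
    forwarded packets and may turn the node into a proxy (next slide)."""
```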

  12. SplitTCPWedge • In order to perform dynamic proxy selection, a node must be able to inspect each packet it receives. • The decision to proxy can then be based on whatever criteria seem appropriate. • All “Split-TCP enabled” nodes run the wedge code.
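A minimal sketch of the wedge's packet-inspection hook, assuming a pluggable policy function; the names (`recv`, `proxied_at`, `proxy_policy`) are hypothetical and do not correspond to the actual ns code.

```python
class SplitTCPWedge:
    """Every packet received by a Split-TCP enabled node passes through
    recv() before normal forwarding (illustrative sketch)."""

    def __init__(self, node, proxy_policy):
        self.node = node
        self.proxy_policy = proxy_policy   # pluggable decision criteria

    def recv(self, pkt, forward_fn):
        if self.proxy_policy(self.node, pkt):
            # Hypothetical bookkeeping: this node now proxies the packet,
            # i.e. buffers it and generates an inter-proxy ACK upstream.
            pkt.proxied_at = self.node
        forward_fn(pkt)
```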

  13. An Anonymous NS Node

  14. Better, Stronger, Faster SplitTCPWedge

  15. Model Summary • This new simulation model uses a TCP session established between the source and target, and dynamic proxy selection is performed along the path.

  16. End-To-End Window Mgmt. • The TCP session performs window management “as usual” – but control can be tweaked both manually and by the Wedge. • For instance, end-to-end ACKs can be skipped, the congestion window can be managed based on inter-proxy ACKs, etc.
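One way to picture these window-control hooks is the rough Python sketch below. The idea of driving the congestion window from inter-proxy ACKs follows the slide, but the class, its constants, and the growth rules (borrowed from standard slow start / congestion avoidance) are assumptions, not the model's actual code.

```python
class EndToEndWindow:
    """Illustrative congestion-window control driven by inter-proxy ACKs."""

    def __init__(self, cwnd=1.0, ssthresh=64.0):
        self.cwnd = cwnd
        self.ssthresh = ssthresh

    def on_interproxy_ack(self):
        # Grow the window when the nearest proxy acknowledges a segment.
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0              # slow-start-like growth
        else:
            self.cwnd += 1.0 / self.cwnd  # congestion-avoidance-like growth

    def on_end_to_end_ack(self):
        # End-to-end ACKs can be "skipped" for window purposes, as noted above.
        pass
```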

  17. Proxy Selection • Each node decides independently (currently based on hops since the last proxy and its current backlog) whether it should proxy a packet. • Proxying does not keep per-flow state. • Retransmissions are scheduled based on the current inter-proxy round-trip time estimate.
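The per-packet decision and the retransmission timer described here might look roughly like this; the threshold values and the RTO formula (borrowed from standard TCP's srtt + 4·rttvar rule) are illustrative assumptions.

```python
def should_proxy(hops_since_last_proxy, backlog_pkts,
                 hop_threshold=4, backlog_limit=50):
    """Become a proxy only if enough hops have passed since the previous
    proxy and the local backlog is manageable (thresholds are illustrative)."""
    return hops_since_last_proxy >= hop_threshold and backlog_pkts < backlog_limit


def interproxy_rto(srtt, rttvar, k=4.0):
    """Schedule inter-proxy retransmissions from the current inter-proxy
    RTT estimate, in the spirit of the standard TCP RTO calculation."""
    return srtt + k * rttvar
```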

  18. Results • Current simulations show an approximately 10% degradation in throughput vs. standard TCP. • With parameter tweaking, this degradation varies, but Split-TCP never equals standard TCP.

  19. Analysis • One identified cause of the dramatically different behavior of the old and new simulation models relates to out-of-order packet delivery. • In the old model, the packet stream was “reassembled” at each proxy – and bytes were only forwarded once they arrived in order. • Per-flow state at each proxy. • In the new model, this is not the case.
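For contrast, the old model's per-proxy reassembly behavior (exactly the per-flow state the new model avoids) can be sketched as follows; `ReassemblyBuffer` and its methods are hypothetical names used only for illustration.

```python
class ReassemblyBuffer:
    """Old-model behavior: hold out-of-order segments and forward them
    only once they are in sequence (per-flow state at each proxy)."""

    def __init__(self, next_seq=0):
        self.next_seq = next_seq
        self.pending = {}            # sequence number -> segment

    def receive(self, seq, segment, forward_fn):
        self.pending[seq] = segment
        # Flush the longest in-order prefix toward the next hop.
        while self.next_seq in self.pending:
            forward_fn(self.pending.pop(self.next_seq))
            self.next_seq += 1
```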

  20. Parameterization • By tuning various parameters – including the initial congestion window, window growth factors, etc. – the throughput of Split TCP can be improved. • There are over 150 tuning parameters in the “stock” NS TCP code. Even if they were all just binary parameters, that’s still 2^150 possible configurations!
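Since an exhaustive sweep is out of the question at that scale, one practical (and purely illustrative) approach is to sample configurations at random; the parameter names and value ranges below are assumptions, not the actual ns knobs.

```python
import random

# Illustrative parameter space only; the real ns TCP agent exposes far more knobs.
PARAM_SPACE = {
    "initial_cwnd":  [1, 2, 4],
    "window_growth": [0.5, 1.0, 2.0],
    "skip_e2e_acks": [True, False],
}

def random_configs(n, seed=0):
    """Sample n configurations instead of enumerating all of them."""
    rng = random.Random(seed)
    for _ in range(n):
        yield {name: rng.choice(values) for name, values in PARAM_SPACE.items()}
```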

  21. Conclusion • We’re trying to determine rational next steps: • Do we continue the search for the “magic” set of parameters? • Do we examine the behavior of the new model in scenarios with mobility? • Few answers, many more questions.
