
Quality of Service in California K-20 Networking


Presentation Transcript


  1. Quality of Service in California K-20 Networking Dave Reese A Gathering of State Networks April 30, 2001

  2. Quietly on the Sidelines • What traffic is most important? • Video (of course) • Voice (is this really coming?) • Research, Business, Admissions transactions? (depends on who decides) • Can’t just create one queue; everyone will demand special treatment • How many queues are needed (practically)? • How to prioritize multiple queues? • Will there really be a national QoS (and what will it cost)?

  3. What are we waiting for? • Bandwidth guarantees - like ATM CBR • Stable router software (does this exist?) • Reservations, limits/controls on usage • A method to decide who gets to use them • Who enforces/patrols usage? • New planning/forecasting tools for network design

  4. What California is doing now • Building shared Statewide “Intranet” to serve research, education, and business applications for K-20 • Keeping intra-state bandwidth ahead of demand • Using ATM to guarantee quality for video conferences/distance education • Bringing critical applications to the Intranet and off of the Internet

  5. How is this working? • Only buying time - we want to move from ATM to IP • Bottleneck is between campus and backbone, backbone and Internet • Pilot project for “eContent” management to push multimedia servers closer to the user

  6. Quality of Service

  7. OneNet Network Infrastructure

  8. OneNet Member Utilization (as of October 2000) • Over 1,600 connections • 100% of Colleges, Universities and Career Technology Centers • 100% of Court Systems • 80% of Public Schools (K-12) • 1,000+ additional sites

  9. Member Circuits (November 2000) • Higher Education: 90 • K-12: 489 • Career Technology Centers: 65 • Army National Guard: 52 • Courts: 47 • Hospitals (Gov’t/Private): 43 • Law Enforcement: 18 • Libraries: 107 • Municipalities: 28 • Non-Profits: 28 • State Agencies: 505

  10. Some Services DEMANDING We Address QoS • Video Conferencing: H.323, MPEG • Video Streaming • P2P: Napster, Gnutella • All the rest… FTP, etc.

  11. Technology Timeline

  12. It All Adds Up Quickly • Examples • We now have over 800 H.323 endpoints registered as distance learning classrooms • Every higher education institution is wiring its dorms or building new dorms to be wired. • Traffic-management expertise in many of our members’ networks is somewhat limited, so new, popular applications can quickly congest links.

  13. Identifying The Causes • SNMP • Falls short in classification • Sniffers • Deployment is costly/difficult in the wide area • NetFlow • Can be used anywhere you can export flow information and have the time to wait for results
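Since NetFlow carries most of the weight here, a minimal collector sketch may help show what "export flow information" means in practice. It assumes routers export NetFlow v5 records to a host on UDP port 9995; the field layout follows the published v5 format, but the port number is just an example.

```python
"""Minimal NetFlow v5 collector sketch (illustrative, not a supported tool)."""

import socket
import struct

HEADER_FMT = "!HHIIIIBBH"            # version, count, uptime, secs, nsecs, seq, engine, sampling
RECORD_FMT = "!IIIHHIIIIHHBBBBHHBBH"  # one 48-byte v5 flow record
HEADER_LEN = struct.calcsize(HEADER_FMT)   # 24 bytes
RECORD_LEN = struct.calcsize(RECORD_FMT)   # 48 bytes

def parse_v5(datagram):
    """Yield (srcaddr, dstaddr, srcport, dstport, protocol, octets) per flow record."""
    version, count = struct.unpack_from("!HH", datagram, 0)
    if version != 5:
        return
    for i in range(count):
        off = HEADER_LEN + i * RECORD_LEN
        (srcaddr, dstaddr, _nexthop, _inif, _outif, _pkts, octets,
         _first, _last, srcport, dstport, _pad1, _flags, proto, _tos,
         _sas, _das, _smask, _dmask, _pad2) = struct.unpack_from(RECORD_FMT, datagram, off)
        yield (socket.inet_ntoa(struct.pack("!I", srcaddr)),
               socket.inet_ntoa(struct.pack("!I", dstaddr)),
               srcport, dstport, proto, octets)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9995))       # must match the router's flow-export destination
    while True:
        data, _addr = sock.recvfrom(8192)
        for flow in parse_v5(data):
            print(flow)
```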

  14. FlowScan • Identify applications • Identify networks • Identify protocols • http://net.doit.wisc.edu/~plonka/FlowScan/
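FlowScan itself is a Perl package; the sketch below only illustrates the kind of per-application accounting it produces, using a small, illustrative table of well-known ports rather than FlowScan's own classification rules.

```python
"""FlowScan-style per-application byte accounting (port table is illustrative)."""

from collections import defaultdict

APP_PORTS = {
    1719: "H.323 RAS", 1720: "H.323 call setup",
    6699: "Napster", 6346: "Gnutella",
    20: "FTP data", 21: "FTP control", 80: "HTTP",
}

def classify(flow):
    """flow = (srcaddr, dstaddr, srcport, dstport, protocol, octets)."""
    _src, _dst, sport, dport, _proto, _octets = flow
    return APP_PORTS.get(dport) or APP_PORTS.get(sport) or "other"

def report(flows):
    """Sum bytes per application category, largest first."""
    totals = defaultdict(int)
    for flow in flows:
        totals[classify(flow)] += flow[5]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hand-made example flows:
flows = [
    ("10.1.1.5", "10.2.2.9", 40000, 1720, 6, 120000),     # H.323 signaling
    ("10.1.1.7", "10.3.3.3", 40001, 6699, 6, 8_000_000),  # Napster transfer
]
print(report(flows))
```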

  15. Recent Specific Issue • Congestion at the T1 level has been handled very well until recently with just WFQ. • Per-packet load-balanced T1s at some hub sites are becoming congested • Distance learning is our primary concern at these locations
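As a rough illustration of why plain WFQ coped well at T1 speeds, here is a toy fair-queuing scheduler: each flow's packets get virtual finish times proportional to size divided by weight, so small interactive packets are not stuck behind bulk transfers. This is a sketch of the general algorithm, not Cisco's WFQ implementation.

```python
"""Toy weighted fair queuing: smallest virtual finish time is sent next."""

import heapq

class WFQ:
    def __init__(self):
        self.virtual_time = 0.0
        self.last_finish = {}    # per-flow virtual finish time
        self.heap = []           # (finish_time, seq, flow_id, size)
        self.seq = 0

    def enqueue(self, flow_id, size, weight=1.0):
        start = max(self.virtual_time, self.last_finish.get(flow_id, 0.0))
        finish = start + size / weight
        self.last_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow_id, size))
        self.seq += 1

    def dequeue(self):
        finish, _seq, flow_id, size = heapq.heappop(self.heap)
        self.virtual_time = finish       # simplified virtual-time update
        return flow_id, size

# A bulk FTP flow and small interactive packets share the link fairly:
q = WFQ()
for _ in range(3):
    q.enqueue("ftp", 1500)
    q.enqueue("telnet", 64)
while q.heap:
    print(q.dequeue())                   # telnet packets drain first
```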

  16. Current Solution • Congested T1s moved to a PQ-WFQ scenario via ‘ip rtp priority’ • Not ideal; RTP traffic of any sort can starve out other activities, but fortunately that is not an issue at the troubled locations • Load-balanced T1s moved to a per-destination PQ-WFQ scenario • Adding queuing on top of per-packet balancing introduced more out-of-sequence issues than many endpoints could handle • Max bandwidth available to a flow is now constrained to a single T1 • MOVE to greater bandwidth! • WRED used on DS3s and greater
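For the WRED piece, a sketch of the underlying RED drop decision: between a minimum and a maximum threshold the drop probability ramps linearly up to a cap, and above the maximum every arriving packet is dropped. Real WRED also tracks an exponentially weighted average queue depth and keeps separate thresholds per precedence class; the numbers below are illustrative, not IOS defaults.

```python
"""Simplified RED/WRED drop decision (thresholds are made up)."""

import random

def wred_drop(avg_queue_depth, min_th, max_th, max_p):
    """Return True if the arriving packet should be dropped."""
    if avg_queue_depth < min_th:
        return False                     # no congestion: always enqueue
    if avg_queue_depth >= max_th:
        return True                      # severe congestion: always drop
    # Linear ramp between the thresholds.
    p = max_p * (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < p

# Lower-precedence traffic can be given a lower min threshold so it is
# discarded earlier than the distance-learning class as the DS3 fills:
for depth in (10, 25, 35, 45):
    print(depth,
          "best-effort:", wred_drop(depth, min_th=20, max_th=40, max_p=0.10),
          "priority:",    wred_drop(depth, min_th=30, max_th=40, max_p=0.02))
```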

  17. Current Work • In the lab… • CAR • Start policing some applications to provide more assurance • NBAR • Anything we can do to help automate identification of what is going on and make classification simpler • DiffServ, RSVP • Watching the QBone and other I2 initiatives • MPLS • Traffic engineering, not QoS, but integral to many of the decisions we have to make
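CAR-style policing is essentially a token bucket: traffic within the committed rate conforms, and bursts beyond the bucket depth exceed and can be dropped or remarked. A sketch of that mechanism follows; the rate, burst size, and "Napster" example are made up for illustration.

```python
"""Token-bucket policer sketch, the mechanism behind CAR-style rate limiting."""

import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = float(burst_bytes)   # bucket depth
        self.tokens = float(burst_bytes)
        self.stamp = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                   # conform action, e.g. transmit
        return False                      # exceed action, e.g. drop or remark

# Police a hypothetical recreational application to 128 kbit/s with an 8 KB burst:
policer = TokenBucket(rate_bps=128_000, burst_bytes=8_000)
for size in (1500, 1500, 1500, 1500, 1500, 1500):
    print(size, "conform" if policer.conforms(size) else "exceed")
```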

  18. Issues • Quality of Service is Managed Unfairness • Many decisions to be made about what is rate limited, what is dropped, what gets prioritized • How do we verify our trust in pre-marked traffic?

  19. QUALITY OF SERVICE & TRAFFIC MANAGEMENT Ben Colley http://www.more.net ben@more.net (800) 509-6673

  20. The Problem • Like elsewhere, Napster started it all. • Expanded from a traffic limiting need to a traffic prioritization goal. • Excess recreational traffic was impacting production services • Growth in bandwidth requirements still exceeds available funding

  21. The Project • What solutions exist: • At the backbone level? • At the customer edge • via the router? • via other devices? • Goals • QoS - Ensure delivery of mission critical traffic • TM - Provide tools enabling local traffic management policies

  22. QoS Direction • Implement “Differentiated Services” in core and edge routers • Mark “state level” applications as top priority in the edge router • H.323 traffic to/from the MOREnet MCU farm • Library Automation traffic to/from the server farm • Other future applications, e.g. VoIP • MOREnet will not mark or remark any other traffic • Campuses can mark other traffic as desired at the source device, or elsewhere in their network • Implement for all H.323 sites this summer
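Marking at the source device, as campuses are allowed to do here, can be as simple as setting the DSCP on a socket so the edge and core queuing can act on it. The sketch below is illustrative only: the choice of EF (DSCP 46) for H.323 media is an assumption, not MOREnet's published code-point plan, and the destination address and port are placeholders; socket.IP_TOS support also depends on the platform.

```python
"""Set a DSCP value on outgoing UDP traffic from a source host (illustrative)."""

import socket

EF_DSCP = 46                 # Expedited Forwarding (assumed code point for H.323 media)
TOS_BYTE = EF_DSCP << 2      # DSCP occupies the upper six bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Datagrams from this socket now carry DSCP 46 toward the edge router,
# which can match the marking and queue the traffic accordingly.
sock.sendto(b"rtp-payload", ("192.0.2.10", 5004))   # placeholder address and port
```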

  23. QoS Alphabet Soup • At the network core (current best thinking) • Modified Deficit Round Robin (MDRR) • But still to determine the queue mapping and forwarding strategy! • At the customer premises (current best thinking) • CAR and WFQ • CAR to ensure marking of “state level” application traffic • WFQ to forward appropriately • Technical meeting in May • Establish a common DiffServ Code Point (DSCP) strategy and a queue mapping and forwarding plan
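Underneath MDRR is deficit round robin: each queue earns a byte quantum per round and sends packets while its deficit counter covers them, with MDRR adding a priority queue serviced between the others. A toy sketch of the round-robin part, with made-up quanta:

```python
"""Deficit round robin sketch (the scheduling idea beneath MDRR)."""

from collections import deque

def drr(queues, quanta, rounds):
    """queues: name -> deque of packet sizes; quanta: name -> bytes credited per round."""
    deficits = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficits[name] = 0        # empty queues don't bank credit
                continue
            deficits[name] += quanta[name]
            while q and q[0] <= deficits[name]:
                size = q.popleft()
                deficits[name] -= size
                sent.append((name, size))
    return sent

queues = {"video": deque([1300, 1300, 1300]), "bulk": deque([1500, 1500, 1500])}
quanta = {"video": 3000, "bulk": 1500}    # video gets roughly twice the bandwidth share
print(drr(queues, quanta, rounds=3))
```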

  24. Traffic Management • Mark packets for QoS (and unmark!) • Policy administration by: • physical network interface • server or workstation network address • application signature • Multiple network interfaces make it possible to: • isolate critical servers; load-balance servers, caches and/or intrusion detection devices • aggregate like kinds of traffic • (Future) API available for Time of Day policies

  25. Traffic Management Research • Several products reviewed • Many good, focused products available • Recommendation for the TopLayer AppSwitch • Multiple interfaces support broader range of network design and architecture opportunities • Excellent H.323 “flow” management • Commitment to enhancing application recognition • Commitment to expanding usability

  26. TM Implementation Strategy • Focus on sites that will experience congestion soon • Acquire & install in 1-2 lead sites and learn • Deploy to remaining sites throughout the year • Vendor training and support • MOREnet-supported product • Campus determines local policy and manages the platform • MOREnet is only interested in “state level” services

  27. Deployment Plan • Implement QoS prior to the beginning of summer school for lead sites. • Test through summer to be ready for fall. • Implement 2nd round of QoS in August prior to fall semester. • Traffic Management deployment will move as needed on a customer-by-customer basis starting this summer.

  28. Lessons Learned • Still an emerging technology -- it’s not cookie-cutter yet • And here we go with a statewide deployment (again) • There will be bumps along the way, like: • Who gets to decide whose packets are important? • Build a “Community of Interest” • How one organization prioritizes traffic can have an impact on another

  29. Lessons Learned (continued) • We believe future funding increases will be linked to ‘good stewardship’ of current funding • Ask us in six months what the real lessons were!
