Data Center Design Using Avaya Fabric Connect Carl DeVincentis
Evolution of the Data Center
Once, Campus-class was good enough. Traditional networks are designed for north/south traffic – ToR switches interconnected by the Core or Aggregation…
• 20-30 microseconds for every hop
• Modern applications have an average of 8 transactions
[Diagram: Racks 1-4 interconnected through the core]
Evolution of the Data Center – Traffic Patterns were Traditional
Traditionally, the North-South to East-West ratio has been 80:20.
What this meant:
• Application traffic traverses multiple switch hops – Access / Core / ToR / Core / Access
• Uplinks were more important than inter-rack capacity
[Diagram: Top-of-Rack Switches above Racked Servers]
Your Data Center Network Is Heading for Traffic Chaos
"By 2014, network planners should expect more than 80% of traffic in the Data Center's local area network to be between Servers" – Gartner, 2011
How to Address the New Traffic Needs – Distributed ToR
Distributed ToR is specifically designed for east/west traffic – ToR switches directly interconnected…
• DToR costs just 1 microsecond per switch hop (see the latency sketch below)
[Diagram: Web VM / Web VM / App VM / D/B VM communicating across the DToR fabric]
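To put the two per-hop figures side by side, here is a back-of-the-envelope Python sketch. The 20-30 µs and 1 µs per-hop values and the 8-transaction average come from these slides; the 5-hop traditional path (Access / Core / ToR / Core / Access) and the midpoint per-hop value are assumptions made for illustration.

```python
# Illustrative latency arithmetic only. Per-hop figures and the
# 8-transaction average are quoted from the slides; the 5-hop
# traditional path and the 25 us midpoint are assumptions.

TRADITIONAL_HOP_US = 25.0   # midpoint of the quoted 20-30 us range
DTOR_HOP_US = 1.0           # direct ToR-to-ToR fabric hop
TRANSACTIONS = 8            # average transactions per application

def app_latency_us(hops_per_transaction: int, per_hop_us: float) -> float:
    """Cumulative switch latency for one application request."""
    return hops_per_transaction * per_hop_us * TRANSACTIONS

traditional = app_latency_us(5, TRADITIONAL_HOP_US)  # 1000 us
dtor = app_latency_us(1, DTOR_HOP_US)                # 8 us

print(f"Traditional multi-hop: {traditional:.0f} us per application")
print(f"Distributed ToR:       {dtor:.0f} us per application")
```

Even under these rough assumptions, the traditional path accumulates over a hundred times more switch latency per application than a single DToR hop.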
Distributed ToR – Eliminate the Bottlenecked Uplinks
The future: East-West traffic will dominate Data Center traffic* – 'the new 80%'.
Avaya delivers the industry's only low-latency Distributed ToR solution. Alternatives introduce latency and congestion, require additional equipment, and consume more ports.
Now this means:
• Server-to-Server and Rack-to-Rack traffic dramatically increases
• Inter-Rack capacity is now crucial
• Traditional designs introduce significant latency and degrade application performance
* Gartner: 'Your Data Center is heading for traffic chaos' – April 2011
[Diagram: 80% East-West / 20% North-South split across Top-of-Rack Switches and Racked Servers]
Scalability with Flexibility
Fabric Connect in the Data Center offers:
• Architectural Flexibility
  • Small, Medium & Large data centers
  • Centralized and distributed
  • Layer 2 overlays with Layer 3 VRFs
  • Interoperates with non-Avaya platforms
• Scale
  • Increasing L2 scale while decreasing L2 complexity and improving mobility
  • Distributed ToR utilizes multiple 40G uplinks
  • Eliminates the "choke points" of uplinks
  • Handles East-West traffic natively
Distributed Top-of-Rack – Two Operational Modes for True Deployment Flexibility
• Stack-mode DToR – Structured Interconnect: 8 switches (256 10GbE ports & 5.12 Tbps)
• Fabric-mode DToR – Flexible Interconnect: up to 500 switches (16,000 10GbE ports & 280 Tbps)
From small to very large – DToR can handle it! (The port-count check below shows where these numbers come from.)
[Diagram: rack rows A-D interconnected in both modes]
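A quick sanity-check of the quoted port counts, sketched in Python. The 32-ports-per-switch figure (24 fixed SFP+ plus an 8-port MDA) is an assumption taken from the front-ports slide later in this deck, and the full-duplex doubling in the stack-mode line is an interpretation, not a vendor formula.

```python
# Sanity-check of the DToR scaling figures quoted on this slide.
# Assumption: each VSP 7000 contributes 32 10GbE ports
# (24 fixed SFP+ plus an 8-port MDA -- see the front-ports slide).

PORTS_PER_SWITCH = 32
PORT_GBPS = 10

def total_10g_ports(switches: int) -> int:
    return switches * PORTS_PER_SWITCH

stack_ports = total_10g_ports(8)      # 256 ports, as quoted
fabric_ports = total_10g_ports(500)   # 16,000 ports, as quoted

# Stack-mode throughput: 256 ports x 10 Gbps x 2 (full duplex)
stack_tbps = stack_ports * PORT_GBPS * 2 / 1000   # 5.12 Tbps

print(stack_ports, fabric_ports, stack_tbps)
```

The port counts reproduce the slide's figures exactly; the fabric-mode 280 Tbps throughput is quoted from the slide and reflects the rear-port fabric rather than this front-port arithmetic.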
VM Evolution – The Need For Scale & Flexibility
Members of the same VM domain must remain in the same subnet.
• Physical servers: while application scale and server utilization are poor, network complexity is simple.
• Virtual servers: server virtualization brings scale and efficiency to applications, but VMs become essentially "jailed" and network flexibility is reduced. VM mobility is seriously limited.
What is the alternative…?
[Diagram: physical racks running App A and App B vs. virtualized racks running Apps A-H and Apps I-P]
VM Evolution – The Need For Scale & Flexibility
• Fabric Connect gives the VMs – and therefore the applications – a "get out of jail free" card
• No longer bound by physical and logical constraints, VMs can be moved anywhere
• This creates the flexibility to improve availability, simplify maintenance, and enhance application expansion
Jailbreak..! Put your VMs where you need them..!
[Diagram: VMs moving freely across rows A-C and racks 1-4]
Introducing the Virtual Services Platform 7000 – Overview & Highlights
Perfect for Today:
• Versatile support for 1 or 10 Gigabit Ethernet
• Distributed Top-of-Rack delivers the industry's fastest virtual backplane
• Fabric networking delivered directly to the Server
• Media Dependent Adaptor flexibility
• Lossless hardware & software architecture
• Front-to-back or back-to-front cooling
• Small form-factor & energy-efficient
Future-Ready for Tomorrow:
• Seamless integration of 40/100G
• Data Centre Bridging-ready to integrate Storage Convergence
Highlights:
• Wire-speed performance
• Delivering mass 1/10 Gigabit today
• Optimizes application performance
• Future-proofed for 40/100 Gigabit & Storage convergence (SDSN)
VSP 7000 – Avaya FI (Fabric Interconnect) Sidney Duffy
• Avaya FI (Fabric Interconnect) provides the new DNA for your Data Center Network, enabling unparalleled scalability and low-latency interconnect to address East-West traffic requirements
• FI Stacking supports 2-16 switches, offering up to 512 10GE ports with a 10.24 Tbps fabric
• FI Mesh supports 4-500 switches, offering up to 16,000 10GE ports with a 280 Tbps fabric
Avaya FI – the new DNA for your Data Center Network
VSP 7000 – Front Ports
• 24 ports of 1/10 Gigabit SFP+**
• MDA slot for flexible expansion: 8-port 10GBaseT MDA, 8-port SFP+ MDA, and 2-port 40Gbps MDA (1HCY14)*
• Comprehensive array of status LEDs
• Unit Master Select switch
• RJ-45 Console Port
• Out-of-Band 10/100/1000 Management Port
• USB Host Port
* Indicates future functionality
** Currently supports SFP/SFP+ & DAC (as per VSP 9000)
VSP 7000 – Rear Ports: Modes of Operation
• A significant challenge in many data centres is the growing amount of East-West traffic, which requires higher bandwidth between adjacent racks in the data centre.
• Each VSP 7000 features 4 Fabric Interconnect ports.
• Fabric Interconnect can be used in two mutually exclusive modes:
  • Fabric Interconnect Stacking, in which up to 8 units create a vToR (virtual Top-of-Rack), or 16 units form a dToR (distributed Top-of-Rack) delivering up to 10 Tbps using two clusters of 8 switches
  • Rear Port Mode, in which two choices are available: Raw, supporting SMLT/IST only over the rear ports, and SPB for Fabric Interconnect Mesh, supporting SPB (plus IST) over the rear ports
[Photo: Fabric Interconnect ports]
Scaling Concerns
• Fabric Connect Core: ERS 8800 / VSP 9000
• North-South / Core-ToR interconnects
• Distributed Top-of-Rack: VSP 7000
  • Stack-mode DToR – Structured Interconnect: 8 switches, 256 10GbE ports
  • Fabric-mode DToR – Flexible Interconnect: up to 200 switches, 6,400 10GbE ports
[Diagram: distributed data center with SDSN spanning core, interconnect, and ToR tiers]
Introducing Fabric Connect – Leveraging IEEE 802.1aq & IETF RFC 6329
Avaya's optimized, end-to-end virtualized Ethernet:
• Built using multiple product options, and leveraging any topology
• Configuration is simple – the fabric automatically becomes aware of all paths and the best paths
• Creates an end-to-end solution that is transparent to the physical layer
• Creates a layer of end-to-end virtual Ethernet – perfect for L2 connectivity
• Also creates a flexible Routing solution – mandatory for real-world L3 interconnectivity
• Layer 2 services are deployed using simple edge-only, one-touch provisioning (modeled in the sketch below)
• Traditional end-to-end interconnectivity is supported by flexible Routing services
• Services easily scale and extend – delivering any-to-any connectivity
• Geo-redundant routing services ensure optimum performance and resource utilization
• Additional services are deployed at will – a common ID links any and all end-points
• Multi-Tenant scenarios are natively supported, leveraging the same, simplified provisioning model
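To make the "edge-only, one-touch" model concrete, here is a minimal Python sketch (illustrative data model, not Avaya CLI): a service exists as a common I-SID, and provisioning touches only the edge switches where endpoints attach. The class, switch names, and I-SID value are made up for the example.

```python
# Minimal model of SPB's edge-only provisioning: a service is
# created by binding a local VLAN to an I-SID at each edge switch
# only -- the fabric core needs no per-service configuration.

from collections import defaultdict

class FabricConnect:
    def __init__(self):
        # i-sid -> set of (edge switch, vlan) attachment points
        self.services = defaultdict(set)

    def attach(self, isid: int, switch: str, vlan: int):
        """One-touch provisioning: a single edge-side binding."""
        self.services[isid].add((switch, vlan))

    def endpoints(self, isid: int):
        """Every attachment point the common I-SID links together."""
        return sorted(self.services[isid])

fabric = FabricConnect()
fabric.attach(20100, "tor-rack1", 100)   # tenant VLAN 100, rack 1
fabric.attach(20100, "tor-rack4", 100)   # same service, rack 4
print(fabric.endpoints(20100))           # any-to-any L2VSN membership
```

Adding a tenant to a new rack is one more `attach` call at that edge – nothing in the core changes, which is the point of the provisioning model described above.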
VSP 7000 & VENA Distributed ToR
• DToR delivers at least 20x more performance than competing solutions
• Stack-mode scales up to 8 switches & 5.12 Tbps per vToR
• Fabric-mode scales up to 500 switches & 280 Tbps
• Reduces multi-hop server-to-server latency
• Delivers maximum efficiency – all Server ports remain active
Fabric Interconnect Stacking
• Up/Down port pairs deliver 320 Gbps per pair, for a total of 640 Gbps per switch (see the stack-bandwidth arithmetic below)
• Recommended Fabric Interconnect stacking configurations for odd and even stacks
[Diagram: rear FI ports 33, 34-36, 37, and 38-40]
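The per-switch fabric figure composes directly into the stack totals quoted elsewhere in this deck; a short Python check using only the numbers from this slide:

```python
# Stack-bandwidth arithmetic from this slide: each switch contributes
# 2 Up/Down FI pairs x 320 Gbps = 640 Gbps of fabric capacity.

GBPS_PER_PAIR = 320
PAIRS_PER_SWITCH = 2

def stack_fabric_tbps(switches: int) -> float:
    return switches * PAIRS_PER_SWITCH * GBPS_PER_PAIR / 1000

print(stack_fabric_tbps(8))   # 5.12 Tbps -> one vToR stack
print(stack_fabric_tbps(16))  # 10.24 Tbps -> dToR (two clusters of 8)
```

The results match the 5.12 Tbps vToR and 10.24 Tbps FI-stacking figures quoted on the earlier slides.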
Fabric Interconnect Stacking (SMLT to Core)
• L2 Service and Inter-VSN routing, supporting VRRP and Backup-Master, for routing North-South and off-net traffic
• 100% application uptime during switch maintenance through VM mobility
Fabric Interconnect Stacking (SPBM to Core)
• Allows for SPB-attached switch clusters and LAG-attached servers
• 100% application uptime during switch maintenance through use of SMLT
VSP 7000 vToR (Virtual ToR) – Fabric Interconnect Stacking Reference
• One ToR per rack; 32 servers per rack with non-resilient server connections
• 32 x 10G ports per rack / 256 10GE ports per pod
• 5 Tbps East-West bandwidth
VSP 7000 dToR (Distributed ToR) – Fabric Interconnect Stacking Reference
• Two ToR clusters joined by an IST; 32 servers per rack, dual-attached and fully resilient
• 32 x 10G ports per rack / 512 10GE ports per pod
• 10 Tbps East-West bandwidth
Fabric Interconnect Stacking Benefits
• Allows fault-tolerant or load-sharing NIC teaming into the stack
• Up to 8 units create a vToR (virtual Top-of-Rack) delivering up to 5 Tbps, using two FI cables in parallel between switches, for a total of 256 10GE ports
• Up to 16 units in a dToR (distributed Top-of-Rack) delivering up to 10 Tbps using two clusters of 8 switches, for up to 512 10GE ports
• High bandwidth and low latency between servers (2.1µs – 5.3µs)
• Highly resilient stacking technology with scalable uplinks
• Flexibility to spread across multiple data cabinets (100s of servers)
• Ideal for Grid Computing / High-Performance Computing solutions
VSP 7000 Fabric Interconnect – Rear Port Speeds
• The 4 physical Fabric Interconnect ports provide different bandwidth capabilities (see the rear-ports feature for more details)
• Three different colours are used to represent the different capabilities; optimum performance is achieved when ports of the same throughput are connected to one another
[Diagram: rear FI ports 33, 34-36, 37, and 38-40]
Fabric Interconnect Mesh
Rear Port Mode Standard (Raw):
• SPB is not supported when the rear port mode is set to standard (raw) mode
• The IST must be made up of a minimum of two ports – i.e. use both blue rear ports, or use either the red or black rear ports, each of which has 3 ports
• LACP must be disabled prior to enabling IST or SMLT on the rear ports
• SMLT square and triangle topologies are supported in rear-port raw mode; SMLT full mesh is not supported
Rear Port Mode SPB:
• Only SPB L2VSNs are supported
• For routing, the L2VSN must be terminated on at least one VSP 9000, VSP 4000, or ERS 8800 that has either IP Shortcuts or L3VSN enabled; it is recommended to terminate the L2VSN on a second VSP 9000, VSP 4000, or ERS 8800 and use VRRP
• The IST must be made up of a minimum of two ports – i.e. use both blue rear ports, or use either the red rear ports (2 ports) or black rear ports (3 ports)
• Note that one of the red ports is used as an SPB loopback port, hence the number of available red ports is reduced by one
(The sketch below models these IST port-group rules.)
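These constraints are easy to mis-wire, so here is a small Python model of the IST port-group rules (illustrative only, not Avaya CLI; the group sizes and the loopback rule are taken from the bullets above):

```python
# Model of the IST port-group rules above. Group sizes follow the
# slide: blue = 2 ports, red = 3 ports, black = 3 ports, with one
# red port consumed as an SPB loopback in rear-port SPB mode.

REAR_GROUPS = {"blue": 2, "red": 3, "black": 3}
MIN_IST_PORTS = 2

def usable_ports(group: str, mode: str) -> int:
    ports = REAR_GROUPS[group]
    if mode == "spb" and group == "red":
        ports -= 1   # one red port becomes the SPB loopback
    return ports

def valid_ist_group(group: str, mode: str) -> bool:
    """An IST needs at least two ports from one rear-port group."""
    return usable_ports(group, mode) >= MIN_IST_PORTS

for mode in ("raw", "spb"):
    for group in REAR_GROUPS:
        print(mode, group, usable_ports(group, mode),
              valid_ist_group(group, mode))
# In SPB mode the red group yields exactly 2 usable ports,
# matching the "red rear ports (2 ports)" note above.
```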
Fabric Interconnect – Rear Port Mode SPB
Switch stacking and clustering with SPB is supported as of release 10.3.
Recommended ISIS Metrics
[Diagram: reference topology annotated with recommended IS-IS link metrics of 3, 5, 9, and 35 on the different link types; a path-selection sketch follows]
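SPB computes its shortest paths from IS-IS link metrics, so these recommended values are what steer East-West traffic onto the fabric links. Since the slide's topology diagram did not survive extraction, here is a minimal Dijkstra sketch over an assumed two-ToR topology – the metric values (3 and 35) come from the slide, but the graph itself is illustrative, not the slide's exact diagram:

```python
# Minimal Dijkstra sketch: SPB picks the path with the lowest total
# IS-IS metric. Metrics 3 and 35 are from the slide; the three-node
# topology is an assumption for illustration.

import heapq

def shortest_cost(graph, src, dst):
    """Lowest total IS-IS metric from src to dst."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        if cost > dist.get(node, float("inf")):
            continue
        for nbr, metric in graph[node].items():
            new = cost + metric
            if new < dist.get(nbr, float("inf")):
                dist[nbr] = new
                heapq.heappush(heap, (new, nbr))
    return float("inf")

# Two ToRs joined by a low-metric FI link (3) and, alternatively,
# reachable via the core over high-metric uplinks (35 + 35).
graph = {
    "tor1": {"tor2": 3, "core": 35},
    "tor2": {"tor1": 3, "core": 35},
    "core": {"tor1": 35, "tor2": 35},
}
print(shortest_cost(graph, "tor1", "tor2"))  # 3 -> stays East-West
```

With low metrics on the FI mesh and high metrics on the uplinks, server-to-server traffic stays inside the fabric instead of hair-pinning through the core.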
Fabric Interconnect Mesh Benefits
• FI mesh SPB scalability from 4 to 480 switches, providing a low-latency fabric
• Fabric is scalable up to 262.5 Tbps
• Extensible up to 15,360 10GE ports
• Designed to enable 24/32/48/64 10GE ports per rack
• Fabric Interconnect ports interconnect each VSP 7000:
  • A mix of different Fabric Interconnect cable lengths can be used
  • The 4 physical FI ports provide different bandwidth capabilities, represented by three different colours (see the rear-ports feature for more details); optimum performance is achieved when same-coloured ports are connected to one another
• Rear ports:
  • The rear ports provide up to 640 Gbps of connectivity
  • LACP and VLAN tagging are automatically enabled on the rear ports
Putting It All Together
• FI Mesh with SMLT and rear-port SPB; dToR with SMLT and SPB; Active/Active LAG to the servers
• Terminate the VSP 7000 L2VSN on any of the core nodes and enable IP Shortcuts for routing to the rest of the network
• Terminate the same L2VSN on a second node and enable VRRP Backup-Master for active/active resilience
• Add an L2VSN between Data Center #1 and Data Center #2 for vMotion or other shared services
(The sketch below models this redundant-termination recipe.)
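As a rough data model of the design – hypothetical node names and I-SID value, illustrative Python rather than Avaya CLI – the resilience property of dual L2VSN termination reads like this:

```python
# Sketch of the recipe above: one I-SID terminated on two core
# nodes running VRRP, so routing for the L2VSN survives the loss
# of either node. All names and values are made up for the example.

l2vsn = {
    "isid": 20100,
    "edges": ["vsp7000-rack1", "vsp7000-rack4"],      # dToR edge switches
    "terminations": [
        {"node": "vsp9000-core1", "role": "vrrp-master"},
        {"node": "vsp9000-core2", "role": "vrrp-backup-master"},
    ],
    "dci": ("datacenter-1", "datacenter-2"),          # stretched for vMotion
}

def routed_after_failure(service: dict, failed_node: str) -> bool:
    """The L2VSN stays routed while any termination node survives."""
    return any(t["node"] != failed_node for t in service["terminations"])

print(routed_after_failure(l2vsn, "vsp9000-core1"))   # True
```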
In Summary
Avaya has created a solution that leapfrogs the rest of the industry:
• Solves the East-West DC traffic problem
• vToR and dToR provide for massive scale
• Eliminates traditional network limitations (STP & VLANs)
• Allows FULL VM mobility without sacrificing performance
Additional benefits:
• Provides PCI compliance without the high cost
• Elevates Disaster Recovery to Business Continuance
Best of ATF Speaker and Team Award – #AvayaATF
Be sure to tweet your feedback on this presentation. Winners will be announced at the closing of the event.