
Presentation Transcript


  1. Xsigo Systems Faster, Simpler, More Cost-Effective Server I/O David Lock dlock@xsigo.com

  2. About Xsigo • Founded 2004 • Offices: San Jose, CA (headquarters); Munich, New York, London, Tokyo • Funding: Kleiner Perkins, Greylock Partners, Khosla Ventures, Juniper Networks • Board members include: Ray Lane (former President, Oracle), Mark Leslie (co-founder, VERITAS), Vinod Khosla (co-founder, Sun), Ashok Krishnamurthi (founding team at Juniper)

  3. Awards and Press

  4. What Are Your Pain Points? • Consolidation • Data Center Agility • Power, Heat, and Cooling • Disaster Recovery • Cabling • Virtualization • Backup

  5. Evolution of the Data Center (diagram: three stages of scale: (1) one OS per server, each with its own Ethernet network and Fibre Channel SAN connections; (2) virtual machines on a hypervisor sharing the server's CPU, memory, and I/O; (3) a consolidated 10-40Gb Ethernet fabric)

  6. The Data Center Ecosystem (diagram: applications, servers, storage, networking, and virtual I/O)

  7. Xsigo Reduces Complexity (diagram: without Xsigo vs. with Xsigo) • 70% fewer cables and cards • Fewer edge switches

  8. Xsigo at a Glance • Remote management: central management of configurations across all servers. • Predictable performance: QoS to specific vNICs or virtual machines. • Zero downtime: deploy connectivity to live servers. • Bandwidth where you need it: 20Gb/s networks for backup, VMotion, etc. • No rip and replace: standards-based; works with the gear you have.

  9. Fast & Scalable • 10Gb/s link to each server • QoS features for bandwidth allocation • Scalable to 120 servers • Up to 150 Gb/s to LAN and SAN • Configurable I/O to networks (diagram: hundreds of servers connected through leaf switches)
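
The oversubscription implied by those figures is easy to check. The short Python sketch below is purely illustrative (not from Xsigo) and uses only the numbers quoted on this slide; it shows why per-vNIC QoS matters once the fabric is fully populated.

```python
# Back-of-the-envelope check using only the figures quoted on the slide.
# The 8:1 ratio is derived here, not a number Xsigo states.
servers = 120            # maximum servers per I/O Director (with expansion)
per_server_gbps = 10     # 10Gb/s link to each server
uplink_gbps = 150        # up to 150 Gb/s total to LAN and SAN

edge_capacity = servers * per_server_gbps        # 1200 Gb/s of server-facing bandwidth
oversubscription = edge_capacity / uplink_gbps   # 8.0, i.e. roughly 8:1 at full scale

print(f"Edge capacity: {edge_capacity} Gb/s, oversubscription ~{oversubscription:.0f}:1")
```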

  10. Open • Works with industry-standard x86/x64 servers, including blade servers • Supports Windows 2003, Linux, and VMware ESX today • Virtual resources appear as physical NICs and HBAs to the OS • Server bus requirement: PCI Express

  11. Problems which Xsigo Addresses

  12. Challenge: Constrained Data Center • A server's use is limited by its connectivity • Re-purposing is time consuming: it requires re-configuration of cards, cables, and networks • Cannot respond quickly to changing needs (diagram: web, app, and DB tiers each hard-wired to their own network and SAN connections)

  13. Challenge: Agility • Time: moves, adds, and changes take weeks. • Expense: multiple teams are involved in each change. • Risk: re-cabling introduces the risk of mechanical failure and human error. • The result: high costs and a response-time challenge.

  14. More Agility • Without Xsigo: configure the new server, reassign the LAN connection, reassign the SAN connection, bring up the app (weeks). • With Xsigo: move the I/O, restart (hours). • Accelerates response time: 100X faster management.

  15. Why I/O Drives Cost • Network complexity: interdependencies abound. • Example: an application migration, caused by a server upgrade, maintenance, or recovering an under-used asset. • Moving an app from A to B requires 5 teams: security, LAN, server, backup, and SAN (diagram: core, SAN, tape, DMZ, and backup networks are all touched).

  16. How Xsigo Helps • 1 team needed instead of 5: the server manager moves the app from A to B and moves the I/O transparently; no config changes are needed on the core, SAN, tape, DMZ, or backup networks. • Benefits: lower management expense, faster deployment, less power, space, and cooling; the server manager works more efficiently. • 80% less operational expense.

  17. Connectivity: After • Consolidate I/O: storage and networking share one cable; 2 cables per server. • All servers can be configured with access to any network (Mgmt, Core, Dev, VMotion, Test, SAN, Tape, DMZ, Backup). • Faster and less costly to deploy VMware across hundreds of servers. • Save space with 1U servers rather than 4U.

  18. How It Works • Create virtual connectivity • Create an I/O profile template (e.g., “Web Server”) • Migrate connectivity between servers • The full identity of the I/O is retained
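
A minimal sketch of the I/O-profile idea, in Python. This is not Xsigo's management API; the class and field names are invented purely to illustrate the point that the MAC and WWN identities travel with the profile when it is migrated to another server.

```python
# Conceptual model of an I/O profile; names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VNic:
    name: str
    mac: str          # identity that travels with the profile

@dataclass
class VHba:
    name: str
    wwn: str          # identity that travels with the profile

@dataclass
class IOProfile:
    template: str                 # e.g. "Web Server"
    vnics: List[VNic] = field(default_factory=list)
    vhbas: List[VHba] = field(default_factory=list)
    server: Optional[str] = None  # server the profile is currently bound to

    def migrate(self, new_server: str) -> None:
        """Re-bind the profile to another server. MACs and WWNs are unchanged,
        so the LAN and SAN still see the same I/O identity."""
        self.server = new_server

# Usage: create a "Web Server" profile, bind it, then move it to another host.
web = IOProfile("Web Server",
                vnics=[VNic("eth0", "00:13:97:aa:00:01")],
                vhbas=[VHba("fc0", "50:01:39:70:00:aa:00:01")],
                server="server-01")
web.migrate("server-02")   # connectivity follows the profile, not the hardware
```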

  19. How Xsigo Virtualizes I/O • The Xsigo fabric extends the I/O bus from the servers to the Xsigo I/O Director; apart from that, the network remains the same. • The fabric is fully self-contained, with management tools built in to the Xsigo system (diagram: self-contained fabric connecting to LAN routers and SAN directors).

  20. Xsigo Network Benefits • Simplifies edge infrastructure: extends the I/O bus to the I/O Director and consolidates edge networking. • Not replacing Ethernet as the network fabric: no change to core switches, VLANs, firewalls, or the SAN. • Simple, consistent I/O management: a single point of management for configuration settings. (Diagram: before, complex connectivity through edge switches to the LAN routers/switches and SAN directors; after, simple connectivity through the Xsigo I/O Director.)

  21. Lower Cost • Saves capital cost 4 ways: 70% fewer I/O cards and cables; enables more use of 1U servers; no edge switches; more I/O on blades at less cost. • Before: 256 VMs, 4 racks of 4U servers, $700K capital cost. • After: 256 VMs, 1 rack of 1U servers, $470K capital cost. • 33% to 50% capital cost savings.
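
A quick arithmetic check of the savings figure, using only the two costs quoted on the slide:

```python
# Capital-cost comparison from the slide; only the percentage is computed here.
before = 700_000   # 256 VMs, 4 racks of 4U servers, without Xsigo
after = 470_000    # 256 VMs, 1 rack of 1U servers, with Xsigo

savings = (before - after) / before
print(f"Capital cost savings: {savings:.0%}")   # ~33%, the low end of the quoted 33-50% range
```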

  22. Xsigo Hardware

  23. VP780 I/O Director • Hardware-based architecture: fully non-blocking fabric, 780 Gb/s aggregate bandwidth, custom silicon, line-rate throughput. • 24 server ports; expansion switch available for connection to hundreds of servers. • 4U height; 15 I/O module slots (4 x 1Gb Ethernet, 1 x 10Gb Ethernet, or 2 x 4Gb Fibre Channel per module). • System control processor, redundant hot-swappable fans, redundant hot-swappable power supplies.

  24. VP780 Options • I/O modules: 4-port Gigabit Ethernet, 10-port Gigabit Ethernet, 10 Gigabit Ethernet, dual 4Gb Fibre Channel. • Expansion: IS24 Expansion Switch.

  25. Scalability • Servers connect to a Xsigo Expansion Switch, which uplinks to the VP780 I/O Director (diagram). • Scalable to hundreds of servers.

  26. Xsigo Topology • Redundant pair of Xsigo VP780s connects the servers to the LAN and SAN (diagram). • Redundancy and scalability.

  27. The Xsigo Difference (diagram: before vs. after)

  28. Relevance to VMware

  29. Why I/O Matters for Virtualization • Virtualization drives I/O demand: 7 or more I/O ports per server. • 75% of users have five or more Ethernet ports per server. • 85% of users have two or more SAN ports per server. • 58% have had to add connectivity to a server specifically for VMs. • 65% consider cable reduction a priority. • Virtualization drives significantly more I/O.

  30. Xsigo Benefits • Guarantee bandwidth to specific VMs; ensure application performance. • Isolate traffic to each VM or any logical grouping; no need for “open zoning.” • Scale I/O as needs change. • Integrated, separate 20Gb network for VMotion. • Consolidate infrastructure, reduce costs. (Diagram: App1-App3 on vsw1-vsw3 connecting through virtual I/O to the LAN and SAN.)

  31. Predictable Application Performance • Problem: ensure the performance of critical applications. • Xsigo solution: integrated QoS. Provision critical applications with dedicated I/O; set QoS by virtual machine; QoS remains with the VM even through VMotion.
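
To make the QoS idea concrete, here is a small allocator sketch in Python. The vNIC names and guarantee values are hypothetical, and this is not how the I/O Director implements QoS; the point is only that each active vNIC keeps its configured minimum while leftover bandwidth is shared.

```python
# Illustrative per-vNIC bandwidth guarantees on a shared 10Gb/s link.
LINK_GBPS = 10.0

# Guaranteed minimums per vNIC (hypothetical values); the remainder is shared.
guarantees = {"oracle-vm": 4.0, "web-vm": 2.0, "vmotion": 2.0}

def allocate(active: list[str]) -> dict[str, float]:
    """Give each active vNIC its guarantee, then split the leftover evenly."""
    reserved = sum(guarantees[v] for v in active)
    leftover = max(LINK_GBPS - reserved, 0.0)
    return {v: guarantees[v] + leftover / len(active) for v in active}

print(allocate(["oracle-vm", "web-vm"]))             # oracle-vm keeps at least 4 Gb/s
print(allocate(["oracle-vm", "web-vm", "vmotion"]))  # guarantees still honored
```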

  32. Scalable I/O • VMware recommends dedicated I/O for: VMs running critical applications, VMotion, and management. • Without Xsigo, dedicated I/O resources mean many cables and hours or days to modify. • With Xsigo, the same I/O resources use two cables and change in minutes. • Xsigo I/O scales to meet changing requirements: add I/O resources whenever needed. (Diagram: Mgmt, Core, Dev, VMotion, Test, SAN, Tape, DMZ, and Backup networks in both cases.)

  33. Reduced Hardware Expense • Problem: large 4U servers are required to accommodate the I/O cards, which means more power, space, and cost. • Xsigo solution: use virtual NICs and HBAs; get the connectivity you need in a 1U server. (Diagram: the same ESX connections, NIC Host 1-4, HBA A and B, VMotion, Console, and RIL 0, in a 4U server with dedicated cards vs. a 1U server with virtual I/O.)
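
The consolidation can be sketched with the connection names shown on the slide. The split across two links mirrors the 1U drawing; everything else in this Python snippet is illustrative rather than anything Xsigo specifies.

```python
# The same set of ESX connections, carried as virtual NICs/HBAs over two
# physical links instead of one adapter port each.
connections = ["NIC Host 1", "NIC Host 2", "NIC Host 3", "NIC Host 4",
               "HBA A", "HBA B", "VMotion", "Console", "RIL 0"]

# Without Xsigo: roughly one physical port per connection -> a 4U chassis.
physical_ports_needed = len(connections)   # 9

# With Xsigo: all of them become virtual devices over two redundant cables.
link_a = connections[0::2]   # alternating assignment, as drawn on the slide
link_b = connections[1::2]

print(physical_ports_needed, "physical ports vs. 2 cables")
print("Link A:", link_a)     # NIC Host 1, NIC Host 3, HBA A, VMotion, RIL 0
print("Link B:", link_b)     # NIC Host 2, NIC Host 4, HBA B, Console
```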

  34. Fast & Secure VMotion • Problem: VMotion traffic contains all the data for the VMs being moved; moving unencrypted VMotion data over an open network is a security risk, and rapid execution requires network bandwidth. • Xsigo solution: use the Xsigo high-speed fabric to handle VMotion traffic. Traffic travels over a fast 10 or 20Gb link and does not hit the LAN.

  35. VM Mobility • Xsigo enables VM mobility without the security risk of “open zoning.” • Without Xsigo: both servers have access to the storage at all times, a security exposure. • With Xsigo: only the server running the app has access to the storage. (Diagram: two servers running App1-App3, connected to the LAN and SAN via physical NICs/HBAs vs. vHBAs.)
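
A toy comparison of the two zoning models, in Python. The host and LUN names are invented; the sketch only contrasts which hosts can reach the storage in each model.

```python
# Without Xsigo ("open zoning"): every host that might ever run the VM is
# zoned to the LUN, so both servers can reach the storage at all times.
open_zone = {"lun-app1": {"server-1", "server-2"}}

# With Xsigo: the vHBA (and its WWN) moves with the VM's I/O profile, so only
# the server currently running the app appears in the zone.
def xsigo_zone(current_host: str) -> dict[str, set[str]]:
    return {"lun-app1": {current_host}}

print(sorted(open_zone["lun-app1"]))               # ['server-1', 'server-2'] -> wider exposure
print(sorted(xsigo_zone("server-2")["lun-app1"]))  # ['server-2'] -> only the active host
```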

  36. Xsigo Integration with Virtual Center (screenshot: available vNICs and vHBAs are visible within Virtual Center)

  37. Examples from W&W Testing

  38. Tested • Redundant configuration with 2 ESX Server 3.5u1 hosts (2 ports per server, 2 Xsigo chassis, 2 LAN switches, 2 paths to the SAN) • Binding/unbinding of profiles to ESX servers, with several vNICs and vHBAs each, without reboot • Combination of Xsigo vNICs with vSwitches, ESX without vmnic0, and added VLANs • ESX multipathing with vHBAs • QoS limits with vNICs and with ACLs • Seamless failover, even during VMotion • Cable pulls on the LAN, SAN, and chassis power

  39. Summary
