Hyper-V Networking
Symon Perriman, Technical Evangelist
Jeff Woolsey, Principal Program Manager
Introduction to Hyper-V Jump Start
Agenda
• Virtual networks
• Software Defined Networking
• Hyper-V Extensible Switch
• Network teaming
• Guest Network Load Balancing
Virtual Switch Architecture
• Implemented as an NDIS 6.0 MUX driver
• Binds to network adapters as a protocol driver
• Can enumerate a single host interface
• Basic layer-2 switch functionality
• Dynamically "learns" port-to-MAC mappings
• Implements VLANs
• Does not implement spanning tree
• Does not implement layer 3
Configuring Virtual Networks
Configured from Virtual Switch Manager
• External networks: VMs can communicate with other computers on the network; only one external network per physical NIC
• Internal networks: VMs can communicate only with other VMs on the same host, and with the host computer
• Private networks: VMs can communicate only with other VMs on the same host
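As a rough illustration (the switch and adapter names are placeholders), each network type maps to a New-VMSwitch call:

  # External: bound to a physical NIC; -AllowManagementOS keeps a host vNIC
  New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
  # Internal: VMs plus the host, no physical NIC
  New-VMSwitch -Name "InternalSwitch" -SwitchType Internal
  # Private: VM-to-VM only
  New-VMSwitch -Name "PrivateSwitch" -SwitchType Private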
Virtual Network Adapters
Synthetic adapters
• Not based on a physical device
• Do not support PXE boot
• Significantly higher performance than emulated
• Drivers provided for supported operating systems
• Connect to the Windows Server 2012 extensible switch
Legacy (emulated) adapters
• Emulate a physical DEC 21140 chipset
• Support PXE boot
• Drivers exist for most operating systems
Supported guest operating systems include: Windows Server 2003 SP2, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows XP, Windows Vista, Windows 7, Windows 8, Linux (SLES 10/11, RHEL 5.x/6.x, CentOS 5.x/6.x, openSUSE), etc.
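A minimal sketch of adding each adapter type (VM and switch names are placeholders; legacy adapters generally require the VM to be off):

  # Synthetic adapter (the default) connected to an existing switch
  Add-VMNetworkAdapter -VMName "VM1" -SwitchName "ExternalSwitch"
  # Legacy (emulated) adapter for PXE boot or older guests
  Add-VMNetworkAdapter -VMName "VM1" -SwitchName "ExternalSwitch" -IsLegacy $true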
Network Considerations: Customers
• How do I ensure network multi-tenancy?
• IP address management is a pain
• What if VMs are competing for bandwidth?
• How do I fully leverage the network fabric?
• How do I integrate with my existing fabric?
• Can I meter network usage?
• Can I dedicate a NIC to a workload?
Hybrid Clouds
Windows Server 2012 is optimized for hybrid clouds hosting multi-tenant workloads. (Diagram: a data center hosting multiple VM workloads for Tenant 1 and Tenant 2.)
Reliability
Even when hardware fails, customers want continuous availability. (Diagram: teamed NICs serving both tenants' workloads in the data center.)
Predictability
Even when multiple VMs are competing for bandwidth, customers want predictability.
Security
In a multi-tenant environment, customers want security and isolation.
Multi-Tenant Network Requirements
• The tenant wants to easily move VMs to and from the cloud
• The hoster wants to place VMs anywhere in the data center
• Both want easy onboarding, flexibility, and isolation
Example: Woodgrove Bank (Blue) and Contoso Bank (Red) both bring the overlapping address space 10.1.0.0/16 into the cloud data center.
One Solution: PVLAN
• Isolation scenario: the hoster wants to isolate all VMs from each other while still allowing Internet connectivity (the #1 customer ask from hosters)
• Community scenario: the hoster wants a tenant's VMs to interact with each other but not with other tenants' VMs; requires a VLAN ID for each "community" (limited scalability: only about 4,000 usable VLAN IDs)
(Diagram: a Windows 8 host whose Hyper-V switch places Red1 10.1.1.11 and Red2 10.1.1.12 in community PVLAN 4,9, and Blue 10.1.1.21 and Green 10.1.1.31 in isolated PVLAN 4,7, all reaching the Internet via 10.1.1.1.)
Software Defined Networking (SDN)
An SDN solution can accomplish several things:
• Create virtual networks that run on top of the physical network
• Control traffic flow within the datacenter
• Create integrated policies that span the physical and virtual networks
• Configure, on a per-VM basis, security policies that limit the types of traffic (and destinations)
SDN: Network Virtualization
The analogy to machine virtualization:
• Hyper-V machine virtualization: run multiple virtual servers on a physical server; each VM has the illusion it is running on its own physical server
• Hyper-V Network Virtualization: run multiple virtual networks on a physical network; each virtual network has the illusion it is running on its own physical fabric
(Diagram: Woodgrove and Contoso VMs sharing one physical server; the Woodgrove and Contoso networks sharing one physical network.)
Software Defined Networking (SDN)
How network virtualization works:
• Two IP addresses for each virtual machine
• Generic Routing Encapsulation (GRE)
• IP address rewrite
• Policy management server
Problems solved:
• Removes VLAN constraints
• Eliminates hierarchical IP address assignment for virtual machines
• Enables per-VM security policies that limit the types of traffic (and destinations)
Generic Routing Encapsulation (GRE)
How GRE works:
• Defined by RFCs 2784 and 2890
• One customer address per virtual machine
• One provider address per host
• The tenant network ID is carried in the GRE header
• Packets are wrapped in a new outer MAC header for the physical network
Benefits:
• Lowers the burden on switches
• Allows traffic analysis, metering, and control
• Enables live migration across subnets
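As an illustrative sketch only (all addresses, IDs, and names below are made up), the Windows Server 2012 network virtualization cmdlets tie a VM's customer address to the host's provider address:

  # Tag the VM's adapter with its tenant's virtual subnet ID
  Set-VMNetworkAdapter -VMName "RedVM1" -VirtualSubnetId 5001
  # Publish a lookup record mapping customer address to provider address
  New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.11" -ProviderAddress "192.168.4.22" -VirtualSubnetID 5001 -MACAddress "00155D010203" -Rule "TranslationMethodEncap"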
Extensibility
Customers want specialized functionality with lots of choice for firewalls, monitoring, and physical fabric integration.
Hyper-V Extensible Switch
The Hyper-V Extensible Switch allows deeper integration with customers' existing network infrastructure, monitoring, and security tools:
• PVLANs
• DHCP Guard protection
• ARP/ND poisoning protection
• Virtual port ACLs
• Trunk mode to virtual machines
• Monitoring and port mirroring
• Windows PowerShell and WMI management
Hyper-V Extensible Switch
• Forwarding extensions direct traffic, defining the destination(s) of each packet; they can also capture and filter traffic. Examples: Cisco Nexus 1000V and UCS, NEC ProgrammableFlow's vPFS (OpenFlow)
• Windows Filtering Platform (WFP) extensions can inspect, drop, modify, and insert packets using WFP APIs; Windows antivirus and firewall software uses WFP for traffic filtering. Example: Virtual Firewall by 5NINE Software
• Capture extensions can inspect traffic and generate new traffic for reporting purposes; they do not modify existing extensible switch traffic. Example: sFlow by InMon
(Diagram: in the root partition, the extensible switch's extension stack sits between the VM NICs and the host/physical NICs, layering capture extensions, WFP callouts via the Base Filtering Engine, and forwarding extensions between the extension protocol and extension miniport.)
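A quick way to see and enable installed extensions (the switch name is a placeholder; "Microsoft NDIS Capture" ships in-box):

  # List extensions installed on a switch
  Get-VMSwitchExtension -VMSwitchName "ExternalSwitch"
  # Enable the in-box NDIS capture extension
  Enable-VMSwitchExtension -VMSwitchName "ExternalSwitch" -Name "Microsoft NDIS Capture"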
Feature Rich Networking in the Box
• Open, extensible virtual switch: Nexus 1000V support, OpenFlow support, network introspection, much more
• Advanced networking: ACLs, PVLANs, much more
• Windows NIC Teaming
• Network QoS: per-vNIC bandwidth reservations and limits
• Network metering
• DVMQ
• SR-IOV network support: reduces latency and CPU utilization, supports live migration
Single-Root I/O Virtualization (SR-IOV)
• Reduces latency of the network path
• Reduces CPU utilization for processing network traffic
• Increases throughput
• Direct device assignment to virtual machines without compromising flexibility
• Supports live migration
(Diagram: without SR-IOV, network I/O passes from the virtual NIC over VMBus to the Hyper-V switch in the root partition for routing, VLAN filtering, and data copy; with SR-IOV, the VM's virtual function talks directly to the SR-IOV physical NIC.)
SR-IOV Enabling & Live Migration
Turn on IOV (a VM NIC property):
• A virtual function is "assigned" to the VM, assuming resources are available
• A team of the VF and a software NIC is automatically created inside the VM
• Traffic flows through the VF; the software path is not used
Live migration:
• Break the team and remove the VF from the VM
• Migrate as normal; traffic falls back to the software switch (IOV mode) path
Post migration:
• Reassign a virtual function, assuming resources are available on the target
The VM keeps connectivity even if the target switch is not in IOV mode, no IOV physical NIC is present, or the NIC vendor or firmware differs.
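A rough sketch of the host and VM sides (names and values are placeholders); an IovWeight above 0 requests a virtual function:

  # Host: create an IOV-capable switch on an SR-IOV physical NIC
  New-VMSwitch -Name "IovSwitch" -NetAdapterName "Ethernet 2" -EnableIov $true
  # VM NIC property: request a VF (0 disables, 1-100 enables)
  Set-VMNetworkAdapter -VMName "VM1" -IovWeight 50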
DVMQ vs. SR-IOV Considerations
DVMQ pros:
• Improves VM performance
• Provides receive-side scaling benefits by spreading network load across multiple logical processors
• Can use the Hyper-V Extensible Switch
DVMQ cons:
• If a workload needs more than 10 GbE, SR-IOV is likely the better choice
SR-IOV pros:
• Great performance
• Great for low-latency workloads
SR-IOV cons:
• Bypasses the virtual switch
Cloud Admins Want Scale, Customers Want Performance: DVMQ, IPsec Task Offload, SR-IOV
• IPsec Task Offload: Microsoft expects deployment of Internet Protocol security (IPsec) to increase significantly in the coming years. The large demands the IPsec integrity and encryption algorithms place on the CPU can reduce the performance of network connections. IPsec Task Offload is a technology built into the Windows operating system that moves this workload from the computer's main CPU to a dedicated processor on the network adapter.
• SR-IOV is a specification that allows a PCIe device to appear to be multiple separate physical PCIe devices. It was created and is maintained by the PCI SIG, with the idea that a standard specification will help promote interoperability. SR-IOV introduces physical functions (PFs), which are full-featured PCIe functions, and virtual functions (VFs), which are "lightweight" functions that lack configuration resources.
• Dynamic Virtual Machine Queue (DVMQ) uses hardware packet filtering to deliver packet data from an external virtual machine network directly to virtual machines, which reduces the overhead of routing packets and copying them from the management operating system to the virtual machine.
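These features surface as per-VM-NIC settings; a hedged sketch (the VM name and values are placeholders):

  # Cap the IPsec security associations offloaded to the physical NIC
  Set-VMNetworkAdapter -VMName "VM1" -IPsecOffloadMaximumSecurityAssociation 512
  # Give the adapter a VMQ affinity weight (0 disables VMQ for this NIC)
  Set-VMNetworkAdapter -VMName "VM1" -VmqWeight 100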
Advanced Network Security: DHCP Guard, Router Guard, Monitor Port
• DHCP Guard is a security feature that drops DHCP server messages from unauthorized virtual machines pretending to be DHCP servers
• Router Guard is a security feature that drops Router Advertisement and Redirection messages from unauthorized virtual machines pretending to be routers
• Monitor mode duplicates all egress and ingress traffic of one or more switch ports (being monitored) to another switch port (performing the monitoring)
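Both guards are per-VM-NIC switches; a minimal sketch (the VM name is a placeholder):

  # Drop rogue DHCP offers and router advertisements from this VM
  Set-VMNetworkAdapter -VMName "TenantVM" -DhcpGuard On -RouterGuard On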
Manage to a Service Level Agreement: Network Bandwidth & QoS
• Bandwidth management lets you easily reserve a minimum or set a maximum, providing the QoS controls needed to manage to a service level agreement
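A sketch of both ends of the range (the weight mode, names, and numbers are illustrative):

  # Create a switch that shares bandwidth by relative weight
  New-VMSwitch -Name "QosSwitch" -NetAdapterName "Ethernet" -MinimumBandwidthMode Weight
  # Reserve a relative share for one VM; hard-cap another
  Set-VMNetworkAdapter -VMName "GoldVM" -MinimumBandwidthWeight 50
  Set-VMNetworkAdapter -VMName "BronzeVM" -MaximumBandwidth 100000000   # bits per second (~100 Mbps)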
Port Mirroring
• Provided by the Hyper-V Extensible Switch
• Administrators can run security and diagnostics applications in virtual machines that monitor other virtual machines' network traffic
• Port mirroring also supports live migration of extension configurations
Set-VMNetworkAdapter -VMName MyVM -PortMirroring Source
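The mirror needs both ends; as a sketch (the monitoring VM name is a placeholder), pair the source above with a destination:

  # Send mirrored traffic to the monitoring VM's adapter
  Set-VMNetworkAdapter -VMName "MonitorVM" -PortMirroring Destination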
Windows Server 2012 Network Teaming
Failover teams:
• Typically two interfaces
• Typically connected to different switches
• Provide redundancy against NIC, cable, or switch failure
Aggregation/load-balancing teams:
• Two or more interfaces
• Divide network traffic between active interfaces by MAC/IP address or protocol
• Provide redundancy against NIC or cable failure
Both are Microsoft supported.
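Teams are built with the in-box NetLbfo cmdlets; a sketch assuming two physical NICs named "NIC1" and "NIC2":

  # Switch-independent team that spreads Hyper-V switch ports across members
  New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort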
Port ACL
• A rule that you can apply to a Hyper-V switch port
• Can allow or deny packets
• Inbound or outbound control
• Each ACL has three elements: a local or remote address, a direction, and an action
Add-VMNetworkAdapterAcl
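For example, a minimal sketch (the VM name and prefix are placeholders) denying a VM all traffic to one subnet:

  # Block traffic between the VM and 10.0.0.0/8 in both directions
  Add-VMNetworkAdapterAcl -VMName "VM1" -RemoteIPAddress "10.0.0.0/8" -Direction Both -Action Deny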
PVLANs
• PVLANs address some of the scalability issues of VLANs
• Set as a switch port property
• A PVLAN has two VLAN IDs: a primary VLAN ID and a secondary VLAN ID
• A PVLAN port may be in one of three modes: isolated, promiscuous, or community
Set-VMNetworkAdapterVlan
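Matching the earlier PVLAN diagram as a sketch (VM names and IDs are placeholders):

  # Red VMs share a community; Blue is isolated from all but promiscuous ports
  Set-VMNetworkAdapterVlan -VMName "Red1" -Community -PrimaryVlanId 4 -SecondaryVlanId 9
  Set-VMNetworkAdapterVlan -VMName "Blue" -Isolated -PrimaryVlanId 4 -SecondaryVlanId 7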
Trunk Mode
• The Hyper-V virtual switch supports VLAN trunk mode
• Gives a virtual machine providing network services the ability to see traffic from multiple VLANs
• The switch port receives traffic from all VLANs that are in an allowed VLAN list
Set-VMNetworkAdapterVlan
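A sketch of trunking a service VM's port (the VM name and VLAN ranges are illustrative):

  # Pass VLANs 1-100 to the VM; untagged traffic maps to VLAN 10
  Set-VMNetworkAdapterVlan -VMName "RouterVM" -Trunk -AllowedVlanIdList "1-100" -NativeVlanId 10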
Networking Performance
The Hyper-V Extensible Switch takes advantage of hardware innovation to drive the highest levels of networking performance within virtual machines:
• Dynamic VMQ: dynamically span multiple CPUs when processing virtual machine network traffic
• IPsec Task Offload: offload IPsec processing from within the virtual machine to the physical network adapter, enhancing performance
• SR-IOV support: map a virtual function of an SR-IOV-capable physical network adapter directly to a virtual machine
VMs Using Network Load Balancing
• To configure VMs in a Network Load Balancing (NLB) cluster, enable MAC address spoofing
• This ensures the virtual switch will not learn MAC addresses, a requirement for NLB to function correctly
• VMQ does not work with NLB: NLB changes the virtual MAC addresses, which prevents Hyper-V from dispatching packets directly to the guest's queue
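A sketch for each NLB cluster member (the VM name is a placeholder):

  # Allow the shared NLB MAC and disable VMQ for this adapter
  Set-VMNetworkAdapter -VMName "NlbNode1" -MacAddressSpoofing On -VmqWeight 0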
Windows Server 2012 Networking: It's All There
Feature rich, extensible, in the box, no compromises
Takeaways
• Hyper-V is fully integrated into the Windows network stack
• Use the synthetic network adapter
• Use VLAN tagging and firewall rules for security
• Windows Server 2012 includes in-box NIC Teaming for load balancing and failover
• VMQ provides great performance for most workloads
• Use SR-IOV for low-latency, high-throughput workloads