
Network Virtualization Overlay Control Protocol Requirements



1. Network Virtualization Overlay Control Protocol Requirements
draft-kreeger-nvo3-overlay-cp-00
Lawrence Kreeger, Dinesh Dutt, Thomas Narten, David Black, Murari Sridharan

2. Purpose
Outline the high-level requirements for the control protocols needed for overlay virtual networks in highly virtualized data centers.

3. Basic Reference Diagram
[Diagram: Tenant End Systems (TESs) attach to Network Virtualization Edges (NVEs); each NVE carries tenant payloads across the Underlying Network (UN). Packets carry the TES "inner" addresses and a VN Identifier inside, and the NVEs' UN "outer" addresses outside.]
• Virtual Networks (VNs) – aka Overlay Networks
• Network Virtualization Edge (NVE) – (OBP in draft)
• Tenant End System (TES) – (End Station in draft)

4. Possible NVE / TES Scenarios
[Diagram: several placements of the NVE relative to the TESs. The NVE may be embedded in a hypervisor's virtual switch (serving VMs), in a Network Services Appliance (serving services), or in an access switch. In the access-switch case, hypervisors and Network Services Appliances connect to the NVE over VLAN trunks whose tags are only locally significant, and physical servers connect to the access-switch NVE directly.]

5. Dynamic State Information Needed by an NVE
• Tenant End System (TES) inner address (scoped by Virtual Network (VN)) to the outer (Underlying Network (UN)) address of the other Network Virtualization Edge (NVE) used to reach that TES inner address.
• For each VN active on an NVE, a list of UN multicast and/or unicast addresses used to send VN broadcast/multicast packets to the other NVEs forwarding to TESs in that VN.
• For a given VN, the Virtual Network ID (VN-ID) to use in packets sent across the UN.
• If the TES is not within the same device as the NVE, the NVE needs to know the physical port used to reach a given inner address.
• If multiple VNs are reachable over the same physical port, some kind of tag (e.g. a VLAN tag) is needed to keep the VN traffic separated on the wire.
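As an illustration only (not from the draft), the state listed above could be modeled roughly as follows in Python; all class and field names here are invented for this sketch.

# Hypothetical sketch of the dynamic state an NVE holds, mirroring the bullets
# above; names and types are illustrative only, not defined by the draft.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class VNState:
    vn_id: int                        # VN-ID used in packets sent across the UN
    bcast_addresses: List[str]        # UN multicast/unicast addresses for VN broadcast/multicast
    local_port: Optional[str] = None  # physical port toward local TESs in this VN
    local_tag: Optional[int] = None   # e.g. locally significant VLAN tag on that port

@dataclass
class NVEState:
    # (VN, TES inner address) -> outer UN address of the NVE that reaches it
    inner_to_outer: Dict[Tuple[str, str], str] = field(default_factory=dict)
    # (VN, TES inner address) -> local physical port (TES not co-located with the NVE)
    inner_to_port: Dict[Tuple[str, str], str] = field(default_factory=dict)
    # VN name -> per-VN state for VNs active on this NVE
    vns: Dict[str, VNState] = field(default_factory=dict)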

6. Two Main Categories of Control Planes
• Category 1: for an NVE to obtain dynamic state for communicating with a TES located on a different physical device (e.g. a hypervisor or Network Services Appliance).
• Category 2: for an NVE to obtain dynamic state for communicating across the Underlying Network with other NVEs.

7. Control Plane Category Reference Diagram
[Diagram: the Category 1 control plane runs between the NVE (here in the access switch) and the hypervisor's virtual switch or the Network Services Appliance, over the locally significant VLAN trunk. The Category 2 control plane runs between NVEs across the Underlying Network, either through a central entity or peer to peer (both shown with question marks as open options).]

8. Category 2 CP Architecture Possibilities
• Central entity is populated by the DC orchestration system
• Central entity is populated by push from the NVEs
• Push to the NVE from the central entity
• Pull by the NVE from the central entity
• Peer-to-peer exchange between NVEs with no central entity
• The central entity could be a monolithic system or a distributed system
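To make the push/pull options above concrete, the scenario sketches that follow assume a central entity exposing roughly this interface toward NVEs; the method names (lookup_vn, register, query) are invented for illustration and are not from the draft.

# Hypothetical interface of a central entity as seen by an NVE (illustrative only).
from typing import Protocol, Tuple

class CentralEntity(Protocol):
    def lookup_vn(self, vn_name: str) -> Tuple[int, str]:
        """Pull by the NVE: return (VN-ID, UN multicast group) for a VN."""
        ...

    def register(self, vn_name: str, inner_addr: str, outer_addr: str) -> None:
        """Push from the NVE: inner_addr in this VN is reachable at outer_addr."""
        ...

    def query(self, vn_name: str, inner_addr: str) -> str:
        """Pull by the NVE: return the outer UN address that reaches inner_addr."""
        ...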

9. Possible Example CP Scenario
• This example is not part of the requirements draft and is shown for illustrative purposes only.
• Assumes a central entity with push/pull from the NVE and a multicast-enabled IP underlay.
[Diagram: three access switches act as NVEs: A1 (NVE IP = IP-A1) in front of Hypervisor H1, whose virtual switch connects on Port 10; A2 (IP-A2) in front of Hypervisor H2 on Port 20; A3 (IP-A3) in front of Hypervisor H3 on Port 30. All NVE state tables start out empty.]
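Continuing the illustration and reusing the hypothetical NVEState from the earlier sketch, the starting point of the scenario could be set up like this; only the NVE IPs and port numbers come from the slide.

# Three access-switch NVEs with initially empty state tables (illustrative only).
nve_a1, nve_a2, nve_a3 = NVEState(), NVEState(), NVEState()
underlay_ip = {"A1": "IP-A1", "A2": "IP-A2", "A3": "IP-A3"}            # placeholder outer addresses
hypervisor_port = {"A1": "Port 10", "A2": "Port 20", "A3": "Port 30"}  # ports toward H1, H2, H3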

10. VM 1 comes up on Hypervisor H1, connected to VN "Red"
• H1's Virtual Switch signals to A1 that it needs attachment to VN "Red" (Attach: VN = "Red").
• A1 asks the central entity for the VN-ID and multicast group for VN = "Red" and learns VN-ID = 10000, Mcast = 224.1.2.3.
• A1 sends an IGMP Join for 224.1.2.3; Port 10 uses locally significant VLAN Tag = 100 for VN "Red".
• Resulting NVE state on A1: VN "Red": VN-ID = 10000, Mcast Group = 224.1.2.3, Port 10, Tag = 100. A2 and A3 still have no state for VN "Red".
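A possible rendering of this attach step at NVE A1, under the same illustrative assumptions (central entity with pull, multicast-enabled IP underlay); handle_vn_attach, igmp_join, and the CentralEntity methods are hypothetical and build on the earlier sketches.

# Hypothetical handling of "Attach: VN = Red" at an NVE (illustrative only).
def handle_vn_attach(nve: "NVEState", central_entity: "CentralEntity",
                     vn_name: str, port: str, local_tag: int, igmp_join) -> "VNState":
    """Pull the VN-ID and multicast group for the VN, join the group on the UN,
    and record the per-VN state (local port and locally significant tag)."""
    if vn_name in nve.vns:
        return nve.vns[vn_name]                              # already attached to this VN
    vn_id, mcast_group = central_entity.lookup_vn(vn_name)   # e.g. (10000, "224.1.2.3")
    igmp_join(mcast_group)                                   # receive VN broadcast/multicast traffic
    state = VNState(vn_id=vn_id, bcast_addresses=[mcast_group],
                    local_port=port, local_tag=local_tag)
    nve.vns[vn_name] = state
    return state

In this scenario, A1 would invoke something like handle_vn_attach(nve_a1, central_entity, "Red", port="Port 10", local_tag=100, igmp_join=...).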

11. VM 1 comes up on Hypervisor H1, connected to VN "Red"
• H1's Virtual Switch signals to A1 that MAC M1 is connected to VN "Red" (Attach: MAC = M1 in VN "Red", on Port 10).
• A1 registers with the central entity: MAC = M1 in VN "Red" reachable at IP-A1.
• A1's NVE state adds: MAC = M1 in "Red" on Port 10.
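The MAC registration step could look roughly like this, again reusing the hypothetical state and central-entity interface from the earlier sketches.

# Hypothetical handling of "Attach: MAC = M1 in VN Red" at an NVE (illustrative only).
def handle_mac_attach(nve: "NVEState", central_entity: "CentralEntity",
                      vn_name: str, mac: str, port: str, my_underlay_ip: str) -> None:
    """Record which local port reaches the TES MAC and register the
    inner-to-outer mapping (MAC reachable at this NVE) with the central entity."""
    nve.inner_to_port[(vn_name, mac)] = port               # e.g. M1 in "Red" on Port 10
    central_entity.register(vn_name, mac, my_underlay_ip)  # "M1 in Red reachable at IP-A1"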

12. VM 2 comes up on Hypervisor H1, connected to VN "Red"
• H1's Virtual Switch signals to A1 that MAC M2 is connected to VN "Red" (Attach: MAC = M2 in VN "Red", on Port 10).
• A1 registers with the central entity: MAC = M2 in VN "Red" reachable at IP-A1.
• A1's NVE state is now: VN "Red": VN-ID = 10000, Mcast Group = 224.1.2.3, Port 10, Tag = 100; MAC = M1 in "Red" on Port 10; MAC = M2 in "Red" on Port 10.

13. VM 3 comes up on Hypervisor H2, connected to VN "Red"
• H2's Virtual Switch signals to A2 that it needs attachment to VN "Red" (Attach: VN = "Red").
• A2 asks the central entity for the VN-ID and multicast group for VN = "Red" and learns VN-ID = 10000, Mcast = 224.1.2.3.
• A2 sends an IGMP Join for 224.1.2.3; Port 20 uses locally significant VLAN Tag = 200 for VN "Red".
• Resulting NVE state on A2: VN "Red": VN-ID = 10000, Mcast Group = 224.1.2.3, Port 20, Tag = 200. A1's state is unchanged.

14. VM 3 comes up on Hypervisor H2, connected to VN "Red"
• H2's Virtual Switch signals to A2 that MAC M3 is connected to VN "Red" (Attach: MAC = M3 in VN "Red", on Port 20).
• A2 registers with the central entity: MAC = M3 in VN "Red" reachable at IP-A2.
• A2's NVE state adds: MAC = M3 in "Red" on Port 20.

15. VM 3 ARPs for VM 1
• VM 3's ARP broadcast reaches NVE A2 tagged with VLAN 200 on Port 20.
• NVE A2 encapsulates the ARP with VN-ID 10000 and sends it to group 224.1.2.3; the Underlying Network multicasts it to all NVEs interested in VN "Red".
• NVE A1 receives it, decapsulates, tags the ARP with VLAN 100, and delivers it on Port 10 toward VM 1.
• NVE A1 queries to find the inner-to-outer mapping for MAC M3 (see next slide).
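A sketch of how an NVE might forward a tenant broadcast such as this ARP, assuming the per-VN state from the earlier sketches; encap_send is a hypothetical encapsulation primitive, not anything defined by the draft.

# Hypothetical forwarding of a tenant broadcast/multicast frame (illustrative only).
def forward_tenant_bcast(nve: "NVEState", vn_name: str, frame: bytes, encap_send) -> None:
    """Encapsulate the frame with the VN-ID and send it to the VN's UN
    multicast group (or replicate it to a list of unicast addresses)."""
    vn = nve.vns[vn_name]
    for outer_dst in vn.bcast_addresses:        # e.g. ["224.1.2.3"]
        encap_send(outer_dst, vn.vn_id, frame)  # outer UN destination + VN-ID + inner frame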

16. VM 1 Sends ARP Response to VM 3
• NVE A1 queries the central entity for the outer address of MAC M3 in "Red"; the response is: use IP-A2. A1's NVE state adds: MAC = M3 in "Red" on NVE IP-A2.
• VM 1's ARP response reaches NVE A1 tagged with VLAN 100 on Port 10.
• NVE A1 encapsulates the ARP response with VN-ID 10000 and unicasts it to IP-A2; the Underlying Network delivers it to A2.
• NVE A2 decapsulates, tags the ARP response with VLAN 200, and delivers it on Port 20 toward VM 3.
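The unicast return path could be sketched as below, with a pull from the central entity on a mapping miss and caching of the result; as before, the names are hypothetical and build on the earlier sketches.

# Hypothetical forwarding of a tenant unicast frame (illustrative only).
def forward_tenant_unicast(nve: "NVEState", central_entity: "CentralEntity",
                           vn_name: str, dst_mac: str, frame: bytes, encap_send) -> None:
    """Resolve the inner (VN, MAC) to the outer NVE address, pulling from the
    central entity on a miss, then encapsulate and unicast across the UN."""
    key = (vn_name, dst_mac)
    outer = nve.inner_to_outer.get(key)
    if outer is None:
        outer = central_entity.query(vn_name, dst_mac)  # e.g. M3 in "Red" -> IP-A2
        nve.inner_to_outer[key] = outer                 # cache the mapping locally
    encap_send(outer, nve.vns[vn_name].vn_id, frame)    # unicast across the Underlying Network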

17. Summary of CP Characteristics
• Lightweight for the NVE, meaning:
  • Low amount of state (only what is needed at the time)
  • Low complexity (keep it simple)
  • Low overhead (don't drain resources from the NVE)
• Highly scalable (don't collapse when scaled)
• Extensible:
  • Support multiple address families (e.g. IPv4 and IPv6)
  • Allow addition of new address families
• Quickly reactive to change
• Support live migration of VMs

18. Conclusion
• Two categories of control plane protocols are needed to support a dynamic virtualized data center, dynamically building the state an NVE needs to perform its map+encap and decap+deliver functions.
• Several models of operation are possible; the WG will need to decide among them.
• To help in deciding, the draft contains important evaluation criteria for comparing proposed solutions.
