36 – VXLAN EVPN Multi-Fabrics – Host Mobility (part 4)

Host Mobility across Fabrics

This section discusses support for host mobility when a distributed Layer 3 Anycast gateway is configured across multiple VXLAN EVPN fabrics.

In this scenario, VM1, belonging to VLAN 100 (subnet_100), is hosted by H2 in fabric 1, and VM2, on VLAN 200 (subnet_200), is initially hosted by H3 in the same fabric 1. The IP subnets subnet_100 and subnet_200 are locally configured on leaf nodes L12 and L13 as well as on L14 and L15.

This example assumes that the virtual machines (endpoints) have been previously discovered, and that Layer 2 and 3 reachability information has been announced across both sites as discussed in the previous sections.

Figure 1 highlights the content of the forwarding tables on different leaf nodes in both fabrics before virtual machine VM2 is migrated to fabric 2.

Figure 1: Content of Forwarding Tables Before Host Migration

The following steps describe how communication between the virtual machines is maintained in a host mobility scenario, as depicted in Figure 2.

Figure 2: VXLAN EVPN Multifabric and Host Mobility

  1. For operational purposes, virtual machine VM2 moves to host H4 located in fabric 2 and connected to leaf nodes L21 and L22.
  2. After the migration process is completed, assuming that VMware ESXi is the hypervisor used, the virtual switch generates a RARP frame with VM2’s MAC address information.

Note: With other hypervisors, such as Microsoft Hyper-V or Citrix Xen, a GARP request is sent instead, which carries the sender’s source IP address in its payload. As a consequence, the procedure is slightly different from the one described here.

  3. Leaf L22 in this example receives the RARP frame and learns VM2’s MAC address as locally connected. Because the RARP message carries no sender IP address (see the sketch after this list), it cannot be used to learn VM2’s IP address, so the forwarding tables of the devices in fabric 2 still point to border nodes BL3 and BL4 (that is, VM2’s IP address is still known as connected to fabric 1). Leaf L22 also sends an MP-BGP EVPN route-type-2 update in fabric 2 with VM2’s MAC address information. When doing so, it increases the sequence number associated with this specific entry and specifies the anycast VTEP address of leaf nodes L21 and L22 as the next hop. The receiving devices update their forwarding tables with this new information.
  4. In the data plane, the RARP broadcast frame is also flooded in fabric 2 and reaches border nodes BL3 and BL4, which forward it to the local OTV devices.
  5. The OTV AED in fabric 2 forwards the RARP frame across the Layer 2 DCI overlay network to the remote OTV devices in fabric 1. The OTV AED device in fabric 1 then forwards the frame to the local border nodes.
  6. Border nodes BL1 and BL2 learn VM2’s MAC address from the received RARP frame as locally attached to their Layer 2 interfaces connecting to the OTV AED device. As a consequence, one of the border nodes advertises VM2’s MAC address information in fabric 1 with a route-type-2 BGP update carrying a new, higher sequence number.
  7. The forwarding tables of all relevant leaf nodes in fabric 1 are updated with the information that VM2’s MAC address is now reachable through the anycast VTEP address of border nodes BL1 and BL2.
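Why the RARP announcement refreshes only the MAC entry, while a GARP would also refresh the IP binding, comes down to the frame contents. The short Python sketch below builds both payloads with hypothetical VM2 values (MAC 00:50:56:aa:bb:cc and IP 10.200.200.20 in subnet_200, both invented for illustration); note that the sender IP field of the RARP payload is all zeros, so a snooping leaf has no IP binding to learn.

    import struct

    def mac_bytes(mac: str) -> bytes:
        return bytes(int(part, 16) for part in mac.split(":"))

    def ip_bytes(ip: str) -> bytes:
        return bytes(int(octet) for octet in ip.split("."))

    VM2_MAC = "00:50:56:aa:bb:cc"   # hypothetical MAC for VM2
    VM2_IP = "10.200.200.20"        # hypothetical IP for VM2 in subnet_200

    def garp_request(mac: str, ip: str) -> bytes:
        # Gratuitous ARP: sender MAC and sender IP are both populated, so a leaf
        # snooping the frame can learn the full MAC-to-IP binding of the moved host.
        return struct.pack(
            "!HHBBH6s4s6s4s",
            1, 0x0800, 6, 4, 1,             # Ethernet/IPv4, ARP request (op 1)
            mac_bytes(mac), ip_bytes(ip),   # sender MAC + sender IP
            b"\x00" * 6, ip_bytes(ip),      # target MAC unknown, target IP = own IP
        )

    def rarp_announcement(mac: str) -> bytes:
        # Reverse ARP as generated by the virtual switch after the migration:
        # only the MAC is present, and the IP fields stay zero.
        return struct.pack(
            "!HHBBH6s4s6s4s",
            1, 0x0800, 6, 4, 3,             # RARP request (op 3)
            mac_bytes(mac), b"\x00" * 4,    # sender MAC, no sender IP
            mac_bytes(mac), b"\x00" * 4,    # target MAC = own MAC, no target IP
        )

    print("GARP sender IP field:", garp_request(VM2_MAC, VM2_IP)[14:18].hex())  # 0ac8c814
    print("RARP sender IP field:", rarp_announcement(VM2_MAC)[14:18].hex())     # 00000000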

At this point, all the devices in fabrics 1 and 2 have updated their forwarding tables with VM2’s new MAC address reachability information, so intrasubnet communication to VM2 is fully reestablished. However, VM2’s IP address is still known in both fabrics as connected to the old location (that is, to leaf nodes L14 and L15 in fabric 1), so traffic still cannot be routed to VM2. Figure 3 shows the additional steps required to update the forwarding tables of the devices in fabrics 1 and 2 with the new reachability information for VM2’s IP address.
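The MAC convergence just described relies on the MAC Mobility procedure of RFC 7432: every route-type-2 re-advertisement of a moved MAC carries a sequence number, and receiving nodes keep the advertisement with the highest value. The sketch below is only an illustration of that selection rule as applied in steps 3 and 6; the MAC and VTEP addresses are hypothetical, and real switches implement this inside BGP, not in application code.

    from dataclasses import dataclass

    @dataclass
    class MacRoute:
        # Simplified view of an EVPN route-type-2 advertisement for one MAC.
        mac: str
        next_hop: str   # VTEP (or anycast VTEP) address behind which the MAC sits
        seq: int        # MAC Mobility extended-community sequence number

    def install_best(table: dict, update: MacRoute) -> None:
        # Keep the advertisement carrying the highest mobility sequence number,
        # which is how every node converges on VM2's new location after the move.
        current = table.get(update.mac)
        if current is None or update.seq > current.seq:
            table[update.mac] = update

    OLD_VTEP_L14_L15 = "10.10.10.45"   # hypothetical anycast VTEP of L14/L15
    NEW_VTEP_BL1_BL2 = "10.10.10.12"   # hypothetical anycast VTEP of BL1/BL2

    fabric1_table: dict = {}
    install_best(fabric1_table, MacRoute("00:50:56:aa:bb:cc", OLD_VTEP_L14_L15, seq=0))
    install_best(fabric1_table, MacRoute("00:50:56:aa:bb:cc", NEW_VTEP_BL1_BL2, seq=1))

    print(fabric1_table["00:50:56:aa:bb:cc"].next_hop)   # -> the BL1/BL2 anycast VTEP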

Figure 3: Propagation of VM2’s Reachability Information toward Fabric 1

8.   The reception of the route-type-2 MAC address advertisement on leaf nodes L14 and L15 triggers a verification process to help ensure that VM2 is no longer locally connected. ARP requests for VM2 are sent out the local interface to which VM2 was originally connected and are also flooded into fabric 1 and, through the Layer 2 DCI connection, into fabric 2. The ARP request reaches VM2, which responds, allowing leaf nodes L21 and L22 to update their local ARP tables and trigger the control-plane updates discussed previously.

9.   After verifying that VM2 has indeed moved away from leaf nodes L14 and L15, one of those leaf nodes sends an MP-BGP EVPN update that withdraws VM2’s IP reachability information from fabric 1, so that this information can be cleared from the forwarding tables of all the devices in fabric 1 (a sketch of this probe-and-withdraw logic follows step 11).

10.  Because border nodes BL1 and BL2 also receive the withdrawal of VM2’s IP address, they inform the border nodes in the remote fabric that this information is no longer reachable through the Layer 3 DCI connection.

11.  As a consequence, border nodes BL3 and BL4 withdraw this information from VXLAN EVPN fabric 2 as well, allowing all the local devices to clear it from their tables.
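Steps 8 through 11 amount to a probe-then-withdraw routine on the leaf pair that used to host the endpoint: confirm the host is really gone, then withdraw its IP reachability so the border nodes can relay the withdrawal to the other fabric. The minimal sketch below only illustrates that control flow; probe_arp and send_evpn_withdraw are hypothetical stand-ins for the switch’s internal machinery, not real NX-OS APIs, and the addresses are the same invented VM2 values used earlier.

    def handle_mac_move_notification(mac: str, ip: str, probe_arp, send_evpn_withdraw) -> None:
        # Runs on the leaf pair that previously hosted the endpoint (L14/L15 here).
        #   probe_arp(ip) -> bool       : ARP the endpoint on the interface it used to sit on.
        #   send_evpn_withdraw(mac, ip) : send the MP-BGP EVPN withdrawal for the host route.
        # Step 8: verify the endpoint has really left before touching the host route.
        if probe_arp(ip):
            return  # the endpoint answered locally, so keep the existing route
        # Step 9: withdraw the IP reachability from the local fabric; the border
        # nodes then relay the withdrawal across the Layer 3 DCI (steps 10 and 11).
        send_evpn_withdraw(mac, ip)

    # Illustrative invocation with stub callables.
    handle_mac_move_notification(
        "00:50:56:aa:bb:cc", "10.200.200.20",
        probe_arp=lambda ip: False,                           # no local answer: VM2 moved
        send_evpn_withdraw=lambda mac, ip: print(f"withdraw {ip} / {mac}"),
    )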

The end result is the proper update of VM2’s IP address information in the forwarding tables of all the nodes in both fabrics, as shown in Figure 4.

Figure 4: End State of the Forwarding Tables for Nodes in Fabrics 1 and 2

At this point, Layer 2 and 3 communication with VM2 can be fully reestablished.

 
