31 – Multiple approaches to interconnecting VXLAN Fabrics

As discussed in previous articles, VXLAN data-plane encapsulation, in conjunction with MP-BGP AF EVPN as its control plane, is becoming the foremost technology to support the modern network Fabric.

DCI is a solution architecture that you deploy to interconnect multiple data centers and extend Layer 2 and/or Layer 3, with or without multi-tenancy. A DCI architecture relies on a data plane for the transport and a control plane providing an efficient and solid mechanism for endpoint discovery and distribution between sites (and much more). VXLAN is an encapsulation method; it is not an architecture. If we want to repurpose VXLAN for DCI, it is crucial that we understand what we need and how to address those requirements with a VXLAN transport.

Some articles from different vendors claiming that VXLAN can simply be leveraged for DCI purposes do frighten me a bit! Either they are a bit light, or they don't understand what interconnecting multiple data centers implies. You could indeed easily build a crap solution to interconnect multiple sites using VXLAN tunnels, if you just wish to extend Layer 2 segments outside the data center. Or you can build a solid DCI architecture based on business requirements using OTV, PBB-EVPN, or even VXLAN EVPN; however, the latter implies some additional features and sophisticated configurations.

It is therefore interesting to clarify how to interconnect multiple VXLAN/EVPN fabrics geographically dispersed across different locations.

If we look at how to interconnect VXLAN-based fabrics at Layer 2 and Layer 3, three approaches can be considered:

  • The 1st option is to extend multiple sites as one large, single stretched VXLAN Fabric. There is no network overlay boundary, nor VLAN hand-off at the interconnection, which simplifies operations. This option is also known as the geographically dispersed VXLAN Multipod. However, we should not consider it a DCI solution per se, as there is no demarcation nor separation between locations. Nonetheless, it is very interesting for its simplicity and flexibility. Consequently, we have thoroughly tested and validated this design (see the next article).
  • The second option to consider is multiple VXLAN/EVPN-based Fabrics interconnected using a DCI Layer 2 and Layer 3 extension. Each greenfield Data Center located on a different site is deployed as an independent fabric, increasing the autonomy of each site and reinforcing global resiliency. This is also called Multisite, as opposed to the previous Multipod option. As a consequence, an efficient Data Center Interconnect technology (OTV, VPLS, PBB-EVPN, or even VXLAN/EVPN) is used to extend Layer 2 and Layer 3 connectivity across the separate sites.

VXLAN EVPN Multisites

For the VXLAN Multisite scenario, there are two possible models that need to be taken into consideration:

Model 1: Each VXLAN/EVPN fabric is a Layer 2 fabric only, with external devices (routers, FW, SLB) offering routing and default gateway functions (with or without FHRP localization). This model is similar to the traditional approach of interconnecting multiple data centers and doesn't impose any specific requirements, except for the DCI solution itself, which must interconnect the fabrics in a resilient and secure fashion, as usual (a configuration sketch for FHRP localization follows the figure below).

VXLAN EVPN Multisites with external gateway devices
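When FHRP localization is needed in Model 1 (the same HSRP virtual IP and virtual MAC active in each site, but the hello traffic kept local), a common approach is to filter the FHRP hellos at the DCI edge. The lines below are a minimal, hedged sketch only, assuming HSRPv1/v2 and NX-OS-style VACLs; the ACL names and the VLAN range are hypothetical and would need to be adapted and validated for a real design.

    ! Hypothetical NX-OS sketch: keep HSRP hellos local to each site at the DCI edge
    ! HSRPv1 and HSRPv2 hellos use UDP port 1985 toward 224.0.0.2 and 224.0.0.102
    ip access-list HSRP_HELLO
      10 permit udp any 224.0.0.2/32 eq 1985
      20 permit udp any 224.0.0.102/32 eq 1985
    ip access-list ALL_IP
      10 permit ip any any
    !
    vlan access-map FHRP_LOCALIZATION 10
      match ip address HSRP_HELLO
      action drop
    vlan access-map FHRP_LOCALIZATION 20
      match ip address ALL_IP
      action forward
    !
    ! Applied to the VLANs extended between sites (range is an assumption)
    vlan filter FHRP_LOCALIZATION vlan-list 100-110

With such a filter in place, each site keeps its own active default gateway while the same virtual IP and virtual MAC are reused everywhere, avoiding tromboning of outbound traffic across the DCI.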

Model 2: Each VXLAN/EVPN fabric is a Layer 2 and Layer 3 fabric. VXLAN EVPN indeed brings great added value with the L3 Anycast Gateway, as discussed in this post. Consequently, most enterprises could be interested in leveraging this function distributed across all sites with the same virtual IP and virtual MAC addresses. This crucial feature, when geographically dispersed among multiple VXLAN fabrics, improves performance and efficiency, with the default gateway being transparently active for the endpoints located on each site at their first-hop router (the leaf node to which the endpoints are directly attached). It offers hot live mobility with E-W traffic optimization, besides the traditional zero business interruption. However, because the same virtual MAC and virtual IP addresses exist on both sides, some tricky configuration must be achieved.

VXLAN EVPN Multisites with anycast L3 gateway
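To make the distributed anycast gateway of Model 2 more concrete, here is a minimal configuration sketch of what is repeated identically on the leaf nodes of every fabric, in every site. It is a hedged illustration only: the anycast gateway MAC, VLAN, VNI, VRF name and subnet are assumptions, not values from a validated design. The key point is that the same virtual MAC and virtual IP are configured on all leafs across all sites.

    ! Hypothetical NX-OS sketch: distributed anycast gateway, identical on every leaf of every site
    fabric forwarding anycast-gateway-mac 2020.0000.00aa

    vlan 100
      vn-segment 10100

    interface Vlan100
      no shutdown
      vrf member TENANT-A
      ip address 10.1.100.1/24
      fabric forwarding mode anycast-gateway

Because every leaf answers locally with the same virtual MAC and IP, an endpoint that moves from one site to the other keeps the same default gateway on its new first-hop leaf, which is what enables the hot live mobility and E-W traffic optimization mentioned above.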

  • A third scenario, which can somehow be considered a subset of the stretched fabric, is to dedicate the VXLAN/EVPN overlay to extending Layer 2 segments from one DC to another. Consequently, VXLAN/EVPN is agnostic of the Layer 2 transport deployed within each location (it could be vPC, FabricPath, VXLAN, ACI, just to list a few of them). VLANs that need to be extended outside the DC are stretched up to the local DCI devices, where Layer 2 frames are encapsulated with a Layer 3 VXLAN header, then sent toward the remote site over a Layer 3 network, and finally de-encapsulated on the remote DCI device and distributed within the DC accordingly.

VXLAN EVPN DCI-only

In this scenario, vPC is also leveraged to offer DCI dual-homing functions for resiliency and traffic load distribution.

This option can be seen as analogous to OTV or PBB-EVPN from an overlay point of view. However, as already mentioned, it must be understood that VXLAN has not been built natively to address all the DCI requirements. Nevertheless, the implementation of the EVPN control plane could be expanded in the future to include multi-homing functionality, delivering failure containment, loop protection, site awareness, or optimized multicast replication, which OTV has already offered since its beginning.
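As a rough illustration of the DCI-only scenario, the two vPC DCI devices of a site can share a single anycast VTEP address (a secondary IP on the NVE source loopback), so the remote site sees one logical VTEP per location. The sketch below is hedged: interface numbers, addresses, the VNI, and the use of BGP-based ingress replication are assumptions for the example rather than a validated configuration.

    ! Hypothetical NX-OS sketch: one member of a vPC pair of DCI VTEPs
    interface loopback1
      ip address 10.10.10.1/32
      ! Shared anycast VTEP address, identical on both vPC peers
      ip address 10.10.10.100/32 secondary

    interface nve1
      no shutdown
      source-interface loopback1
      host-reachability protocol bgp
      member vni 10100
        ingress-replication protocol bgp

The second vPC peer would carry the same secondary address on its own loopback, so traffic from the remote site can be load-shared across the pair while a single VTEP next hop is advertised.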

BTW, it's always interesting to highlight that since NX-OS 7.2(0)D1(1), OTV can rely on VXLAN encapsulation for the data-plane transport, together with the well-known IS-IS control plane natively built for DCI purposes. Hum, it is interesting to notice that OTV can also be a VXLAN-based DCI solution; the difference is that it implements the right control plane for DCI needs. That's why it is important to understand that not only is a control plane required for intra-fabric and inter-DC network transport, but this control plane must also natively offer the features to address the DCI requirements.
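For what it's worth, on the Nexus 7000 platforms where this OTV UDP encapsulation is supported, switching the data-plane framing is, to the best of my knowledge, a single global command; treat the line below as a hedged reminder and verify it against the configuration guide of your NX-OS release.

    ! Hypothetical sketch: select the UDP (VXLAN-like) encapsulation for OTV
    otv encapsulation-format ip udp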
