34 – VXLAN EVPN Q-in-VNI and EFP for Hosting Providers

Dear Network and DCI Experts!

While this post is slightly outside the DCI focus, and assuming many of you already know Q-in-Q, the question is: are you familiar with Q-in-VNI yet? For those who are not, this topic is a good opportunity to cover Q-in-VNI deployment with VXLAN EVPN for intra- and inter-VXLAN-Fabric designs, to understand its added value, and to see how one or multiple Client VLANs from the double encapsulation can be selected for further actions.

Although it is not a hard rule per se, some readers already using Dot1Q tunneling may not necessarily fit into the following use-case. Nonetheless, I think it is safe to say that most Q-in-Q deployments have been driven by Hosting Providers' co-location requirements for multiple Clients; hence the choice of the Hosting Provider use-case elaborated in this article.

That said, what is elaborated in the following sections can address other requirements as well, to list just one more: after the acquisition of one or multiple enterprises or other business organisations, multiple data centers may need to be merged into a single infrastructure in order to reduce OPEX. This is another form of multi-tenancy that requires deploying the same level of Layer 2 and Layer 3 segmentation.

For many years now, Hosting Services have physically sheltered thousands of independent clients' infrastructures within the Provider's Data Centers. The Hosting Service Provider is responsible for supporting each and every Client's data network in its shared network infrastructure. This must be achieved without changing any Tenant's Layer 2 or Layer 3 parameter, and must also be done as quickly as possible. For many years, the co-location business for independent Tenants has been accomplished by using a dedicated physical Provider network for each Client's infrastructure. That approach has been costly in terms of network equipment, operational deployment and maintenance, rack space, and power consumption, and it has also been rigid, with limited scalability for growth.

With the evolution of this business and virtualization, there is a strong need to offer additional services at lower cost with more agility for customers, and the requirements for this solution can be summarized as follows:

  • To seamlessly append any new Client network to the shared infrastructure owned by the Hosting Service Provider, with the same requirements mentioned previously in terms of Layer 2 and Layer 3 segmentation.
  • To support multiple Clients sharing the same Top-of-Rack switches.
  • The above implies that the shared Provider’s infrastructure must be able to support duplicate Layer 2 VLAN identifiers across all the Tenants, including reuse of the same Layer 3 address space by all of them.
  • To provide the flexibility of transparently spreading each Tenant‘s infrastructure throughout the Hosting Provider’s organization.
  • To allow for the seamless growth and evolution of each Tenant’s network, for the Client business as well as for the Service Provider production.
  • To offer new Service Clouds for each Client.

Figure 1: Hosting Service Provider with Service Cloud

Figure 1 shows a logical view of different Client infrastructures spread across the Provider’s Network infrastructure with an extension of their respective network segments between local and geographically dispersed Pods or data centers. In addition to traditionally maintaining the Client network segmentation from end-to-end, each Tenant must be able to access both the private and global Service Cloud.

Essentially, besides managing their Tenant’s data network, Hosting Providers should be able to develop their business by proposing connectivity to a new and enhanced model of Service Cloud, to each Tenant. The Service Cloud provides new and exciting service tools for the hosted Clients such as a broad range of XaaS as well as “Network and Security as a Service” (NSaaS) for the Clients’ applications. However, the key technical challenge for the Hosting Provider is to be able to select one or multiple “private” segments from any isolated Client infrastructure in order to provide access to a dedicated private or shared public Service Cloud.

Note: “Network and Security as a Service”: This Cloud Service provides Application Security, Deep Inspection, Optimization, Offloading, and Analytics Services, just to list a few, for each and every client‘s set of multi-tier applications.

Hosting Service Requirements & Solutions

The shared Data Center Network infrastructure from the Hosting Provider must support VLAN overlapping as well as duplicated Layer 3 networks for any Client organization.

To address this requirement, as done for many years, the historical “double VLAN tag” encapsulation method is leveraged in conjunction with VXLAN EVPN as a Layer 3 transport. This Dot1Q tunnel encapsulation for the Layer 2 segmentation is named Q-in-VNI. Besides the Layer 2 overlay network transport, VXLAN EVPN is used for the multi-tenancy Layer 3 segmentation.

The ingress interface connecting the Client network is configured in access mode with a Backbone VLAN (B-VLAN) that encapsulates the original Client VLAN (C-VLAN). B-VLANs belong to the Provider network's resources and are consumed accordingly. Q-in-VNI maps each Backbone VLAN to an L2 VNI transport that carries all the Client VLANs (C-VLANs) contained in the inner Dot1Q tag. As a consequence, the original C-VLANs can be retrieved at the egress VTEP, which transparently provides the extension of all Client networks across different Top-of-Rack switches.


Figure 2: Q-in-VNI to transport Client VLANs from end-to-end

As a consequence, the VXLAN Fabric is agnostic regarding the original Client VLANs. There is no need to configure the "private" C-VLANs as such. Only the B-VLANs mapped to a VXLAN tunnel network need to be created, as shown in the sample configuration below.


Figure 3: Q-in-VNI sample configuration with VLAN 1001 as the dot1q-tunnel VLAN
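
Since Figure 3 is shown as an image, here is a minimal NX-OS configuration sketch of the same idea, assuming the VXLAN EVPN features are already enabled on the Leaf node; the interface, B-VLAN and VNI values are illustrative only, and the exact syntax may vary per platform and release:

    ! B-VLAN 1001 (dot1q-tunnel VLAN) mapped to L2 VNI 39001
    vlan 1001
      vn-segment 39001

    ! Edge interface facing the Client, in dot1q-tunnel (access) mode
    interface Ethernet1/1
      switchport mode dot1q-tunnel
      switchport access vlan 1001
      spanning-tree port type edge
      no shutdown

    ! The B-VLAN/VNI is added to the VTEP with BGP EVPN host reachability
    interface nve1
      host-reachability protocol bgp
      source-interface loopback1
      member vni 39001
        ingress-replication protocol bgp

    evpn
      vni 39001 l2
        rd auto
        route-target import auto
        route-target export auto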

 

VXLAN EVPN Fabric Multipod

Because all the "private" VLANs of each and every Client must be carried end-to-end across the shared network in a multi-site deployment, one option is to maintain the double-tag encapsulation between the different and distant locations over the Layer 3 network, without any hand-off between the VLANs.

Consequently, the first choice for network transport relies on the VXLAN EVPN Multipod design discussed in post 32.

The VXLAN EVPN Multipod architecture pillars can be expressed as follows:

  • Encapsulation:
    • Maintains the Tenant encapsulation from Pod-to-Pod and from site-to-site.
    • No requirement for VLAN hand-off at any site boundary, which would otherwise break the Dot1Q tunneling (Figure 4).
  • Sturdiness
    • Validated for geographically dispersed Pods.
    • Offers a solid Layer 3-based underlay Fabric, including inter-site transport.
    • Contains the failure domain by reducing the amount of flooded traffic across the whole Fabric (no Flood & Learn, ARP suppression).
    • Efficient endpoint learning with MP-BGP EVPN – Client hosts' (MAC) reachability information is discovered and distributed among all Leaf nodes of interest by the control plane.
    • Maintains Layer 2 and Layer 3 segmentation for each Tenant from end-to-end.
  • Flexibility
    • Multiple Client infrastructures share the same Top-of-Rack switch.
    • Mix of Q-in-Q transport, Dot1Q trunk and access ports within the same ToR switch. On each Top-of-Rack switch to which client infrastructures are locally attached, multiple interfaces per Tenant organization can use different encapsulation types (Q-in-Q, Dot1Q) and modes (Trunk, Access, Routed).
    • Per-port VLAN translation for native Dot1Q Client VLANs.
    • Independent Client Layer 2 and Layer 3 network infrastructures (Physical, Virtual, Hybrid).
  • Scalability
    • Offers very high scalability, which is required by Service Providers.
    • Reuses duplicate VLAN IDs across all Tenants throughout the whole Fabric.
    • Transit inter-site devices are pure Layer 3 routers that do not require any Layer 2 encapsulation capabilities; thus they do not limit the number of Layer 2 encapsulated frames and tags that can be stretched across the remote sites (note that the MTU must be increased accordingly).
    • Transit inter-site devices can also be leveraged to extend or support additional Tenants (Bud node support).

Figure 4: Q-in-VNI with VXLAN EVPN Multipod geographically dispersed

Figure 4 illustrates the requirement of Layer 2 segmentation and VLAN ID overlapping between Clients, across different Leaf nodes spread over multiple locations. Since the transit transport between Pods is pure Layer 3, the double encapsulation is performed at the edge of each Leaf node.

Service Cloud Integration

As mentioned above, it is critical that the Hosting Provider offers other Service Clouds to its Clients. Although each Client network infrastructure remains fully isolated, access to the Service Cloud for each Tenant can be achieved using two different approaches.

Selective Client Service C-VLANs from the Ingress Access Interface

The first option is an elementary method to provide external access so that each individual Client infrastructure can benefit from the Provider Services. As explained previously, in order to maintain the scalability and segmentation of each and every Client network, the key transport relies on Q-in-VNI (a per-Client Q-in-Q encapsulation mapped to an L2 VNI). As a result, due to the double tagging induced by the Dot1Q tunneling, as of today Client VLANs cannot be natively selected for any further treatment such as routing, bridging or security purposes. As described in Figure 2 and Figure 4, Client VLANs are fully isolated from the rest of the Provider's infrastructure, which is, above all, the foremost expectation of each Client. As a consequence, for that particular requirement, each Tenant's infrastructure consumes at least one physical interface per Top-of-Rack switch in order to be expanded to other racks or other Pods using Q-in-VNI.

Additionally, to offer access to the Service Cloud, one or multiple C-VLANs from each individual private Client network must be presented on a different, locally connected source interface. Consequently, a second physical access interface is allocated on the Top-of-Rack switch for the one or more selected C-VLANs to be routed outside the Client's organization.


Figure 5: Separated Interface for Hosting transport and Services

Figure 5 depicts an extreme scenario in which, first, each Client infrastructure uses the full range of isolated Dot1Q VLANs and, second, Client Blue and Client Orange have elected the same VLAN 10 to access a Service Cloud.

Note: Let me call these Client-VLANs which are used to access the Service Cloud, “Client Service C-VLANs” or “public C-VLANs”.

To keep it simple with this example, for each Client:

  • A first L2 VNI is initiated to transport the double-tagged frames (Q-in-VNI), carrying all the "private" L2 network segments of a particular Client within the VXLAN Fabric.
  • Another L2 VNI is leveraged to transport the “Client Service VLAN” (public network) used to access one of the Provider L3 services.
  • Finally, a L3 VNI is used to maintain the Layer 3 segmentation for that particular Client among the other Tenants.

Note that it is also possible to transport multiple “Client Service VLANs” with their respective L2 VNIs (1:1) for a specific Client infrastructure over the same ingress edge trunk interface of the concerned ToR. It is also critical to maintain the Layer 3 segmentation for each Client extended within the VXLAN Fabric (multiple L2 VNIs with multiple L3 VNIs per Client infrastructure). However, it is important to mention that for Clients to access the Provider’s Service Cloud, each Tenant will consume one Provider VLAN for their L2 segment and another for their L3 segment.

Consequently, this must be used with caution with regard to the consumption of the Provider’s VLAN resources.

VLAN Overlapping for "private" Client VLANs

To address the VLAN overlapping such that the 4K C-VLANs can be spread across the Provider infrastructure, the Dot1Q tunnel VLAN used as the Backbone VLAN is unique (per ToR) for each Client (1001, 1002, 1003, etc.). Each B-VLAN is mapped to a dedicated and unique Layer 2 VNI, which also carries, within the VXLAN-encapsulated frame, the original Dot1Q tag of each and every Client segment.

Note: VLAN IDs are locally significant per Leaf node; consequently, it is possible to reuse the same VLAN ID for a different Tenant, as long as the Tenants are locally attached to distinct Top-of-Rack switches and these B-VLANs are mapped to different, unique VNIs.

As discussed previously, for the Q-in-Q transport the VXLAN Fabric is not aware of the existence of any C-VLAN per se; hence there is no C-VLAN to configure on the Fabric, as illustrated in Figure 6, but the B-VLANs need to be configured. The original "private" C-VLANs are kept fully "hidden" from the rest of the network infrastructure.

However, the "public" C-VLANs intended to be routed to the Service Cloud must be created like any traditional access VLAN, as they are not part of the Q-in-Q tunneling encapsulation. As illustrated in the configuration sample below (Figure 6), Client VLANs 10, 20 and 30 are created so that they can be used for further forwarding actions controlled by the Fabric itself.


Figure 6: VLAN configuration with mapping to L2 VNI as well as the L3 VNI
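
As the configuration in Figure 6 is an image, the following NX-OS sketch shows the kind of mapping it describes, assuming the anycast gateway MAC and the VXLAN EVPN features are already configured globally; the VLAN, VNI, VRF names and addresses are illustrative only:

    ! "Public" C-VLAN 10 created as a normal VLAN and mapped to its own L2 VNI
    vlan 10
      vn-segment 30010
    ! VLAN carrying the Tenant L3 VNI
    vlan 2001
      vn-segment 50001

    vrf context Tenant-Blue
      vni 50001
      rd auto
      address-family ipv4 unicast
        route-target both auto
        route-target both auto evpn

    ! Distributed anycast gateway for the public C-VLAN
    interface Vlan10
      no shutdown
      vrf member Tenant-Blue
      ip address 10.10.10.1/24
      fabric forwarding mode anycast-gateway

    ! SVI associated with the L3 VNI
    interface Vlan2001
      no shutdown
      vrf member Tenant-Blue
      ip forward

    interface nve1
      member vni 30010
        ingress-replication protocol bgp
      member vni 50001 associate-vrf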

"Client Service VLANs" can now be routed like any other native Dot1Q VLAN mapped to an L2 VNI. All VXLAN Layer 2 and Layer 3 services are available for those Client segments.

  • Specific "public" C-VLANs can be routed to Layer 3 Service Clouds.
  • The Layer 3 Anycast Gateway feature can hence be offered to each Client for that specific "public" C-VLAN.
  • Layer 3 segmentation between Tenants is addressed using traditional VRF-lite transported over a dedicated L3 VNI, as shown in the configuration sample in Figure 6.

VLAN Overlapping for "public" Client VLANs

To address the VLAN overlapping for the "Client Service VLANs", the "per-port VLAN mapping" feature is leveraged at the ingress port level to offer a one-to-one translation between each original C-VLAN and a unique Provider VLAN.


Figure 7: Interface VLAN mapping (per-port VLAN translation) for native C-VLANs
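
To illustrate the per-port VLAN mapping described above (the actual configuration in Figure 7 is an image), here is a hedged NX-OS sketch where Client Blue and Client Orange both use C-VLAN 10 and are translated to distinct Provider VLANs on their respective ingress trunks; interface, VLAN and VNI values are purely illustrative:

    ! Client Blue on Eth1/10: original C-VLAN 10 translated to Provider VLAN 110
    interface Ethernet1/10
      switchport mode trunk
      switchport vlan mapping enable
      switchport vlan mapping 10 110
      switchport trunk allowed vlan 110

    ! Client Orange on Eth1/11: the same C-VLAN 10 translated to Provider VLAN 210
    interface Ethernet1/11
      switchport mode trunk
      switchport vlan mapping enable
      switchport vlan mapping 10 210
      switchport trunk allowed vlan 210

    ! Each translated Provider VLAN is then mapped to its own L2 VNI
    vlan 110
      vn-segment 30110
    vlan 210
      vn-segment 30210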

As a result, each "public" C-VLAN (original or translated) can be mapped to its respective Layer 2 VNI, eliminating the risk of any VLAN ID overlapping with another Client's infrastructure. Since it is a one-to-one translation between the original C-VLAN and a unique Provider VLAN, this feature, although very helpful, must be considered only for a limited number of C-VLANs (4K VLANs per shared bridge domain). In the context of Service Providers in general, with hundreds or thousands of clients, this per-port VLAN mapping solution alone may not be scalable enough for a large number of original Client VLANs. Hence the need for the parallel transport using Q-in-VNI for all the other isolated "private" C-VLANs that do not require any services from the Hosting Provider, as discussed previously.

Consequently, from each set of "private" C-VLANs, each Client must provide at least one C-VLAN ID for accessing the Hosting Provider network. The "Client Service VLANs" can then be routed within the Fabric and can, above all, benefit from the dedicated or shared Layer 3 Service Cloud.


Figure 8: Tenant connections to Service-Cloud

Client Access to the Provider Service Cloud

To route the “Client Service VLANs” to a Service Cloud, the corresponding VNIs are extended and terminated at a pair of Border Leaf Nodes that provide direct connectivity with the Service Cloud gateway. On this physical interface connecting the Cloud network, a logical sub-interface is created for each Tenant as depicted in Figure 8.

Each sub-interface is configured with a unique Dot1Q encapsulation established with its peer network. The Dot1Q tag used in this configuration is not directly related to any encapsulated tag from the access interfaces (Client VLAN attachment). However, the same Dot1Q tagging is used on the other side (Provider Service Cloud gateway) in order to provide the Layer 3 segmented peering between each Client network (Tenant) and its respective private or shared Service Cloud (Figures 9 & 10).

Figures 9 & 10: VXLAN Border Leaf Sub-interfaces
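
Since Figures 9 & 10 are images, the following hedged NX-OS sketch shows what such per-Tenant sub-interfaces on the Border Leaf interface facing the Service Cloud gateway could look like; interface numbers, Dot1Q tags, VRF names and addresses are illustrative only:

    ! Parent routed interface toward the Service Cloud gateway
    interface Ethernet1/48
      no switchport
      no shutdown

    ! One sub-interface per Tenant, each in its own VRF, with a unique dot1q tag
    interface Ethernet1/48.101
      encapsulation dot1q 101
      vrf member Tenant-Blue
      ip address 192.168.101.1/30
      no shutdown

    interface Ethernet1/48.102
      encapsulation dot1q 102
      vrf member Tenant-Orange
      ip address 192.168.102.1/30
      no shutdown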

Accordingly, the first solution described above requires a second physical interface per Client to be configured for the selected C-VLANs to be routed. As a result, this solution doubles the number of physical access interfaces consumed from the Provider infrastructure, which may have an impact on operational expenses. In addition, it is important to remember that, from a scalability point of view, additional Provider VLANs are used to route these Client VLANs outside their private infrastructure, and those are not unlimited in number.

Selective Client Service C-VLANs using Ethernet Flow Point (EFP)

To keep the consumption of access interfaces for locally attached Client infrastructures as low as possible, a more sophisticated method can be used in conjunction with the Cisco ASR9000 or the Cisco NCS5000, deployed as the selective access gateway toward the Service Cloud. The selection of the relevant Client Service VLANs is centralized and performed at a single interface (the inbound interface of the Service Cloud router), thanks to the Ethernet Virtual Connection (EVC) and Ethernet Flow Point (EFP) features, hence offering a much simpler configuration. Obviously, the same WAN edge router used to access the Provider cloud, or for WAN connectivity in general, can be used for this selection purpose; it does not have to be dedicated.

Note: EVC was already quickly described a while back in post 10.

10 – Ethernet Virtual Connection (EVC)

An Ethernet Virtual Connection (EVC) is a Cisco carrier Ethernet function dedicated to Service Providers and large enterprises. It provides a very fine granularity to select and treat inbound traffic flows, known as service instances, under the same or different ports, based on flexible frame matching.

 The EFP is a Layer 2 logical sub-interface used to classify traffic under a physical interface or a bundle of interfaces. It represents a logical demarcation point of an Ethernet Virtual Connection (EVC) on a particular interface. It exists as a Flow Point on each interface, through which the EVC passes.

With the EFP, it is therefore possible to perform a variety of operations on ingress traffic flows, such as routing, bridging or tunneling the traffic in many different ways, by using a mixture of VLAN IDs, single or double (Q-in-Q) encapsulation, and Ethertypes.


Figure 11: Ethernet Flow Point (EFP) for selective B-VLAN/C-VLAN toward a Layer 3 Service (VRF)

In our use-case, the EFP serves to identify the double tag, associating the Backbone VLAN (outer VLAN tag) with the Client VLAN (inner VLAN tag) of choice for a particular Tenant, in order to route the identified frames into a new segmented Layer 3 network.

Figure 11 represents the physical ingress Layer 2 interface of the ASR9000 or NCS5000 receiving the double-tagged frames from one of the VXLAN Border Leaf nodes. Each double tag is identified by the Backbone VLAN and one Client VLAN. The selected C-VLAN is added to a dedicated Bridge Domain for further forwarding actions. In addition to other tasks, such as bridging the original Layer 2 back to a new Layer 2 segment, and to address our particular need, a routed interface can be added to the relevant Bridge Domain, offering the L3 VPN network to that particular Tenant for accessing its Service Cloud (Tenants 1 & 3). However, if the requirement is a simple Layer 3 service, the routed interface can be directly tied to the sub-interface (Tenant 2).


Figure 12: Client Infrastructure access simplified

From the point of attachment of the Client infrastructure to the Provider network, the relevant access interfaces on the Top-of-Rack switch are simplified to a single logical ingress approach, as shown in Figure 12. The selection of the Client VLANs to be routed is completed outside the VXLAN domain, at the Service Cloud gateway that connects to the Border Leaf nodes, which makes it a centralized process (Figure 14). This significantly simplifies the physical configuration on each Client network's end.

The Ethernet Flow Point function offers flexible VLAN matching. When the traffic coming from the Fabric hits the Layer 2 transport interface of the router, the TCAM for that port is used to find which particular sub-interface matches the [outer-VLAN, inner-VLAN] filter, in order to bind, for example, the selected ingress traffic to a dedicated L3 VPN network.

Two approaches are possible. The first one is quite simple, where the selected C-VLAN is directly routed to the VRF of interest.


Figure 13: Connecting a private C-VLAN to a private Layer 3 Service (VRF or L3 VPN) on ASR9K and NCS5K


Figure 14: EFP used for association of the double tag with selection of the C-VLAN bound to L3 Services (NCS5K)


Figure 15: EFP Sample Configuration for Routing Service

Figure 14 depicts the logical view of the ingress interface of the gateway (ASR9000 or NCS5000) connecting each selected C-VLAN directly to a Layer 3 network. As a result, each Client can connect its private network infrastructure to one of the Layer 3 Service Clouds offered by the Provider. VRF and L3 VPN are supported by both platforms, the ASR9000 and the NCS5000.
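
Since the configuration in Figure 15 is an image, the following IOS-XR sketch gives a rough idea of this first, directly routed approach (the "Tenant 2" case), assuming a Q-in-Q Layer 3 sub-interface is supported on the chosen platform and release; the VRF name, interface, tags and address are illustrative only:

    vrf Tenant-2
     address-family ipv4 unicast
    !
    ! L3 sub-interface matching outer tag 1002 (B-VLAN) and inner tag 20 (C-VLAN),
    ! placed directly in the Tenant VRF, without any Bridge Domain
    interface TenGigE0/0/0/1.102
     vrf Tenant-2
     ipv4 address 10.2.20.1 255.255.255.252
     encapsulation dot1q 1002 second-dot1q 20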

The second option uses the Bridge Domain to bind the selected network, allowing additional actions for the same data packet.


Figure 16: Connecting a Private C-VLAN to a private L3 Service using a Bridge Domain (BD)


Figure 17: EFP for association of double TAG with selection of C-VLAN bound to a Bridge Domain and/or L3 service (ASR9k)

Figure 17 depicts the logical view of the ingress interface of the gateway (ASR9000) connecting each selected C-VLAN with a Bridge Domain for access to a Layer 3 network. As a result, each Client can connect their private network infrastructure to one of the Layer 3 Service Clouds offered by the Provider.

The key added value with the Bridge Domain attachment is that additional actions can be applied to any Client’s VLAN [B-VLAN, C-VLAN], such as Layer 2 VPN services (VPLS, Dot1Q, Q-in-Q, PBB-EVPN).


Figure 18: Layer 2 to BVI Sample Configuration (ASR9000)
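
As Figure 18 is shown as an image, here is a hedged IOS-XR sketch of the Bridge Domain and BVI approach it illustrates: the EFP selects the [B-VLAN, C-VLAN] pair, pops both tags, joins a Bridge Domain, and a routed BVI interface in the Tenant VRF provides the Layer 3 service; names, tags and addresses are illustrative only:

    ! EFP selecting outer tag 1001 (B-VLAN) and inner tag 10 (C-VLAN), both tags popped
    interface TenGigE0/0/0/1.101 l2transport
     encapsulation dot1q 1001 second-dot1q 10
     rewrite ingress tag pop 2 symmetric
    !
    ! Routed BVI in the Tenant VRF
    interface BVI101
     vrf Tenant-1
     ipv4 address 10.1.10.1 255.255.255.0
    !
    ! Bridge Domain stitching the EFP to the routed BVI
    l2vpn
     bridge group HOSTING
      bridge-domain TENANT1-SERVICE
       interface TenGigE0/0/0/1.101
       !
       routed interface BVI101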

Beyond the Layer 3 Services

As mentioned previously, the added value of using the Bridge Domain is the ability to extend the actions beyond the L3 and the L3 VPN with a L2 or L2 VPN transport.

For example, the same double-tagged frame can be re-encapsulated using another overlay transport such as PBB-EVPN (Figure 20) or a Hierarchical VPLS deployment. The filter can be applied with "any" inner VLAN tag as a global match for further action. As a result, the same Client infrastructure can be kept isolated across multiple sites using a hierarchical DCI solution, without impacting the Client VLANs. Each set of Client VLANs can be directly bound to another double-tag encapsulation method such as PBB-EVPN or Hierarchical VPLS while still supporting very high scalability. This can become a nice alternative to a VXLAN Multipod, keeping each network infrastructure fully independent from a VXLAN data plane and control plane point of view.


Figure 19: Logical View Extending the C-VLAN to L2 VPN and L3 VPN via the Bridge Domain


Figure 20: Logical View of Tenant infrastructure extended across PBB-EVPN per-Tenant Service-Cloud
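
As a rough illustration of the hand-off shown in Figure 20, the sketch below outlines one possible IOS-XR construct where an EFP matches the B-VLAN with any inner C-VLAN and hands the whole double-tagged flow to a PBB-EVPN edge Bridge Domain. This is only an outline under those assumptions: the I-SID, EVI, names and tags are invented for the example, the full PBB-EVPN/BGP configuration is not shown, and the exact syntax may vary per release:

    ! EFP matching B-VLAN 1001 with any inner C-VLAN, tags kept intact
    interface TenGigE0/0/0/2.1001 l2transport
     encapsulation dot1q 1001 second-dot1q any
    !
    l2vpn
     bridge group CLIENT-BLUE
      bridge-domain BLUE-EDGE
       interface TenGigE0/0/0/2.1001
       !
       pbb edge i-sid 10001 core-bridge BLUE-CORE
      !
      bridge-domain BLUE-CORE
       pbb core
        evpn evi 1001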

Network and Security as a Service Cloud

In addition to private access to a particular Service Cloud, it is also possible to offer additional "Network and Security as a Service" (NSaaS) for each Client's set of multi-tier applications.

This feature offers a very granular and flexible solution. For example, as shown in Figure 21, multiple C-VLANs supporting a multi-tier application (e.g. VLAN 10 = Web, VLAN 20 = App, VLAN 30 = DB) are each bound to a specific Bridge Domain. Each concerned VLAN is sent to a particular network service node running in routed or transparent mode, to be treated according to the application requirements (firewalling, load balancing, IPS, SSL offloading, etc.).


Figure 21: Providing Network and Security Services From the Provider Resources
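
To make the idea in Figure 21 more concrete, here is a hedged IOS-XR sketch binding each application-tier C-VLAN of Client Blue (B-VLAN 1001) to its own Bridge Domain, where a second leg toward the service node (firewall, load balancer, etc.) can be stitched in; interfaces, tags and names are purely illustrative:

    ! One EFP per application tier of Client Blue, selected from B-VLAN 1001
    ! (C-VLAN 10 = Web tier, C-VLAN 20 = App tier)
    interface TenGigE0/0/0/1.1010 l2transport
     encapsulation dot1q 1001 second-dot1q 10
     rewrite ingress tag pop 2 symmetric
    interface TenGigE0/0/0/1.1020 l2transport
     encapsulation dot1q 1001 second-dot1q 20
     rewrite ingress tag pop 2 symmetric
    !
    ! EFPs on the interface facing the service node legs
    interface TenGigE0/0/0/4.10 l2transport
     encapsulation dot1q 10
    interface TenGigE0/0/0/4.20 l2transport
     encapsulation dot1q 20
    !
    ! One Bridge Domain per tier stitches the Client segment to its service leg
    l2vpn
     bridge group BLUE-NSAAS
      bridge-domain BLUE-WEB
       interface TenGigE0/0/0/1.1010
       interface TenGigE0/0/0/4.10
      bridge-domain BLUE-APP
       interface TenGigE0/0/0/1.1020
       interface TenGigE0/0/0/4.20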

Conclusion

As of today, Q-in-VNI support with VXLAN EVPN Fabrics is one of the best and easiest methods to transport multiple isolated Client infrastructures across the same physical data center infrastructure. In association with the Ethernet Virtual Connection and Ethernet Flow Point features of the Cisco Service Provider platforms (ASR9000 & NCS5000), each and every Client infrastructure can be selectively extended to the Service Clouds offered by the Hosting Provider.

With the binding of those private VLANs to a Bridge Domain, each Client infrastructure can be extended using Layer 2 VPN and/or Layer 3 VPN, maintaining the same level of segmentation across multiple sites. Last but not least, the EFP feature can furthermore be leveraged to offer additional Network and Security as a Service by the Hosting Provider for any of the Client's applications.
