7 – Native Extended Layer 2

The diversity of services required in a cloud computing environment and the constraints related to the type of applications moving over the extended network require a set of diversified DCI solutions. Cisco offers three groups of technical solutions that meet these criteria:

Point-to-Point Interconnections: For point-to-point interconnections between two sites using dedicated fiber or a protected dense wavelength-division multiplexing (DWDM) service, Cisco offers Multichassis EtherChannel (MEC) solutions that allow the physical links of a port channel to be distributed across two different chassis. MEC is available through two approaches:

  • A single control plane managing the two chassis: This method is available on the Catalyst 6500 series with the function of Virtual Switching System (VSS).
  • An independent control plane: This option is available on Cisco Nexus 5000 and Cisco Nexus 7000 Series switches with the function of a virtual Port-Channel (vPC).

These options provide active physical-link and edge-device redundancy to ensure the continuity of traffic between the remote sites for type 1 and type 2 faults. Both approaches eliminate the use of the Spanning Tree Protocol to control loops. In addition, the MEC solution improves bandwidth utilization.
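As an illustration, a back-to-back vPC interconnect between two sites might be configured along these lines on NX-OS. This is a hedged sketch, not a validated design: the domain ID, IP addresses, VLAN range, and interface numbers are all hypothetical.

```
! Hypothetical sketch: one Nexus chassis of a vPC pair used for DCI.
! All numbers (domain, addresses, VLANs, interfaces) are illustrative.
feature lacp
feature vpc

vpc domain 10
  ! keepalive between the two local chassis, typically over mgmt0
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! vPC peer-link between the two local chassis
interface port-channel1
  switchport mode trunk
  vpc peer-link

! DCI-facing port channel, with member links spread across both chassis
interface port-channel20
  switchport mode trunk
  switchport trunk allowed vlan 100-200
  vpc 20

interface Ethernet1/1
  switchport mode trunk
  channel-group 20 mode active
```

The remote site mirrors this configuration, yielding the double-sided ("back-to-back") vPC model with no Spanning Tree across the interconnect.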

Figure: MEC Solution


Multiple-Site Interconnections: For multi-site interconnections using optical links or a DWDM service running in protected mode, FabricPath (based on TRILL) can quickly and seamlessly connect multiple remote sites in a fabric fashion, remove the extension of the Spanning Tree Protocol between remote data centers, and offer far greater scalability than classical Ethernet. FabricPath is available on the Cisco Nexus 7000 Series Switches, with upcoming availability on Cisco Nexus 5500 Series Switches. FabricPath can also be used in a point-to-point model, which supports tying additional data centers into the cloud without impacting the production network or affecting existing connections.
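A minimal FabricPath enablement on a Nexus 7000 with F-series line cards looks roughly as follows (VLAN and interface numbers are illustrative):

```
! Hypothetical sketch: enabling FabricPath on a Nexus 7000 (F-series).
install feature-set fabricpath
feature-set fabricpath

! VLANs must be placed in fabricpath mode to be carried MAC-in-MAC
vlan 100-200
  mode fabricpath

! Core-facing (inter-site) links run the FabricPath IS-IS control plane
interface Ethernet2/1
  switchport mode fabricpath
```

Once the core ports are in fabricpath mode, IS-IS adjacencies and tree computation are automatic; no per-link Spanning Tree tuning is needed between the sites.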

Security: Traffic sent through the DCI Layer 2 extension can also be encrypted between Cisco Nexus 7000 Series Switches deployed at the network edge using Cisco TrustSec (CTS). With CTS, encryption is performed in hardware at line rate without impacting the performance or the latency of the traffic crossing the inter-site network. CTS offers a rich set of security services, including confidentiality of the data transmitted over the WAN via a standards-based encryption mechanism (IEEE 802.1AE, MACsec).
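A sketch of what enabling CTS link encryption on a DCI-facing interface might look like in manual mode. The interface and key are hypothetical, and the exact commands and options vary by platform and NX-OS release, so verify against the security configuration guide for your release:

```
! Hypothetical sketch: CTS manual mode with a pre-shared SAP key.
! Interface and key value are illustrative; verify syntax per release.
feature cts

interface Ethernet1/1
  cts manual
    ! SAP negotiates 802.1AE (MACsec) encryption using this pairwise key
    sap pmk abcdef0123456789
```

The peer interface at the remote site must be configured with the matching key for the SAP exchange to succeed.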


4 Responses to 7 – Native Extended Layer 2

  1. pieterj says:

    Hi Yves,

    We have two DCs connected via dark fibre, approximately 30 km apart, and have configured FabricPath for the DCI. The network has been running like this for over a year now. This was done mostly because we wanted to run L3 across the links, and FP supported this natively while vPC did not. The Cisco partner that supports our network wants us to convert this back to a vPC connection for DCI, claiming it will be easier to manage.

    We currently run FP with OSPF between the DCs. The OSPF links have been configured using the multi-topology feature in order to create 4 topologies, so that we can run P2P links and reduce the impact of an MDT failure. Anycast HSRP is configured towards the VMware environment.

    Would you support this recommendation to go to vPC, or would we be better off staying with the FP design between the 2 DCs?

    Are there any benefits to running vPC on the DCI instead of FP?

    Also, in a 2-DC topology, would you consider OTV instead of FP?


    • Yves says:

      Hi Pieter,

      Difficult to give you a definitive answer without going into the details of your application workflows and the design of the FP topology itself.
      FP was not natively designed to offer a DCI solution. There is no real demarcation at the DCI edge in your design: MAC-in-MAC encapsulation happens end-to-end between leaf nodes spread across the different DCs, so it is a stretched fabric from both a control plane and a data plane point of view. Having said that, in terms of manageability it is a single FP domain (certainly easier to operate as a single fabric, but it could be more challenging to debug in case of failure; it depends on the angle from which you look at managing the fabric). On the other hand, FP is very solid, very stable, very scalable, and easy to troubleshoot. We have had good experiences with FP in stretched-fabric mode deployed across 2 locations separated by short distances. As a result, in some dual-site designs, FP can be leveraged to extend the fabric across a few sites. But it is important to understand the shortcomings in a DCI context, and it looks like you have already identified all of them.

      Note that most FP deployments across multiple sites have been designed accordingly: a limited number of sites to interconnect, a solid and stable physical transport, and very limited use of multicast transport by the user applications.

      I don’t know your experience with the FP stretched fabric, but here is my first answer. Assuming you don’t have much multicast data traffic carried between the 2 locations, assuming the DCI transport fibers are handled by DWDM in protected mode with remote port shutdown, and leveraging multi-topology as you mentioned (a very efficient MDT separation, by the way), honestly I don’t see any reason to migrate to a dual-site vPC (even though L3 dynamic routing over vPC is now supported with N7k F3 line cards).
      From a manageability point of view, it is a balance between a single FP stretched fabric with no DCI demarcation to configure, and a DCI design with a dedicated back-to-back dual-site vPC configuration where VLANs are handed off for L2 extension purposes. IMHO, in your context there is not much to be gained by migrating from FP to vPC for that specific multi-site deployment. You may also think of a future need to add a third site to the current two, for which vPC would not be adequate.
      However, if your Cisco partner is proposing this migration to vPC, there are certainly good reasons. Please feel free to come back to me directly to discuss further the fabric/DCI management aspects that I may be missing.

      On the OTV side, OTV is definitely superior as a DCI model in terms of features, simplicity, troubleshooting, and maturity in addressing enterprise DCI requirements. It is agnostic of the transport between sites (L3) and within sites (L2 STP, vPC, RSTP, MST, FP, VXLAN, etc.). You can initiate the OTV service from any location within the DC. It relies on standard protocols (VXLAN for the data plane and IS-IS for the control plane). And above all, OTV was built to natively address all DCI concerns. As a consequence, if there is a need to migrate to a sturdier DCI solution that is easy to operate, the choice should go to OTV interconnecting two or more distant FP domains.

      All that said, this is not necessarily a definitive answer. I’m giving you my thoughts based on the details that I got from your side. As mentioned above, several other parameters may be taken into consideration and could lead to different choices.

      Hope that helps a bit,


  2. arby-surya says:


    I’m hijacking this thread because I’m working on a DCI design and we are thinking about FabricPath.

    The current design is a dual-site DCI with vPC plus BPDU filtering, the very common setup.

    Right now we are considering a relocation plus full L2 extension, maybe by extending L2 over a third or even a fourth site over dark fiber or xWDM, so running FP over these links is not a problem. (Is there any distance limitation with “switchport mode fabricpath”? I couldn’t find any information about that.)

    I know there are customers running FP for DCI and it works well. What are the biggest drawbacks/concerns I should care about? I saw above the concern about multicast transport (to avoid hairpinning of the traffic if you have more than 2 sites, because multidestination traffic uses FP trees).

    I’m also wondering about FHRP filtering over FabricPath (if I use a PACL on the DCI links, is the ASIC smart enough to drop HSRP hellos embedded in a FabricPath frame? Is there any lookup into the payload? Or would a VACL work on a VLAN in FP mode?).

    We are considering using Nexus 5600 for this need. OTV is not an option for us.


    • Yves says:

      Hi, sorry for the delay.

      Technically, there is no distance limit for the FP control plane itself, but it is a point-to-point technology, hence you need to be very careful with remote link failures. Make sure the xWDM service is provided with remote port shutdown and runs in protected mode (in case you want to leverage an EoMPLS pseudowire, for example).
      As previously mentioned, there is only one root per multidestination tree (MDT), hence broadcast and multicast communication between two local endpoints may happen via the remote site.
      FP is not a DCI solution per se, but it can indeed be leveraged to interconnect multiple sites: short distances, a limited number of sites to interconnect, and no multicast-greedy applications.

      FHRP filtering: You mentioned the N5600; keep in mind that none of the N5k series can filter HSRP hello messages, due to predefined ACL rules (in the ASIC) that take precedence over user-defined ACL rules. However, you could somewhat work around this with an HSRP group password mismatch. A password-mismatch workaround only works if FP is dedicated to the DCI links between the two DCs; if the same FP domain is stretched across the two far-end sites (from leaf nodes in DC1 to leaf nodes in DC2), then the password-mismatch workaround doesn’t work. A better (and recommended) way to get HSRP localization is to use the anycast HSRP feature in your FP network.
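      For reference, an anycast HSRP bundle on NX-OS is configured roughly as follows. This is a sketch only: the bundle ID, switch-id, and VLAN range are illustrative, and the exact options should be checked against the FabricPath configuration guide for your release:

```
! Hypothetical sketch: anycast HSRP bundle on a FabricPath spine.
! Bundle ID, switch-id, and VLAN range are illustrative values.
feature hsrp

hsrp anycast 1 ipv4
  ! virtual switch-id shared by all gateways in the bundle
  switch-id 100
  ! VLANs served by this anycast gateway bundle
  vlan 100-200
  no shutdown
```

      Each participating aggregation node carries the same anycast switch-id, so hosts at every site resolve the gateway to their local node without any HSRP hello filtering on the DCI links.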

      Although the FP protocol itself is not latency sensitive (the applications are) and could technically be used to build a big ring interconnecting many different DCs, the recommendations for using FP for DCI purposes can be summarized as follows:
      – Metro distances, dark fibers or xWDM in protected mode with remote port shutdown.
      – Avoid interconnecting more than 2 sites (risks of routing black holes).
      – Be very careful with multicast-greedy applications (hairpinning between sites due to the MDT root).

      Hope that helps, yves
