Distributed Virtual Data Center

Some of the individuals posting to this site, including the moderators, work for Cisco.  Opinions expressed here and in any corresponding comments are the personal opinions of the original authors, not those of Cisco.

Dear readers,

The first recommendation I would like to highlight before going deeper into the different articles is: don’t extend a Layer 2 segment beyond your physical DC if you don’t need to. Keep in mind that, for better cost containment while maintaining easier IT operations, different solutions exist to geographically (cold) migrate an application while maintaining the same platform identifiers, without extending the LAN across long distances (e.g. LISP IP mobility in across-subnets mode). We also need to take the distances and the transport into consideration: a dedicated 3 km fiber link between two DCs does not necessarily suffer from the same challenges as a Layer 2 pseudowire or overlay deployed over long distances.

Distributed Virtual DC, aka DC Interconnect (DCI), concerns not only Layer 2 extension to offer a solid and transparent interconnection for workload mobility (disaster avoidance); Layer 3 transport may also be required for IP mobility and business continuity (disaster recovery), as well as for intelligent IP localization services to improve path optimization.

– Do I really need to extend my subnet outside my data center?

– Can Layer 3 help support application mobility between sites without LAN extension?

– When and how can we optimize the traffic to and from the DC, or between application tiers, after a machine moves?

I have tried to summarize most of the components and requirements involved in interconnecting resources spread over long distances in the following paper:

DCI or Inter-cloud Networking

Let’s go step by step through the different topics (IP Mobility, LAN Extension, SAN Extension and Path Optimization) so I can give you details on each technology and, above all, let you comment based on your experience. Feel free to ask any questions.

Having said that, please also feel free to use this blog to share your experiences interconnecting geographically dispersed resources across multiple data centers.

With the evolution of the DC fabric network, I am also going to add some articles on intra-DC fabric solutions and on how DCI solutions are evolving, and will evolve, to better support application resources spanned from site to site.

Keep cool and don’t be shy, it’s a friendly blog 🙂

Remember that, above all, because the speed of light is greater than the speed of sound, some folks appear brilliant before they sound like idiots!


6 Responses to Distributed Virtual Data Center

  1. Peter says:

    Hi Yves,
    It looks like the link (DCI or Inter-cloud Networking) on this page is broken
    Broken Hyperlink:
    Best Wishes,

  2. Peter says:

    Hi Yves,
    I wondered if you were planning a blog post on the subject of DCI specifically in the context of ACI.
    And thanks for providing a link to that white paper.

    • Yves says:

      Absolutely 🙂
      There are multiple options and scenarios that will be elaborated.
      I will post a couple of articles soon.
      Thank you, yves

      • Vincent says:

        Hello Yves,

        If you want to use ACI with 9336PQ as spines and 9396PX as leaves (with FEXes connected to these leaves) and you want to have ACI present on two different sites separated by an MPLS network, which design would you advise?
        – Stretched fabric with 3 APICs in total?
        – Dual-fabric with 6 APICs in total?
        –> I thought dual-fabric would fit because there will be an L3 separation between both DCs. But does it mean you need OTV to stretch the L2 VLANs across the MPLS network? Does it mean additional utilization of an N7K or ASR? Isn’t there a solution with only the N9K?

        Thank you in advance,


        • Yves says:

          Hi Vincent,

          Let me try to be very succinct; I will come back soon with a series of articles on ACI.

          The two options are valid, but a stretched fabric requires more caution and imposes some rules.
          As of today, the stretched fabric needs a partial-mesh design, as explained in post 29 http://yves-louis.com/DCI/wp-content/uploads/2015/06/transit-leaf-1.png, and you need 40GE point-to-point fiber from the spine to the leaf layer.
          How do we address this design with an MPLS core?
          We have tested and validated this scenario using EoMPLS port cross-connect, offering both the speed adaptation (40GE < => 10GE EoMPLS < => 40GE) and the pseudowire link (EoMPLS). The APIC cluster is stretched across the two sites with its 3 members (2+1). Keep in mind that the maximum latency should not exceed 10 ms between a spine node (Fabric A) and a leaf node (Fabric B).
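          To make the port cross-connect idea more concrete, here is a minimal, hypothetical EoMPLS sketch in IOS-XE style: the physical port facing the ACI node is bound to a pseudowire toward the remote PE, so the fabric sees a transparent point-to-point link. The interface numbers, peer address and VC ID are placeholders, not values from a validated design.

          ```
          ! Hypothetical EoMPLS port cross-connect (IOS-XE style) - all
          ! interface names, IPs and VC IDs below are illustrative only.
          interface TenGigabitEthernet0/0/1
           description Port facing the ACI spine/leaf (carried transparently)
           no ip address
           ! Bind the whole port to a pseudowire toward the remote PE
           ! (192.0.2.2 is a placeholder loopback, 100 a placeholder VC ID)
           xconnect 192.0.2.2 100 encapsulation mpls
          ```

          The same VC ID must be configured on the remote PE so the two ports behave like one wire end to end.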

          For the dual-fabric option (two independent APIC clusters), you can enable OTV (N7k or ASR1k) from a pair of vPC border leaf nodes on each site and initiate the overlay over the MPLS core. Alternatively, you can simulate a double-sided back-to-back vPC between the vPC border leaf nodes of each site, using EoMPLS as the pseudowire.
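          As a rough illustration of the OTV option, here is a minimal NX-OS sketch for an N7k attached to the vPC border leaf nodes. The site identifier, VLAN ranges, multicast groups and interface number are all placeholders; a real deployment needs the full OTV design guidance (site VLAN on both edge devices, MTU sizing across the core, etc.).

          ```
          ! Hypothetical OTV edge-device sketch (NX-OS, N7k) - all values
          ! below are placeholders for illustration only.
          feature otv
          otv site-identifier 0x1        ! unique per physical site
          otv site-vlan 99               ! VLAN used to detect local edge devices

          interface Overlay1
            otv join-interface Ethernet1/1   ! L3 uplink toward the MPLS core
            otv control-group 239.1.1.1      ! multicast group for the control plane
            otv data-group 232.1.1.0/28      ! range for multicast data traffic
            otv extend-vlan 100-110          ! L2 VLANs stretched between the DCs
            no shutdown
          ```

          If the MPLS core cannot carry multicast, OTV can instead run in unicast adjacency-server mode; either way, only the VLANs explicitly listed under `otv extend-vlan` are stretched.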

          Kind regards, yves
