Hot networks served chilled, DCNM style
When I started this blog for Data Center Interconnection purposes some time ago, I was not planning to talk about network management tools. Nevertheless, I recently tested DCNM 11 to deploy an end-to-end VXLAN EVPN Multi-site architecture, so I thought I would share my recent experience with this software engine. What pushed me to publish this post is that I've been surprisingly impressed with how efficient and time-saving DCNM 11 is in deploying a complex VXLAN EVPN fabric-based infrastructure, including the multi-site interconnection, while greatly reducing the risk of human error inherent to the several hundred CLI commands otherwise required. Hence, I sought to demonstrate the power of this fabric management tool through a series of short videos, even though I'm usually not a fan of GUI tools.
To cut a long story short, if you are not familiar with DCNM (Data Center Network Manager): DCNM is a software management platform that can run from a vCenter VM, a KVM machine, or a bare-metal server. DCNM is a general-purpose Network Management System (NMS) and Operations Support System (OSS) product targeted at NX-OS networking equipment. It focuses on Cisco Data Center infrastructure, supporting a large set of devices, services, and architecture solutions. It covers multiple types of Data Center fabrics: from storage fabric management, where it originated, with the MDS 9000 and Nexus platforms, to IP-based fabrics, to traditional classical STP networks or even legacy FabricPath domains, and recently Media networking. But more importantly, and relevant to this blog, it also supports modern VXLAN EVPN fabrics. It enables the network admin organization to provision, monitor, and troubleshoot the data center infrastructure (FCAPS*).

DCNM relies on multiple pre-configured templates used to automate the deployment of many network services and transport functions such as routing, switching, VXLAN MP-BGP EVPN, Multicast deployment, etc. We can also modify or create our own templates, which makes DCNM very flexible, going beyond a limited hard-coded configuration. And last but not least, DCNM offers a Power-On Auto Provisioning (POAP) service using the fabric bootstrap option, allowing network devices to be automatically provisioned with their configuration of interest at power-on, without entering a single CLI command. Finally, DCNM contributes to the Day 0 (racking, cabling, power-on, with zero bootstrap configuration), Day 1 (POAP, VXLAN EVPN Fabric Builder, underlay, overlay), and Day N (network and host deployment, firmware updates, monitoring, fault management) operations of network fabrics.
* FCAPS is a network management framework created by the International Organization for Standardization (ISO). FCAPS groups network management operations into five categories: (F) Fault, (C) Configuration, (A) Accounting, (P) Performance, and (S) Security.
For this article, I will just cover a small section of DCNM 11, focusing on VXLAN EVPN and Multi-site deployment. And, because of the Graphical User Interface, I thought that a series of videos could be more visual and easier to understand than a long technical post to read.
However, if you need further technical details about DCNM 11, there is a very good User Guide available from this public DCNM repository.
I have divided the deployment of a VXLAN EVPN Multi-site infrastructure into five main stages. I'm not certain this matches how the DCNM development team thinks about it, but this is how I saw it when using DCNM 11 for the first time.
- Installation of DCNM 11
- Building multiple VXLAN EVPN Fabrics
- Extending the VXLAN Fabrics using VXLAN EVPN Multi-site
- Deploying Networks and VRFs
- Attaching Hosts to the VXLAN Fabric
DCNM 11 installation
The installation of DCNM 11 is very fast; however, it requires two different steps: the OVF template deployment and the DCNM Web Installer. Notice that instead of the OVA installation, it is also possible to install the ISO virtual appliance if you wish to run DCNM on KVM or a bare-metal server. Let's focus on the OVA installation, but if needed, all installation and upgrade details are available at cisco.com.
The first action for installing the OVA is to download the dcnm.ova file from CCO. You can try DCNM 11 for free for a certain period of time. Yes, there is a 30-day full Advanced Feature trial license!
If you are not yet familiar with VXLAN EVPN, with or without VXLAN EVPN Multi-site, or if you are a bit anxious about risks of configuration errors, you will definitely enjoy this software platform.
A few comments on the installation. Firstly, the OVF template comes with 3 networks:
- One interface network for the DCNM management itself
- A second interface for the out-of-band management of the fabric
- A third one that provides In-Band connection to the fabric. It will be used for communicating with the control plane of the fabric, for example to locate the end-points within the VXLAN fabric.
If you are running the vCenter web client, at the end of the OVA installation, a pop-up window will propose to start the DCNM Web Installer automatically. Otherwise, you can start the Web Installer using the management IP address on port 2080 (http://a.b.c.d:2080), as shown in the 1st video below.
Secondly, the DCNM Installer will ask you to choose the installation mode corresponding to the type of network architecture and technology you are planning to deploy. Consequently, select the “Easy Fabric” mode for VXLAN EVPN fabric deployment and automation. It will then ask for a few traditional network parameters such as NTP servers, DNS server, and fully qualified hostname, so be prepared with these details.
Subsequently, you need to configure the network interfaces.
- dcnm-mgmt network:
This network provides connectivity (SSH, SCP, HTTP, HTTPS) to the Cisco DCNM Appliance. Associate this network with the port group that corresponds to the subnet that is associated with the DCNM Management network.
This one should be already configured from the previous OVA installation.
- enhanced-fabric-mgmt network:
This network provides enhanced fabric management of the Nexus switches. You must associate this network with the port group that corresponds to the management network of the leaf and spine switches.
- enhanced-fabric-inband network:
This network provides in-band connection to the fabric. You must associate this network with the port group that corresponds to a fabric in-band connection.
This network is essentially used to communicate with the control plane of the VXLAN fabric in order to exchange crucial endpoint reachability and location information. Actually, we don't need this interface for the deployment of the VXLAN EVPN fabric per se, but it will be useful later, for example, for a feature called Endpoint Locator. I didn't configure this network interface for the purpose of this demo.
I will try to add more videos to demonstrate further DCNM features such as Endpoint Locator or NGOAM across Multi-site. Consequently, I will configure the in-band network interface when appropriate.
The 1st video below is optional; it describes how to install the OVA file and shows the configuration of the DCNM Web Installer. If you are already familiar with the OVF template deployment, you can skip this video.
Video 1: DCNM 11 OVA Installation and WEB Installer
Building multiple VXLAN EVPN Fabrics
This second video demonstrates how DCNM 11 can be leveraged to deploy VXLAN EVPN fabrics in the context of multiple sites interconnected using a routed Layer 3 WAN.
The first action is to create the network underlay for each fabric. There are several profiles available to build the DC network of your choice. The one we need to select for deploying a VXLAN EVPN fabric is called “Easy Fabric”, which is the default architecture we previously configured during the Web Installer initialization. Most of the network parameters are already provisioned by default. The only values to enter manually are the AS number and the site number for each VXLAN domain. The rest can be left at its defaults. Nonetheless, DCNM offers the flexibility to change any parameter to whatever value the network manager wishes to use. The video demonstrates the usage of particular ranges of IP addresses for the underlay components. Additional options are available if needed, for example, to enable Tenant Routed Multicast in order to optimize the routed c-multicast traffic across the VXLAN EVPN fabric.

Another great added value of DCNM is the ability to automate the deployment of multiple fabric devices using POAP bootstrap configured from the centralized DCNM platform. As a result, there is no need to connect switch by switch through each physical console port in order to configure management access via the CLI. Instead, all the switches automatically discovered by DCNM and identified by the network manager as part of the VXLAN fabric will download their configuration automatically during their power cycle.
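To give a feel for how little per-fabric input is actually required, here is a minimal Python sketch that assembles an “Easy Fabric” creation payload: only the AS number and site ID must be supplied by hand, everything else can stay at the DCNM defaults. The template name, nvPair field names, and REST endpoint mentioned in the comment are assumptions modeled on the DCNM 11 REST API, not verified values.

```python
import json

def build_easy_fabric(name, bgp_as, site_id):
    """Return a minimal 'Easy Fabric' creation payload (hypothetical fields)."""
    return {
        "fabricName": name,
        "templateName": "Easy_Fabric",   # assumed template identifier
        "nvPairs": {
            "BGP_AS": str(bgp_as),       # per-fabric AS number (manual input)
            "SITE_ID": str(site_id),     # per-fabric site identifier (manual input)
        },
    }

if __name__ == "__main__":
    payload = build_easy_fabric("Fabric-1", 65001, 1)
    print(json.dumps(payload, indent=2))
    # A real deployment would POST this to the DCNM REST API, for example:
    # requests.post(f"https://{dcnm}/rest/control/fabrics", json=payload, ...)
```

Everything not passed explicitly would be filled in from the defaults shown in the Fabric Builder screen.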
Even the respective role of each device (Spine, Leaf, Border Leaf) is automatically recognized, although it can be modified on the fly if needed, as shown in the video for the Multi-site Border Gateway function.
Therefore, the only requirement prior to starting the DCNM configuration is to collect the serial numbers of the switches, so they can be identified and selected accordingly and assigned their corresponding network management identifiers. Another small tip that can help prior to the creation of the fabric is to draw a high-level topology diagram showing the physical interfaces used for the uplinks, as well as the connections toward the WAN edge devices used for inter-fabric network establishment.
For network managers who want to check the outcome prior to deploying the configuration to the switches, it is possible to “preview” the configuration in CLI format, so they know exactly what will be pushed to each switch. When a configuration is deployed, it is also possible to check its status and, in case of error, see where the configuration stopped and why. In addition, if the running configuration differs from the one DCNM expects (for example, if someone changed a parameter directly via the CLI), DCNM will highlight the differences with the unexpected CLI.
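The out-of-band change detection described above is, at heart, a diff between the configuration DCNM intends and what is actually running on the switch. This toy Python snippet reproduces the idea with the standard difflib module; the NX-OS lines are purely illustrative.

```python
import difflib

# What DCNM intends to have on the switch
intended = """\
vlan 2300
  vn-segment 30000
interface nve1
  member vni 30000
""".splitlines()

# What is actually running (someone changed the vn-segment by hand via the CLI)
actual = """\
vlan 2300
  vn-segment 30001
interface nve1
  member vni 30000
""".splitlines()

# unified_diff flags the out-of-band change, much like DCNM's highlighting
diff = list(difflib.unified_diff(intended, actual, lineterm=""))
for line in diff:
    print(line)
```

The `-`/`+` pair in the output pinpoints exactly which CLI line diverged from the expected state.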
This video also covers how to connect each fabric to the WAN. To achieve this inter-site Layer 3 connectivity, we need to create an External Fabric that represents the L3 WAN/MAN. By selecting a seed device in the L3 network and defining the appropriate number of hops from there, the Layer 3 topology with all existing routers of interest is automatically discovered.
Video 2: Building Multiple VXLAN EVPN Fabrics
Video 3: Fabric interconnection with VXLAN EVPN Multi-site
In video 2, we saw how the external Layer 3 core connects both VXLAN EVPN fabrics. This next stage illustrates the initialization of the VXLAN EVPN Multi-site feature in order to interconnect these two VXLAN EVPN fabrics on top of the Layer 3 core via their respective Border Gateways.
VXLAN EVPN Multi-Site uses the same physical border gateways to terminate and initiate the overlay network inside the fabric, as well as to interconnect distant VXLAN EVPN domains using an integrated VXLAN EVPN-based network overlay.
VXLAN EVPN Multi-Site is a hierarchical solution that relies on proven network design principles to offer an efficient way of extending Layer 2 and Layer 3 segments across multiple EVPN-based fabrics. For additional reading on VXLAN EVPN Multi-site, I recommend looking at the previous post (37) as well as the public white paper that covers VXLAN EVPN Multi-site in depth.
Currently, DCNM 11.0(1) requires you to configure the underlay and overlay network establishment between each device (from each BGW to its distant peers), but the development engineers are already working on a new version that further simplifies the automation of the Multi-site creation. I will post a new video as soon as I can test it.
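To illustrate why this per-device step gets tedious as the design grows, here is a small Python sketch that enumerates the overlay sessions to configure: each BGW must be peered with every BGW in every remote site, so the count is a full cross-site mesh. The site and switch names are made up for the example.

```python
from itertools import combinations

def cross_site_peerings(bgws):
    """List every (local BGW, remote BGW) overlay session across site pairs."""
    sessions = []
    for site_a, site_b in combinations(sorted(bgws), 2):
        for a in bgws[site_a]:
            for b in bgws[site_b]:
                sessions.append((a, b))
    return sessions

if __name__ == "__main__":
    # Two sites with two Border Gateways each (hypothetical names)
    bgws = {
        "site1": ["BGW1-1", "BGW1-2"],
        "site2": ["BGW2-1", "BGW2-2"],
    }
    # 2 BGWs x 2 BGWs = 4 sessions to configure between just two sites
    for local, remote in cross_site_peerings(bgws):
        print(f"{local} <-> {remote}")
```

Adding a third two-BGW site triples the session count, which is exactly the kind of repetitive configuration a newer DCNM release can automate.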
Video 3: Fabric interconnection with VXLAN EVPN Multi-site
Video 4: Deploying Networks and VRFs
This phase shows the creation of the Layer 2 and Layer 3 network segments and how they are pushed to the Border Gateways. The order of this stage may differ; the creation of networks and VRFs can also be initiated after the attachment of the compute nodes. The process of building a network overlay is quite simple. We can keep the proposed VXLAN network ID, which is consumed from the DCNM pool. We can provide a new name for that network or leave it as is, etc. What is required is to map the relevant VLAN ID for that specific network. Finally, we need to associate the network with its Layer 3 segment (VRF). The VRF can either be selected from a list of previously created VRFs, or a new one can be built from the same window. Then, the default gateway (SVI) for that particular network can be configured under the network profile. Getting the whole configuration ready to deploy takes only a few seconds.
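The handful of inputs described above (a VNI from the pool, a name, the VLAN mapping, the owning VRF, and the gateway SVI) can be captured in a short Python sketch. The field names here are hypothetical and illustrative, not the exact DCNM payload schema.

```python
def build_network(name, vni, vlan_id, vrf, gateway_ip):
    """Return a minimal network-overlay definition (hypothetical field names)."""
    return {
        "networkName": name,
        "vni": vni,                      # consumed from the DCNM VNI pool
        "vlanId": vlan_id,               # VLAN mapped to this network on the switches
        "vrf": vrf,                      # Layer 3 segment (VRF) the network belongs to
        "gatewayIpAddress": gateway_ip,  # default gateway SVI for the network
    }

if __name__ == "__main__":
    # Example values only: one network attached to an existing tenant VRF
    net = build_network("WebTier", 30000, 2300, "Tenant-1", "10.1.1.1/24")
    print(net)
```

Once such a definition exists, deploying it is just a matter of selecting the target devices, which is what the video shows for the Border Gateways.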
When all desired networks have been created, from the topology view under the control network tab, we must select the devices with the same role and deploy the networks of choice. During this video, I only selected the Border Gateway devices. The deployment of networks to the compute leaf nodes is discussed in the next and last video, after the vPC port-channel has been created.
Notice that prior to deploying any configuration to a switch, it is always possible to preview it before it is pushed to the targeted devices.
Video 4: Deploying Networks and VRF’s
Video 5: Host Interface deployment and Endpoint discovery
This last video concludes the demonstration of deploying an end-to-end VXLAN EVPN Multi-site infrastructure, finishing with the configuration of the vPC port-channels where hosts are locally attached to the fabric, and how to allow the relevant networks on those vPC port-channels.
It also demonstrates how DCNM 11 can integrate a VMware topology into its dynamic topology views, automatically discovering the vCenter that controls the host-based networking on the fabric to show how virtual machines, hosts, and virtual switches are interconnected.
In this video, once the configuration of the port-channels is completed, intra-subnet and inter-subnet communication is verified using a series of tests, inside the fabric and across the Multi-site extension.
Video 5: Host Interface deployment, vCenter discovery and Verification
After the physical deployment of the fabric (racking, cabling, powering on all gear), and after collecting the serial numbers and drawing the uplink interfaces with their respective identifiers in a high-level topology diagram, it takes only a couple of hours to build the whole Multi-site VXLAN EVPN infrastructure, ready to bridge and route traffic across multiple VXLAN EVPN fabrics in a multi-tenancy environment.
I only recorded each video once, because every first attempt to use DCNM to deploy the VXLAN EVPN Multi-site was a success. The communication from site to site worked immediately. BTW, I only found the DCNM 11 User Guide referenced in this post after I had run all the DCNM configurations. It's a rumor, please don't repeat it, but the engineers working on DCNM said that several crucial improvements will come with the major release (MR) coming soon. I am very eager to test what they have done :). I will demo these improvements here as soon as I can get a solid beta version to try.
I’ll keep you posted. However, in the meantime, please feel free to bring your comments.
Thank you for reading and watching!