Note: Since I wrote the following articles on ASA clustering stretched across multiple locations, additional improvements have been made to address some of the concerns listed in the 27.x posts. Please have a look at the ASA release notes (especially 9.5(1) and 9.5(2)):
- 9.1(4) Geographically dispersed ASA cluster, up to 10 ms of latency
- 9.2(1) Validated Spanned Interface mode (L2) North-South Insertion
- 9.3(2) Spanned Interface mode (L2) – East-West Insertion
- 9.5(1) Site Specific Identifier and MAC address
- 9.5(2) LISP Inspection for Inter-site Flow Mobility
Refer to the latest configuration guide for the updated features that are not discussed in this post:
https://www.cisco.com/c/en/us/td/docs/security/asa/asa97/configuration/general/asa-97-general-config/ha-cluster.html#ID-2170-000001d3
Stateful Firewall devices and DCI challenges
Having dual sites or multiple sites in Active/Active mode aims to offer elasticity of resources available everywhere in different locations, just as with a single logical data center. This solution also brings business continuity with disaster avoidance. This is achieved by manually or dynamically moving the applications and software framework to where resources are available. When “hot”-moving virtual machines from one DC to another, there are some important requirements to take into consideration:
- Maintain the active sessions stateful without any interruption for hot live migration purposes.
- Maintain the same level of security regardless of the placement of the application.
- Migrate the whole application tier (not just one single VM) and enable FHRP isolation on each side to provide a local default gateway (which works in conjunction with the next bullet point).
- While maintaining the live migration, it can be crucial to optimise the workflow and reduce the hairpinning effect as much as we can, since it adds latency. As such, the distances between the sites, as well as the network services used to optimize and secure the multi-tier application workflows, amplify the impact on performance.
As with several other network and security services, the firewall is a stateful device that imposes a one-way symmetrical establishment. That means return traffic must hit the owner of the session; otherwise, the packet is dropped. Traditionally, firewalls are deployed as a pair of devices in an Active/Standby manner, with dedicated layer 2 adjacency links to synchronize the states of all sessions and to probe the health of the peer. When the active firewall fails, the standby takes over, maintaining all active sessions stateful in a manner transparent to the application and to the end-user. As of today, most enterprises deploy their perimeter firewalling in Active/Standby mode, mainly for tightly coupled data center designs (metro distances using fiber links).
Figure 1: Typical tightly coupled DC deployment with firewalling. The primary DC-1 attracts all the traffic for the application of interest (best metrics). By default it is expected to maintain the session workflow within the same DC.
As discussed in the previous high-level post 13 – Network Service Localization and Path Optimization, as a result of state failover of network services as well as application mobility, it is not rare to see 10 to 20 roundtrips between the two sites for the same active session. This is forced by all the stateful devices in the path imposing a one-way symmetrical establishment on the return traffic. This includes the security WAN edge, IPS, SSL offloader and SLB devices, as well as the default gateways between application tiers, just to list the most common stateful devices.
Figure 2: The same application has moved to the secondary DC and a failover happened on the first firewall. NB: this is a basic design to keep the logic simple; usually additional stateful devices (SLB, SSL, IPS, WAAS, etc.) exist along the path, hence you can infer a longer final ping-pong effect.
If we consider that signal propagation takes roughly 1 ms roundtrip per 100 km of distance between the data centers, 10 roundtrips add almost 10 ms between request and response for the same session, which might have a performance impact on the application. In the context of metro distances, working in “degraded” mode during maintenance windows has usually been well accepted by network managers, as these windows were fully controlled by the network and security organizations.
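As a rough back-of-the-envelope check (assuming the commonly used approximation of about 5 µs of propagation delay per km of fiber):

  100 km x 5 µs/km ≈ 0.5 ms one-way, i.e. ~1 ms roundtrip
  10 roundtrips x 1 ms ≈ 10 ms added to a single request/response exchange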
With the increased demand for virtual machines and dynamic workload mobility, it becomes challenging to control all of the component states and placements impacting the application workflow. Hence the desire to dynamically control the optimum path to reach the application.
ASA Firewall clustering
Last year at Cisco Live in London, I presented a new concept based on firewall clustering to improve the DCI architecture; however, this enhanced solution was not yet supported due to some limitations with the ASA code (9.0) as well as the lack of testing.
Since version 9.1(4), and more recently version 9.2(1), the ASA clustering software has been improved to support long distances between members of a cluster (up to 10 ms one-way latency), and several designs have been tested and qualified in DCI deployment scenarios. Thus, the excitement to post this article now :).
There are several detailed documents available on ASA clustering itself. Hence, for the purposes of this post, let’s focus only on the mechanisms that we can leverage in a DCI scenario. For further details on the ASA cluster, I recommend reading the configuration guide, which gives all the details and explains the concept and nomenclature of ASA clustering. You will also find many great posts from others on the web.
Originally, ASA clustering aims to provide high-scale firewalling by stacking several physical ASA devices to form a single logical high-end firewall. All ASA devices are active and work in concert to pass connections as a single firewall.
To achieve this function, a “new” component called the Cluster Control Link (CCL) was created to collapse all physical members of the ASA cluster together to form a single logical firewall. The CCL is used for the control plane, health checks, state sync and config sync, as well as to redirect data plane traffic to the original owner of the session when needed. Remember, in classical A/S or A/A clustering modes, it is mandatory that the return traffic hits the owner of the session; otherwise the packet will be dropped. With ASA clustering, this rule still applies, but instead of dropping an asymmetric flow, the firewall that owns the session is known by all other ASA devices, thus the packet is automatically redirected to the original owner via the CCL. The traffic is load-distributed in a clever fashion between the members of the ASA cluster.
There are two possible modes to load balance the data traffic from the upstream device across the ASA units:
- Individual Interface Mode (layer 3) using ECMP or PBR
- Spanned Ether-Channel Mode (layer 2) using LACP
Figure 3: Individual Interface Mode (left) versus Spanned Ether-Channel Mode (right) in a fully redundant deployment using Multi-chassis Ether-Channel (MEC).
In both modes, each ASA device is dual-homed using a virtual port-channel toward a Multi-chassis Ether-Channel engine (e.g. vPC), and the traffic from the upstream device is load distributed at layer 3 (ECMP or PBR) for the Individual mode (left) or at layer 2 (LACP) for the Spanned mode. It is not possible to mix different modes within the same cluster.
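For reference, the interface mode is set once for the whole cluster, before the rest of the bootstrap configuration. A minimal sketch (only the mode keyword matters here; everything else is left at defaults):

  ! Layer 3 distribution (ECMP or PBR) toward each individual unit:
  ciscoasa(config)# cluster interface-mode individual force
  ! or layer 2 distribution (LACP) across a spanned port-channel:
  ciscoasa(config)# cluster interface-mode spanned force

The check-details keyword (instead of force) can be used first to verify whether the current configuration is compatible with the chosen mode.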
From a protocol point of view, only a single (logical) device can exist on each side of an LACP establishment between two entities.
Figure 4: A single logical device back to back for the LACP establishment.
Thus, in Spanned Mode, each side must present itself as a single logical LACP peer device: on the network side, the upstream pair of Nexus switches uses a virtual port-channel (vPC), and on the firewall side, the ASA cluster uses an enhanced LACP mode called cluster LACP (cLACP), which forms a virtual port-channel extended across all ASA units in the cluster, using the same IP address and the same virtual MAC address.
The ASA clustering deployment design is flexible. It can be deployed using a single port-channel to the same insertion point at the aggregation layer. As a result, both external (non-secured) and internal (secured) traffic traverse the same physical port-channel, and the separation is achieved at layer 2 using VLAN tagging. The other method is to deploy the ASA cluster in sandwich mode between two Virtual Device Contexts (VDC) in order to physically separate external and internal traffic, each dedicated to an inside and outside port-channel, as discussed in this post. This article relies on that sandwich-mode method, which provides a solid hierarchical architecture.
Figure 5: Don’t confuse the forwarding mode of the firewall with the interface distribution mode among the ASA cluster. The ASA cluster can run in Routed Mode or Transparent Mode like most firewalls, with or without multiple contexts. It is important to clarify which mode is supported. For example, if an enterprise wants the firewall to run in Transparent Mode, the only option to distribute the load among the ASA units is Spanned Ether-Channel Interface Mode.
Whichever load balancing protocol is used, there is no symmetrical algorithm that ensures return traffic re-uses the same path in reverse. Hence, it is not guaranteed to automatically hit the original owner of the session. The ASA cluster gets around this one-way symmetrical establishment by redirecting the session to its owner via the CCL.
Figure 6: TCP handshake establishment with redirection to the session owner over the CCL
- In the example above, a new session (TCP SYN) is established, hitting ASA 2, which encodes a SYN cookie with its own information (1) and then forwards the SYN packet to the next destination (2).
- When the TCP SYN/ACK comes back, the traffic is load balanced based on the layer 2 or layer 3 source and destination identifiers and sent toward a different ASA; in our example, the TCP SYN/ACK arrives at ASA 3 (3).
- ASA 3 decodes the owner information from the SYN cookie and notices that ASA 2 is the owner of that session (4).
- ASA 3 immediately forwards the packet to the owner unit ASA 2 over the CCL, which in turn forwards the SYN/ACK via its inbound interface.
Consequently, the CCL must be dimensioned according to the speed of the inbound and outbound interfaces (e.g. 2 x 10GE for resiliency and performance).
Figure 7: To increase resiliency, it is recommended to dual-home each data interface using a port-channel split between the two upstream switches. There is one port-channel per ASA device.
ASA Firewall clustering spanned across multiple sites
Now that we understand how ASA clustering is created, let’s see how to leverage the clustering mode for a DCI solution.
For our purposes and examples, an ASA cluster formed with four physical ASA units is stretched across two DCs, with two ASA members on each site. We will keep this scenario for the whole article, but nothing prevents us from adding more ASA units (up to 16 units per ASA cluster are currently supported) stretched across three or more DCs interconnected using LAN extension (we are using OTV for multi-site LAN extension).
ASA cluster Configuration
Some of the added value of deploying the ASA cluster is that the configuration is synchronized between all units; hence there is no need to manually replicate all policies on each unit, thus avoiding the risk of misconfiguration.
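To illustrate, here is a minimal bootstrap sketch for the first unit (the cluster name, unit name, port-channel number and CCL addressing are placeholders, not values from a validated design); once the units have joined, any policy configured on the master is replicated to all members:

  interface TenGigabitEthernet0/6
    channel-group 1 mode on
    no shutdown
  interface TenGigabitEthernet0/7
    channel-group 1 mode on
    no shutdown
  !
  cluster group DCI-CLUSTER
    local-unit asa-dc1-1
    cluster-interface Port-channel1 ip 192.168.100.1 255.255.255.0
    priority 1
    key <shared-secret>
    enable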
Extending the Cluster Control Link across two DCs
The first component to extend between the DCs is the Cluster Control Link. This CCL connection must be fully resilient, as discussed previously, and extended across sites using a solid DCI LAN extension. Each ASA uses a dedicated port-channel split between the two vPC peers. From the upstream logical switch's point of view, the port-channels are unrelated, except that they carry the same VLAN from ASA units to ASA units across the two sites. From a security point of view, it is preferable to deploy the CCL inside the secure perimeter.
Figure 8: CCL deployment, each ASA uses a dedicated port-channel split between the two vPC peers.
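Since we are using OTV for the multi-site LAN extension, a minimal sketch of the OTV edge device configuration extending the CCL VLAN could look like this (the overlay name, multicast groups, site values and VLAN numbers are placeholders):

  otv site-identifier 0x1
  otv site-vlan 99
  !
  interface Overlay1
    otv join-interface Ethernet1/1
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/28
    otv extend-vlan 100            ! 100 = CCL VLAN (data VLANs can be added here too)
    no shutdown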
As discussed above, there are two modes to load-distribute the data traffic among the ASA members: a layer 3 mode called Individual Interface Mode and a layer 2 mode called Spanned Ether-Channel Mode. Both modes are valid for DCI, with slightly different added values for one mode versus the other, which will be discussed as we move forward.
Extending the Data plane using Individual Interface Mode
The ASA units are deployed in sandwich mode between the inside and outside routers and ECMP is used to load distribute the traffic across the local ASA members. The CCL as well as the data plane VLANs are extended between sites.
Figure 9: For the purposes of this topic, the CCL is represented using a simple logical link (orange) extended between the two DCs; however, in the final deployment it should be distributed in a sturdy and redundant fashion as described in Figure 8.
The application data traffic (layer 2) is isolated by a layer 3 hop, and the CCL is also isolated from the data workflow. However, nothing prevents network managers from collapsing the CCL VLAN with the application data VLANs within the same overlay network, as the segmentation is maintained at layer 2 with dot1Q tagging.
From a logical layer 3 point of view, IGP adjacency is established between the inside and outside routers through the local ASA units, as well as between sites (with a higher cost across sites, used for disaster recovery purposes).
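As a sketch of that cost tuning (the OSPF process, interface names and cost value are placeholders), on each inside/outside Nexus router:

  feature ospf
  router ospf 1
  !
  interface Vlan10
    description Facing the local ASA units
    ip router ospf 1 area 0.0.0.0
  !
  interface Ethernet2/1
    description Inter-site layer 3 path
    ip router ospf 1 area 0.0.0.0
    ip ospf cost 1000              ! higher cost: used for disaster recovery only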
Figure 10: The default gateway is unique and still active in DC-1. After the migration of the application, the traffic hits the original router, goes back to DC-2 for the communication with the application's upper tier, and finally returns to DC-1. The ping-pong effect starts after the move.
- In the scenario on the left, the primary DC-1 attracts the request for the application that exists in DC-1.
- The default gateway for the application is active in DC-1 and standby in DC-2.
- The ASA-1 (far left) becomes the owner of the session and routes the packet to the application.
- In the scenario on the right, the application has moved (hot stateful live migration) and continues to respond with no interruption.
- The return traffic hits the active default gateway in DC-1 which routes the packet toward the frontend server.
- The frontend server responds to the end-user via its default gateway (DG) active on DC-1.
- The DG on DC-1 distributes the packet to ASA-2.
- ASA-2 checks the owner of the session and redirects the packet to ASA-1 over the CCL.
- The workflow exits DC-1 toward the end-user.
- The session is maintained stateful with zero interruption.
Beyond the ASA clustering, the concern is that, even if the tiers of the application are all moved to the distant DC-2, the routed communication between the tiers is still established via the active default gateway located in DC-1. Hence the traffic from the back-end to the front-end is hairpinned via DC-1.
Consequently, there is great interest for most network managers in enabling HSRP isolation to improve the server-to-server communication, as shown below.
Figure 11: HSRP isolation reduces the hairpinning; however, the return traffic must still hit the owner of the session to keep the establishment stateful.
With FHRP isolation techniques, server-to-server communication is routed locally, eliminating the pointless latency (far right bottom). The outbound traffic from the frontend server toward the end-user is routed to the upstream local firewall (shortest path). However, as we want to maintain the session as stateful, the local ASA-3 redirects the traffic workflow to its original owner (ASA-1) via the CCL extension.
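For reference, the classic FHRP isolation technique with OTV drops the HSRP hellos at each edge device so that each site keeps a local active default gateway. A minimal sketch (ACL and access-map names are placeholders; HSRPv1 and HSRPv2 both use UDP port 1985):

  ip access-list HSRP-HELLO
    10 permit udp any 224.0.0.2/32 eq 1985
    20 permit udp any 224.0.0.102/32 eq 1985
  ip access-list ALL-IP
    10 permit ip any any
  !
  vlan access-map HSRP-LOCAL 10
    match ip address HSRP-HELLO
    action drop
  vlan access-map HSRP-LOCAL 20
    match ip address ALL-IP
    action forward
  !
  vlan filter HSRP-LOCAL vlan-list 20

In a complete deployment, the HSRP virtual MAC should also be kept out of the OTV advertisements, in the same way as the ASA vMAC filtering shown later in this post.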
While machines migrate from one location to another, the session is maintained stateful with zero interruption. However, although the application has moved to DC-2, the next request will hit DC-1 until we manually or dynamically inform the layer 3 network about the move. This is discussed in Part 2, next.
The added value with Individual Interface Mode is that it maintains the layer 2 failure domain in isolation from the ASA control and data planes. Traffic is routed up and down the ASA cluster. The application data VLAN can be extended in a transparent fashion using any validated DCI solution. However, the other side of the coin is that the ASA unit cannot be the first hop default gateway for the application, as another layer 3 router separates it. Thus, Individual Interface mode might be challenging for enterprises that would like the firewall to be the default gateway of the application servers. Another point to mention about Individual Interface Mode is that it doesn’t support firewalls configured in Transparent Mode.
Extending the Data plane using Spanned Ether-Channel Mode
In our cluster, all ASA members share the same IP address and can be the first hop default gateway for the application (not yet qualified, though). However, for the latter option we need to be very cautious with the layer 2 extension and the vMAC address.
Figure 12: The ASA Cluster LACP (cLACP) is spanned across the 2 data centers. The LACP established from the vPC peers is local. A layer 3 device isolates the ASA Spanned Ether-channel interface from the application data VLAN.
From each vPC pair, a local port-channel is established on each site toward the local ASA units. From the ASA cluster, a single LACP port-channel is spanned across the two distant DCs (cLACP). cLACP imposes spanning the same port-channel across the whole ASA cluster; therefore the vPC domain identifier must be identical on each vPC pair.
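As a sketch (the domain number, keepalive addresses and port-channel numbers are placeholders): because vPC derives its LACP system ID from the domain number, configuring the same domain at both sites lets the two vPC pairs present a consistent LACP peer to the single spanned port-channel.

  vpc domain 10                    ! identical domain number at DC-1 and DC-2
    peer-keepalive destination 10.255.0.2 source 10.255.0.1
  !
  interface port-channel 20
    switchport mode trunk
    vpc 20                         ! local port-channel toward the local ASA units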
In regard to the workflow, the behavior is the same as with the Individual Interface Mode. Note a slight difference in case of failure: convergence with LACP should happen faster than with ECMP or PBR.
Figure 13: The ASA cluster is running in transparent mode; HSRP isolation is enabled, improving the workflow while the session is maintained stateful.
When the ASA cluster is running in routed mode, the members share the same IP address and the same vMAC address. Consequently, to avoid any duplicate MAC address appearing from different switch ports, a router can be added on each site (preferred method) to separate the layer 2 data traffic between the spanned ether-channel and the LAN extension. Indeed, if the VLAN attaching the front-end servers is L2-adjacent with the firewall, then the vPC peer will detect the same vMAC address bouncing between different interfaces (toward the ASA members and from the data LAN extension), which is definitely not an expected situation in Ethernet.
Figure 14: The same vMAC is learnt from both sides of each vPC peer. A challenging situation, definitely not supported by the Ethernet protocol.
To prevent this situation where the same MAC address is learnt on different sides, a layer 3 gateway is inserted to separate the L2 data traffic between the spanned ether-channel and the extended data VLAN. It also prevents a layer 2 loop in case of a human design mistake.
Figure 15: A router inserted between the data VLAN and the spanned port-channel isolates the duplicate MAC address.
In the figure above, the design on the left shows the layer 3 separation between the spanned ether-channel VLAN 10 and the application data VLAN 20. Only VLAN 20 is extended. On the right side, the application servers are L2-adjacent with the inside interfaces of the ASA units via VLAN 10. As a result, the vPC peers on both sites learn the duplicate vMAC address from their respective ASA units and from the extended VLAN 10, which is not acceptable.
If you are willing to offer the default gateway from the firewall (not yet qualified, though), it definitely requires filtering the vMAC address between sites, as well as the ARP requests.
Figure 16: Careful: Please don’t get me wrong – currently this is neither recommended nor supported. Don’t do the following until you understand the exact ramifications 🙂 .
To filter the vMAC address with OTV, you will need to perform the following on the internal interface of each OTV edge device.
The following access list will block the vMAC 1111.2222.3333 shared between all ASA units:
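Here is a minimal sketch of what that could look like (the interface name is a placeholder; the vMAC is the one from the example):

  mac access-list BLOCK-ASA-vMAC
    10 deny any 1111.2222.3333 0000.0000.0000
    20 permit any any
  !
  interface Ethernet1/2
    description OTV edge device internal interface
    mac port access-group BLOCK-ASA-vMAC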
In addition, we need to apply a route-map to the OTV control plane to avoid advertising the vMAC information to the remote OTV edge devices.
Indeed, OTV uses its control plane to populate each remote OTV edge device with its local layer 2 MAC table. As this MAC table is built from regular MAC learning, it is important that OTV does not inform any remote OTV edge device about the existence of this vMAC address, as it exists on each site.
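A minimal sketch of that control-plane filtering (the mac-list, route-map and overlay names are placeholders):

  mac-list vMAC-DENY seq 5 deny 1111.2222.3333 ffff.ffff.ffff
  mac-list vMAC-DENY seq 10 permit 0000.0000.0000 0000.0000.0000
  !
  route-map FILTER-vMAC permit 10
    match mac-list vMAC-DENY
  !
  otv-isis default
    vpn Overlay1
      redistribute filter route-map FILTER-vMAC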
However, a possible drawback of the Spanned mode is that all ASA units are active and share the same IP and MAC addresses; hence an ARP request will hit all units that form the ASA cluster, and all members will reply with the same source (IP and vMAC). For the local ASA units, the reply passes along the unique local port-channel, which is fine; however, it could be tricky if the reply came from the remote site via the DCI link. Fortunately, the vMAC of the remote ASA units will be blocked as described with the previous access list; therefore the ARP reply will come only from the local ASA units, which is the desired behaviour. Still, to reduce the broadcast traffic, you may want to filter the ARP requests destined to the default gateway across the DCI connection.
Some additional recommendations
It is recommended to enable jumbo frame reservation and set the cluster MTU to at least 1600 bytes for the cluster control link: when a packet is forwarded over the cluster control link, an additional trailer is added, which could cause fragmentation. Better still, set it as close as possible to the system jumbo frame size configured on the N7K (9216); note that the ASA caps the interface MTU at 9198. Hence, the MTU of the inter-site IP network must be sized accordingly.
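A sketch of both sides (values follow the guideline above; verify the maximums supported by your platform and code level):

  ! ASA (all units; jumbo-frame reservation requires a reload):
  jumbo-frame reservation
  mtu cluster 9198
  ! Nexus 7000:
  system jumbomtu 9216
  interface port-channel 48
    description Toward the ASA CCL
    mtu 9216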
For a deep understanding of what is supported and what is not, please follow the inter-site clustering guidelines and recommendations in the ASA 9.2 Configuration Guide.