Isolating Active HSRP on both sites.
Several of you have asked for details on the HSRP filtering configuration discussed in the LISP IP Mobility article (23 – LISP Mobility in a virtualized environment), so here below is a short description of the configuration. And thanks to Patrice and Max for their help validating this configuration.
FHRP isolation is required only with LAN extension. It comes in two flavours:
- It works in conjunction with any ingress path optimisation technique such as LISP IP Mobility, LISP IGP Assist, intelligent DNS (GSLB), SLB Route Health Injection tools, or even a manual setting (e.g. changing the metric for a specific route). Whatever solution is used to redirect traffic from the end-user to the location supporting the active application, to be really efficient and to eliminate the “tromboning” workflows, the egress traffic must be controlled to use the same WAN edge gateway as the ingress traffic, as described in note 15 – Server to Client traffic.
- Although it depends on the distance between remote DCs, it is usually recommended to move the whole software framework when migrating an application from one site to another, again to reduce as much as possible the “tromboning” workflows that impact performance. In short, do migrate the Web tier together with its Application and Database tiers. As the server-to-server (tier-to-tier) traffic requires that the same default gateway be used wherever the workload is located, the way to prevent the traffic from returning to hit the original default gateway is to duplicate the same gateway on each site. This is also described in note 14 – Server to Server Traffic. However, because the VLAN is extended between sites, it is mandatory to filter the handshake protocol (HSRP multicast and vMAC) between gateways belonging to the same HSRP group.
In our previous example with LISP IP Mobility, we used a CSR 1000v as the default gateway inside each tenant container.
The CSR 1000v instances distributed on each site offer the same default gateway to the server resources; hence, for both to be active on each site, it is mandatory to filter out HSRP multicasts and the vMAC between the two locations.
This filtering setup is initiated on the OTV VDC:
- Create and apply the policies to filter out HSRP messages (both v1 and v2). This configuration concerns VLAN 1300 for the following test with LISP; however, additional VLANs (or ranges) can be added there for other purposes.
- Configure ARP filtering to ensure that ARP replies (or gratuitous ARPs) are not received from the remote site.
- Apply a route-map to the OTV control plane to avoid communicating vMAC information to remote OTV edge devices. Indeed, OTV uses a control plane to populate each remote OTV edge device with its local Layer 2 MAC table. Since this MAC table is built from regular MAC learning, it is important that OTV does not inform any remote OTV edge device about the existence of the HSRP vMAC address, as it exists on each site.

HSRP and Nexus 1000v
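The three steps above can be sketched as follows on the OTV VDC. This is a minimal sketch, not the exact configuration used in the lab: the ACL, access-map, and route-map names are hypothetical, and VLAN 1300 and Overlay1 are examples to be adapted to your own setup. The MAC patterns cover the well-known HSRP vMAC ranges (0000.0c07.acxx for v1, 0000.0c9f.fxxx for v2), and HSRP hellos use UDP port 1985 towards 224.0.0.2 (v1) or 224.0.0.102 (v2).

```
! 1) VACL dropping HSRP v1/v2 hellos on the extended VLAN
ip access-list ALL_IPs
  10 permit ip any any
ip access-list HSRP_IP
  10 permit udp any 224.0.0.2/32 eq 1985
  20 permit udp any 224.0.0.102/32 eq 1985
vlan access-map HSRP_Localization 10
  match ip address HSRP_IP
  action drop
vlan access-map HSRP_Localization 20
  match ip address ALL_IPs
  action forward
vlan filter HSRP_Localization vlan-list 1300

! 2) ARP filtering: drop ARP packets carrying the HSRP vMAC
arp access-list HSRP_VMAC_ARP
  10 deny ip any mac 0000.0c07.ac00 ffff.ffff.ff00
  20 deny ip any mac 0000.0c9f.f000 ffff.ffff.f000
  30 permit ip any mac any
ip arp inspection filter HSRP_VMAC_ARP vlan 1300

! 3) Route-map preventing the OTV control plane (IS-IS)
!    from advertising the vMAC to remote OTV edge devices
mac-list HSRP_VMAC_Deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list HSRP_VMAC_Deny seq 20 deny 0000.0c9f.f000 ffff.ffff.f000
mac-list HSRP_VMAC_Deny seq 30 permit 0000.0000.0000 0000.0000.0000
route-map HSRP_Filter permit 10
  match mac-list HSRP_VMAC_Deny
otv-isis default
  vpn Overlay1
    redistribute filter route-map HSRP_Filter
```

Note that the VACL and the ARP filter act on the data plane, while the route-map acts on the OTV control plane; all three are needed for a clean isolation.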
By default, the Nexus 1000v implements a loop-detection mechanism based on the source and destination MAC addresses. Any frame whose MAC address is duplicated between a vEthernet interface and the uplink ports will be dropped.
Thus, when a redundant first-hop protocol such as HSRP is enabled on the upstream virtual routers (e.g. CSR 1000v), it is critical to disable the loop detection on the vEthernet interfaces supporting the redundant protocol.
For more details, please visit:
HSRP filtering – Quick check
App1 (10.17.0.14) is currently located in DC-1, sitting on VLAN 1300, where CSR-106 is its local default gateway.
App1 continually pings a loopback address located somewhere in the outside world. The show command for the outbound interface on the local CSR-106 shows a rate of 1 packet per second, as expected, while CSR-103 on the remote site doesn't process any packets.
Live migration then moves App1 to DC-2 over the OTV extension, using vCenter.
Counters show the new local CSR-103 in DC-2 taking over as the default gateway for App1 on VLAN 1300.
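This quick check can be reproduced from the CSR CLI with standard show commands; the interface name below is an example and must match your outbound interface. With the filtering in place, each CSR should report the HSRP Active state for the same group, and only the CSR local to App1 should see its packet rate increment:

```
! Both CSRs should show the group as Active locally
CSR-106# show standby brief

! Packet rate on the outbound interface; compare the two sites
CSR-106# show interfaces GigabitEthernet1 | include rate
CSR-103# show interfaces GigabitEthernet1 | include rate
```

After the vMotion, the rate counters should swap: CSR-103 starts incrementing while CSR-106 goes quiet, confirming that the egress traffic follows the workload.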