Using Bridge Domain Interfaces on Cisco ASR-1K Routers

I am replacing an old Cisco 3945 router with a new ASR-1001X. The 3945, which has three Gigabit Ethernet interfaces, has one connection to each of two service providers, and a single tagged link back to the network core carrying the traffic of a few different IP subnets. The ASR-1001X has six Gigabit Ethernet interfaces, so when replacing the 3945 I wanted to introduce some redundancy by using two physical links back to the core, with each link going to a separate physical switch. This is a great use case for some kind of MLAG technology, but what if the upstream switches don’t support MLAG?

Bridge domain interfaces (BDIs) in IOS-XE can resolve this situation. BDIs are somewhat of a replacement for the old bridge-group virtual interfaces (BVIs) in classic IOS, but they are much more feature-rich and capable, with all kinds of extended use cases. Bridge domains are part of the Ethernet Virtual Circuit functionality in IOS-XE, more fully described here.

For my current needs, I am going to be replacing BVI functionality with BDI. This allows an IP address to be terminated on the router while keeping both links available for failover in case one goes down. Only one link at a time is usable due to spanning-tree, but a single link can fail with minimal downtime (on the order of a few seconds when using RSTP).

Enable STP processing and loopguard protection on the router with the following commands:

spanning-tree mode rapid-pvst
spanning-tree loopguard default

Loopguard isn’t strictly necessary, but it offers an additional layer of protection: it keeps a blocked port from transitioning to forwarding if BPDUs suddenly stop arriving (for example, because of a unidirectional link failure).
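
If you want to confirm that both global settings took effect, show spanning-tree summary lists the spanning-tree mode in use and whether loopguard default is enabled (the exact output wording varies by IOS-XE release):

show spanning-tree summary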

[UPDATE: When I first wrote this post, I labbed it up in VIRL. When I went to deploy it on an actual ASR-1001X, the spanning-tree commands did not work. As I found out, this functionality is not included in the ipbase license; you need advipservices or higher, because spanning-tree support is required to make this work. Without spanning-tree, the same MAC address is presented on both uplinks, and your upstream switch will experience MAC flapping because it sees the same MAC address on multiple ports simultaneously.]
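
If you are not sure what license level a given ASR booted with, one quick check (output format varies by release) is to filter show version for the license lines:

show version | include License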

The ports on the upstream switches are configured as standard 802.1Q tagged ports. The router is configured with service instances and bridge domains to classify and handle the incoming traffic. Here is an example configuration under a physical interface on the ASR-1001X:

interface g0/0/1
 no ip address
 service instance 100 ethernet
  encapsulation dot1q 100
  rewrite ingress tag pop 1 symmetric
  bridge-domain 100
 !
 service instance 200 ethernet
  encapsulation dot1q 200
  rewrite ingress tag pop 1 symmetric
  bridge-domain 200

The service instance command associates the physical interface with an Ethernet Virtual Circuit (EVC) construct in IOS-XE. The encapsulation dot1q command says that any frames received on the physical interface that carry that particular tag will be processed according to this service instance.

The rewrite ingress tag command (as configured in this example) will remove the VLAN tag before processing the frame further, since it is not necessary for this particular application of BDI. The ‘pop 1 symmetric’ portion of the command causes the router to remove the outer VLAN tag before it sends the frame to the BDI, and to re-introduce the VLAN tag as the frame moves from the BDI back to the physical interface. If you were performing QinQ, you could set the value to 2, for example.
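
For illustration only (the tag values here are made up and not part of my deployment), a QinQ service instance, placed under the physical interface like the others, might match an outer and inner tag and pop both on ingress:

 service instance 500 ethernet
  encapsulation dot1q 500 second-dot1q 600
  rewrite ingress tag pop 2 symmetric
  bridge-domain 500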

Finally, the bridge-domain configuration specifies which BDI to use. In my example I matched the numbers across each configuration stanza as a best practice for readability, but this is not a requirement: the three values (service instance, dot1q tag, bridge-domain) are completely independent. This allows for more interesting bridging options within the realm of Ethernet Virtual Circuits.
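
As a contrived example (numbers chosen only to show the independence), the following stanza classifies frames tagged with VLAN 300 under service instance 10 and bridges them into bridge domain 50; the routed interface for that traffic would then be BDI50:

 service instance 10 ethernet
  encapsulation dot1q 300
  rewrite ingress tag pop 1 symmetric
  bridge-domain 50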

You can use the exact same configuration on multiple interfaces, or you can specify that certain VLANs will only be processed on certain links. For example, you could configure a service instance for VLAN 300 and place it only on interface g0/0/2, not on g0/0/1. You can also use per-VLAN spanning-tree values as a form of traffic engineering: modify the per-VLAN spanning-tree cost on the router, or the port-priority on the upstream switch, so that under normal conditions some VLANs use one link and other VLANs use the other. Just be careful not to oversubscribe your links, so that if there is a failure all traffic can still be carried across the surviving link(s).
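
As a sketch of that idea (interface and VLAN numbers are placeholders), the second uplink could carry the same two service instances as g0/0/1, plus one for VLAN 300 that exists only on this link; a matching BDI300 would be created in the next step if that VLAN needs to be routed:

interface g0/0/2
 no ip address
 service instance 100 ethernet
  encapsulation dot1q 100
  rewrite ingress tag pop 1 symmetric
  bridge-domain 100
 !
 service instance 200 ethernet
  encapsulation dot1q 200
  rewrite ingress tag pop 1 symmetric
  bridge-domain 200
 !
 service instance 300 ethernet
  encapsulation dot1q 300
  rewrite ingress tag pop 1 symmetric
  bridge-domain 300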

Finally, configure the BDIs:

interface BDI100
 ip address 10.10.10.1 255.255.255.0
 no shutdown

interface BDI200
 ip address 10.20.20.1 255.255.255.0
 no shutdown

You can use the command show spanning-tree vlan X to verify the redundant links from an STP point of view. Try pinging a few addresses in the same subnets. You can troubleshoot connectivity with show bridge-domain X and show arp. The first command will reveal whether the destination MAC was learned on a particular interface (similar to show mac address-table on a switch), and show arp will reveal whether the ARP process was successful for a particular IP address. I had some interesting issues during configuration on virtual equipment for a lab proof-of-concept, and these commands helped isolate where the issue was. In the virtual case, simply rebooting the virtual router solved the issue.
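
For reference, here are those verification commands in one place, using the numbers from the example configuration above:

show spanning-tree vlan 100
show bridge-domain 100
show arp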

Someone reading this might be critical of relying on STP for redundancy instead of a modern solution like MLAG. This particular design offers a level of redundancy that does not require MLAG; the tradeoff is a few seconds of dropped traffic if STP has to reconverge around a failed link. As with most things, the tradeoff ultimately comes down to money, and to using the resources you have available to meet business needs as best you can. This solution still beats having a single physical link with no redundancy. Previously, if the single link failed, it would mean an immediate trip to the datacenter. With the new redundancy, a failed link still probably means a trip to the datacenter, but maybe not in the middle of the night.  :-P