Cisco: How to use CEF for load-balancing

According to Cisco’s website, Cisco Express Forwarding (CEF) is an advanced Layer 3 IP switching technology. CEF optimizes network performance and scalability for networks with large and dynamic traffic patterns, such as the Internet, and for networks characterized by intensive web-based applications or interactive sessions.

The term load balancing describes a router's ability to distribute packets across multiple links based on Layer 3 routing information.

Now, putting these two terms together, we get load balancing with CEF. Cisco IOS software supports two modes of load balancing: per-destination and per-packet.
In per-destination mode, all packets for a given destination are forwarded along the same path. This can lead to unequal use of the available lines, as packets to the same destination always take one link and leave the others idle. Consider a small site with 2 x E1 connections: in per-destination mode, all packets to one destination will use one E1 link while the other remains unused. If that one destination is a server that most of the site's users connect to, per-destination mode can exhaust the bandwidth of one E1 line while the other carries nothing. In a larger environment with many destinations, however, per-destination mode has much less impact on link utilization, because the traffic to the different destinations is spread over the multiple paths.

The per-packet mode guarantees an even load across all links, because the forwarding process determines the outgoing interface for each packet by looking up the routing table and picking the least used interface. If you have read this far, you are probably asking: if this method is so good, why is per-destination the default? The problem is that per-packet mode almost always results in out-of-order packets, as Ivan Pepelnjak points out on his blog. In my experience, out-of-order packets are not a big issue in low-speed environments, where the TCP stack can deal with the reordering. In high-speed environments carrying voice or video traffic, however, they can cause serious problems, and it is better to avoid per-packet load balancing there. Cisco also warns that, while this mode ensures equal utilization of the links, it is a processor-intensive task that impacts overall forwarding performance. How much it impacts CPU and resource usage I cannot say exactly, because that depends on the tasks and traffic the device has to handle.

OK, so if per-packet mode is not so great, why should we use it at all? The answer is that in some particular topologies or environments you cannot use any other load-balancing method and you are in desperate need of such a mechanism. Below you will see an example of a topology that can force us to use per-packet load balancing.

Before you configure load balancing, make sure that IP CEF is enabled on your router. If it is disabled, enable it:

configure terminal
ip cef
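To verify that CEF is actually running, you can check the FIB summary (the exact output varies by platform and IOS version):

show ip cef summary

If CEF is disabled, you will typically get a message such as "%CEF not running" instead of the FIB summary.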

If you want to fine-tune the CEF load-sharing algorithm, you can do this with the command (note that the IOS keyword is load-sharing, not load-balancing):

configure terminal
ip cef load-sharing algorithm name

where for name you have three choices:
original – sets the load-sharing algorithm to the original algorithm, based on a source and destination hash.
tunnel – sets the load-sharing algorithm for use in tunnel environments, or in environments where there are only a few IP source and destination address pairs.
universal – sets the load-sharing algorithm to the universal algorithm, which uses a source and destination hash plus a router-specific ID.
Skipping the tunnel option, which you should use only if you are sure you need it, the remaining choices are universal and original. The original algorithm uses only the IP addresses to generate the 4-bit hash. The universal algorithm adds router-specific information to the hash, so different routers along a path make different load-sharing decisions (this avoids the so-called CEF polarization problem). Since universal is the default, you should not change this value unless you know exactly what you are doing.

To enable CEF load balancing on a per-destination basis you don't have to modify anything, as it is enabled by default. For per-packet mode, use the following commands:

configure terminal
interface FastEthernet x/y
ip load-sharing per-packet
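To confirm that the interface really switched to per-packet mode, you can inspect the CEF details for that interface (the exact wording of the output depends on the IOS version):

show cef interface FastEthernet x/y

Look for a line reading "IP load sharing per-packet"; in the default mode it shows "IP load sharing per-destination" instead.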

One scenario where I had to use per-packet load balancing is the one below.
Let’s assume that we have a client (c-ubuntu-1, 10.10.20.100) which is sending traffic to a server (s-ubuntu-1, 10.10.10.100). The routing protocol is already configured in such a way that the CE1 and CE2 routers announce, through OSPF, two equal-cost default routes to the SC device. The rest of the routing is assured by BGP towards the PE routers and other IGPs, but that has no importance for the topic discussed here.

SC1#sh ip route | i 110/1
O*E2 0.0.0.0/0 [110/1] via 172.29.190.237, 02:35:46, Vlan23
[110/1] via 172.29.190.229, 02:35:46, Vlan13

On the SC device I used L3 interface Vlan23 to connect to CE2 and Vlan13 to connect to CE1. With CEF enabled and the per-destination load-balancing mechanism, only one path (either Vlan23 or Vlan13) was used, which meant only one of the WAN serial connections carried traffic. Since this was client-to-server traffic, and quite a lot of it, from time to time one WAN connection was exhausted while the other remained unused. You can see this in the excerpt below:

SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100    -> 10.10.10.100   : Vlan23 (next hop 172.29.190.237)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100    -> 10.10.10.100   : Vlan23 (next hop 172.29.190.237)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100    -> 10.10.10.100   : Vlan23 (next hop 172.29.190.237)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100    -> 10.10.10.100   : Vlan23 (next hop 172.29.190.237)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100    -> 10.10.10.100   : Vlan23 (next hop 172.29.190.237)

After I enabled per-packet load balancing the situation changed: both lines to the CE routers were used, leading to an equal utilization of the WAN links:

SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan23 (next hop 172.29.190.237)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan13 (next hop 172.29.190.229)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan13 (next hop 172.29.190.229)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan23 (next hop 172.29.190.237)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan23 (next hop 172.29.190.237)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan13 (next hop 172.29.190.229)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan23 (next hop 172.29.190.237)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan13 (next hop 172.29.190.229)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan13 (next hop 172.29.190.229)
SC#sh ip cef exact-route 10.10.20.100 10.10.10.100
10.10.20.100 -> 10.10.10.100 : Vlan23 (next hop 172.29.190.237)

I had to choose this method to force equal usage of the WAN links, and I rely on the TCP stack to deal with the out-of-order packets, as the traffic over these interfaces is not that high.

Cisco: How to configure HSRP for load-balancing traffic

I believe many of you are already familiar with the Hot Standby Router Protocol (HSRP), but for those who are not, I will give a short review of this protocol.
Hot Standby Router Protocol (HSRP) is a Cisco proprietary redundancy protocol for establishing a fault-tolerant default gateway, and has been described in detail in RFC 2281. The Virtual Router Redundancy Protocol (VRRP) is a standards-based alternative to HSRP defined in IETF standard RFC 3768. The two technologies are similar in concept, but not compatible.

The protocol establishes a framework between network routers to achieve default gateway failover if the primary gateway becomes inaccessible, in close association with a rapidly converging routing protocol like EIGRP or OSPF. HSRP sends its hello messages to the multicast address 224.0.0.2 (all routers) on UDP port 1985 to the other HSRP-enabled routers, establishing the priority between them. The primary router, the one with the highest configured priority, acts as a virtual router with a predefined gateway IP and answers ARP requests from machines connected to the LAN with the MAC address 0000.0c07.acXX, where XX is the group ID in hexadecimal. By sharing an IP address and a MAC (Layer 2) address, two or more routers can act as a single “virtual” router. The members of the virtual router group continually exchange status messages, so one router can assume the routing responsibility of another, should it go out of commission for either planned or unplanned reasons. Hosts continue to forward IP packets to a consistent IP and MAC address, and the changeover of the devices doing the routing is transparent. If the primary router fails, the router with the next-highest priority takes over the gateway IP and answers ARP requests with the same MAC address, thus achieving transparent default gateway failover.

HSRP and VRRP on some routers have the ability to trigger a failover if one or more interfaces on the router go down. This can be useful for dual branch routers each with a single serial link back to the head end. If the serial link of the primary router goes down, you would want the backup router to take over the primary functionality and thus retain connectivity to the head end.
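As a sketch of such a setup, interface tracking can be added to a basic HSRP configuration with the standby track command, which decrements the router's priority when the tracked interface goes down (the interface names and the decrement value of 20 below are just examples):

interface FastEthernet0/0
 standby 1 preempt
 standby 1 ip 10.10.12.3
 standby 1 priority 110
 standby 1 track Serial0/0 20

With a decrement of 20, the priority drops from 110 to 90 when Serial0/0 fails, so a standby router with the default priority of 100 (and preempt enabled) takes over.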

Now, as you probably know already, HSRP does not support load balancing by default: only one router in the virtual router group is active, and only that path carries traffic, leaving the other paths unused. This wastes bandwidth, since only one router forwards traffic. In normal cases I would recommend another protocol, the Gateway Load Balancing Protocol (GLBP), which performs the same operation as HSRP with load balancing built in. But since GLBP is not the topic here, and load balancing with HSRP can be a subject on some Cisco exams, read below how you can achieve it.
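For comparison, a minimal GLBP configuration looks very much like HSRP but load balances out of the box, because the group members answer ARP requests with different virtual MAC addresses (the addresses and the round-robin method below are only illustrative):

interface FastEthernet0/0
 ip address 10.10.12.1 255.255.255.0
 glbp 1 ip 10.10.12.3
 glbp 1 preempt
 glbp 1 load-balancing round-robin

Here the clients need only the single gateway 10.10.12.3, which is one of the reasons GLBP is usually the easier choice when load balancing is the goal.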

First, please have a look at the topology used for this example; it will make things clearer. As you can see, R1 and R2 are connected to the same network segment, so they can share the same subnet. Let's configure R1 and R2 for basic HSRP (without load balancing):

R1
interface FastEthernet0/0
ip address 10.10.12.1 255.255.255.0
standby 1 preempt
standby 1 ip 10.10.12.3
standby 1 priority 110

R2
interface FastEthernet0/0
ip address 10.10.12.2 255.255.255.0
standby 1 preempt
standby 1 ip 10.10.12.3

R1 is the active router for group 1 (priority 110, against the default of 100), so all the traffic will flow through R1’s path. Next I will apply the configuration that migrates this default HSRP setup to Multigroup HSRP (MHSRP), which supports load balancing:

R1
interface FastEthernet0/0
ip address 10.10.12.1 255.255.255.0
standby 1 preempt
standby 1 ip 10.10.12.3
standby 1 priority 110
standby 2 preempt
standby 2 ip 10.10.12.4

R2
interface FastEthernet0/0
ip address 10.10.12.2 255.255.255.0
standby 1 preempt
standby 1 ip 10.10.12.3
standby 2 preempt
standby 2 ip 10.10.12.4
standby 2 priority 110

Now we have group 1 with R1 active (10.10.12.3) and group 2 with R2 active (10.10.12.4). Of course, to really achieve load balancing with HSRP, you still have to find a way to distribute the two gateways (10.10.12.3 and 10.10.12.4) to the clients, or configure them manually on your users' machines.
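You can verify the MHSRP state with show standby brief; R1 should show Active for group 1 and Standby for group 2, and R2 the opposite. On R1 the output should look roughly like this (timers and exact column layout vary by IOS version):

R1#show standby brief
Interface   Grp  Pri P State   Active          Standby         Virtual IP
Fa0/0       1    110 P Active  local           10.10.12.2      10.10.12.3
Fa0/0       2    100 P Standby 10.10.12.2      local           10.10.12.4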

To see the live presentation of how MHSRP works please click on the image below:

Cisco HSRP

Files needed for this tutorial: The topology

7 reasons why MPLS has been wildly successful

The IETF Thursday threw a birthday party for one of its most successful standards: Multi-Protocol Label Switching.

The Internet’s leading standards body hosted a panel discussion outlining the reasons why the 12-year-old protocol has been so widely deployed and such a big moneymaker for carriers.

“MPLS is one of our wildly successful protocol suites,” said Loa Andersson, co-chair of the IETF’s MPLS Working Group and the principal networking architect at the Swedish Research Institute, Acreo AB. Andersson served as moderator for the panel, which was hosted by the Internet Architecture Board, a sister organization to the IETF.

Here are the seven reasons why MPLS has proven so popular:

1. MPLS embraced IP

2. MPLS is flexible

3. MPLS is protocol neutral

4. MPLS is pragmatic

5. MPLS is adaptable

6. MPLS supports metrics

7. MPLS scales

Read the full article on NetworkWorld.com

Cisco: How to configure simple IP SLA monitor

Before we begin, let's see what this SLA term means, for those of us who are not very familiar with service provider terminology. IP Service Level Agreements (SLAs) enable customers to assure new business-critical IP applications, as well as IP services that use data, voice, and video, in an IP network. With Cisco IOS IP SLAs, users can verify service guarantees, increase network reliability by validating network performance, proactively identify network issues, and deploy new IP services with ease. Cisco IOS IP SLAs use active monitoring, enabling the measurement of network performance and health.

For the following how-to, please have a quick look at the topology. As you can see, I have a basic routing topology, imported from another tutorial on FirstDigest, and let's assume that we want to monitor the line between R1 and TEST-RT. For this we will configure a very simple IP SLA monitor, based on ICMP echo packets, which will measure the RTT (round-trip time), or latency, and provide us with valuable information. For example, in case of VoIP problems we can check the latency, and if the value is bigger than 200 ms (220 ms being the maximum acceptable for the voice service to function properly), we will know where the problems come from. Of course, IP SLA supports far more complex configurations under Cisco IOS (e.g. an HTTP or FTP transfer to check whether the service provider delivers the bandwidth specified in the contract).
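As a sketch of what such an ICMP echo probe could look like (the destination address 10.10.13.2, the operation number 10, and the timers are assumptions for this example; on older IOS releases the commands start with ip sla monitor instead of ip sla):

ip sla 10
 icmp-echo 10.10.13.2 source-interface FastEthernet0/1
 frequency 60
 timeout 2000
ip sla schedule 10 life forever start-time now

You can then check the measured round-trip time with show ip sla statistics 10.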

One personal piece of advice from my experience: even though all the data and information provided by the IOS IP SLA monitor can be checked with “show …” commands, I would advise you to get a third-party tool that interprets this data for you, draws nice graphs, or stores it in an archive. Examples of such software are MRTG, Weathermap, Nagios, RRDtool and others (I list here only the free ones).

Please check the how-to by clicking the image below:

IP SLA monitor