Cisco: Port-channel load-balancing explanation [Part I]

Port-channel (or EtherChannel) is a great way to increase the transport capacity between two switches, or between a switch and an end device that supports load balancing (e.g. a server). Today I don’t want to focus on how Port-channels are configured, but rather on how they load-balance traffic over the multiple interfaces included in a bundle.

To configure port-channel load balancing, enter global configuration mode and issue:

port-channel load-balance method

or

port-channel load-balance method module slot

the method can be one of the following:

dst-ip
dst-mac
dst-port
src-dst-ip
src-dst-mac
src-dst-port
src-ip
src-mac
src-port
dst-mixed-ip-port
src-dst-mixed-ip-port
src-mixed-ip-port

Not every Cisco device supports all of these methods, due to hardware / IOS restrictions. Today I plan to explain the first six of these methods, and in a future post the next six.
Understanding the port-channel load-balance methods is not a difficult topic, but rather a tricky one. That’s why I drew some basic scenarios to help you remember each load-balance method better.
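Under the hood, all of these methods follow the same pattern: the switch hashes the selected header fields and uses the result, modulo the number of member links, to pick one physical port, so a given flow always lands on the same link. The real hardware hash is platform-specific; the XOR-based sketch below is only an illustration of the idea:

```python
# Illustrative sketch of hash-based member selection in a port-channel.
# Cisco hardware uses a platform-specific hash; this XOR is a stand-in.

def select_member(fields, num_links):
    """Hash the chosen header fields and map the flow to one member link."""
    h = 0
    for field in fields:
        for byte in str(field).encode():
            h ^= byte  # combine every byte of every selected field
    return h % num_links  # same fields -> same link, every time

links = 3  # e.g. Fa0/1, Fa0/2, Fa0/3 bundled in Port-channel 1

# dst-ip: only the destination is hashed, so every source reaches
# destination 10.0.0.1 over the same member link.
link_a = select_member(["10.0.0.1"], links)
link_b = select_member(["10.0.0.1"], links)
assert link_a == link_b
```

The only thing that changes between the methods below is *which* fields go into the hash.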

I have to admit that it was a little hard for me to remember them and how they work, so if I make any mistake or you don’t understand something, please let me know in the comments or via the contact form.

dst-ip

Load distribution is based on the destination IP address. It does not take the source of the packets into consideration, only the IP destination. If the destination is the same, the packet will be forwarded over the same wire (port Fa0/1 in the image below):

As you can see, packets from Src to Dst IP A will always take the path through port Fa0/1 in Port-channel 1. In this load-balancing mode, the paths through ports Fa0/2 and Fa0/3 are not valid choices.

dst-mac

Load distribution is based on the destination MAC address. If the destination MAC address is reachable only through one of the three ports bundled in the port-channel, then only that interface is considered a path for all the packets sent to the same MAC destination:

In the image above, MAC address A is reachable through port Fa0/1, so that port is considered the acceptable path.

dst-port

By the keyword “port” we mean here a TCP or UDP port, not a physical interface. Communication for the same port (e.g. port 80) will be carried over one single physical port (Fa0/1) in Port-channel 1. Data transfer for another port (e.g. port 25) will be directed through another interface (Fa0/2), and so on.
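As a toy illustration of port-based selection (the plain modulo mapping below is an assumption for readability, not the actual Cisco hash), different destination ports can map to different member links:

```python
# Toy mapping of a TCP/UDP destination port to a member link.
# Real hardware hashes the port value; plain modulo keeps the idea visible.

def link_for_port(dst_port, num_links):
    return dst_port % num_links

links = 3
http_link = link_for_port(80, links)  # all HTTP flows pinned to one link
smtp_link = link_for_port(25, links)  # all SMTP flows pinned to another
assert http_link != smtp_link
```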

src-dst-ip

Load distribution is based on the source XOR destination IP address. What this method does is pair the source and destination IP addresses and send all the traffic that matches this pair over one physical port in the port-channel. The advantage of this method over the previous one is granularity. With the previous method, traffic to one destination was sent over one physical port in the port-channel without taking the source into consideration. With this method, packets to the same destination can be forwarded over different interfaces in the port-channel if they come from different source IP addresses.

In the example above, you see that packets with source A and destination A are forwarded over port Fa0/1. For the same destination A but source B, a different path is taken over Fa0/2. However, from the same source B to the same destination A, Fa0/3 is not a valid path, as the traffic for this pair is already forwarded over Fa0/2.
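A small sketch can show this gained granularity; the addresses and the XOR-style hash are illustrative assumptions, not the real platform hash:

```python
# src-dst-ip sketch: the (source, destination) pair picks the link,
# so two sources talking to the same destination may use different links.

def link_for_pair(src_ip, dst_ip, num_links):
    h = 0
    for byte in (src_ip + dst_ip).encode():
        h ^= byte  # hash over BOTH addresses, not just the destination
    return h % num_links

links = 3
dst = "10.9.9.9"
first = link_for_pair("10.1.1.1", dst, links)
second = link_for_pair("10.1.1.2", dst, links)
# Each pair is stable: the same conversation always reuses its link.
assert link_for_pair("10.1.1.1", dst, links) == first
```

Contrast this with dst-ip, where only `dst` would enter the hash and every source would share one link toward that destination.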

src-dst-mac

Here we have the same principle as in src-dst-ip, with the difference that the MAC address is considered the load-balancing element instead of the IP address:

src-dst-port

Load distribution is based on the source XOR destination port. The port (TCP or UDP) is the element taken into consideration when load balancing in the port-channel:

Even if this may look like the src-dst-mac and src-dst-ip methods, it is actually different. You see above that it’s possible to load-balance peer-to-peer communication (PC-to-PC, if you want) if this communication takes place over different ports. In the picture above we have port 25 and port 80 communication involving two machines. You can see that the traffic from source A to destination port 80 takes port Fa0/1. From the same source, traffic to destination port 25 takes the path through interface Fa0/2. However, the same communication cannot be sent over interface Fa0/3 (as it’s already sent over Fa0/2 as part of the load-balancing process). This method adds even more granularity.

Come back for the next six methods of the port-channel load-balancing feature.

Cisco 2900 – Interface related chassis modification



The Cisco Enhanced EtherSwitch Service Modules seen above expand the router’s capabilities by integrating Layer 2 and Layer 3 switching feature sets identical to those found in the Cisco Catalyst 3560-E and Catalyst 2960 Series Switches.

The new Cisco Enhanced EtherSwitch Service Modules are the first modules to take advantage of the increased capabilities of the Cisco 3900 and 2900 Series Integrated Services Routers. Additionally, these service modules enable Cisco’s industry-leading power initiatives: Cisco EnergyWise®, Cisco Enhanced Power over Ethernet (ePoE), and per-port PoE power monitoring, all of which enhance the ability of the branch office to scale to next-generation requirements while still meeting important initiatives for IT teams to operate a power-efficient network.

Furthermore, the Cisco Enhanced EtherSwitch Service Modules not only perform local line-rate switching and routing but also support direct service module-to-service module communication through the Integrated Services Router Generation 2 multigigabit fabric (MGF), which separates LAN traffic from WAN resources.

Below, you have a hands-on demonstration of how to add, remove, or replace a module in the new Cisco 2900 chassis, and what is recommended to do or to avoid during these operations. This is nothing new for engineers who change modules every day in chassis like the 6500 or 7600 platforms, but it may be very useful for beginners.

Enjoy!

Cisco: Speed vs Bandwidth interface command

It’s not uncommon to see people confusing the two interface commands speed and bandwidth. Many young engineers (and not only them) assume that bandwidth and speed have the same meaning when applied under the interface, and that their purpose is to reduce the throughput of the interface down to the limit specified by bandwidth or speed.

Well, this could not be more wrong. Neither of the two commands limits throughput, and their purposes are totally different. Let’s analyze them a little:

Bandwidth

What Cisco.com says about it:

To set the inherited and received bandwidth values for an interface, use the bandwidth command in interface configuration mode. To restore the default values, use the no form of this command.

bandwidth {kbps | inherit [kbps] | receive[kbps]}

no bandwidth {kbps | inherit [kbps] | receive[kbps]}

Now for the explanation. The bandwidth command is an optional, but most of the time recommended, interface command. Despite the word, it is not there to limit bandwidth, and you cannot adjust the actual bandwidth of an interface using this command. The interface bandwidth command is used to communicate the speed of the interface to higher-level protocols. Most of the time, a routing protocol needs to know the speed of the interface so it can choose the best route. Another effect of this command is that TCP will adjust its initial retransmission parameters based on the bandwidth configured on the interface.



I’m sure that you are familiar with, or at least have heard of, dynamic routing protocols like OSPF and EIGRP. OSPF and EIGRP in particular use the interface bandwidth to calculate metrics. I will not go into the mathematics of metric calculation for OSPF and EIGRP, but rather explain it in human terms. Imagine that you have 100Mbps interfaces connecting two routers, with two connections in parallel (2 x 100Mbps interfaces per router) over a provider network. The provider is limiting your actual bandwidth to 10Mbps on one connection and to 1Mbps on the other, and you run OSPF or EIGRP over these connections. If you don’t specify the accurate bandwidth on each connection, OSPF or EIGRP will calculate the metrics based on the default interface speed (100Mbps). From their point of view both lines are equal, you have equal metrics, and you can run into the problem that the routers push packets down both lines; but one line has a higher throughput, so packets will arrive at the destination out of order.

If you specify the bandwidth command under the 100Mbps interfaces (bandwidth 10000 and bandwidth 1000 respectively), then the IP routing protocols will sense the difference and the 10Mbps line will be preferred over the 1Mbps line.
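To make the effect concrete: OSPF, with its default reference bandwidth of 100 Mbps, derives the interface cost by dividing the reference bandwidth by the configured bandwidth, so the two lines end up with clearly different metrics. A quick sketch:

```python
# OSPF default cost = reference bandwidth / interface bandwidth,
# using the default 100 Mbps (100,000 kbps) reference bandwidth.
REFERENCE_KBPS = 100_000

def ospf_cost(bandwidth_kbps):
    # Cost is an integer and never drops below 1.
    return max(1, REFERENCE_KBPS // bandwidth_kbps)

print(ospf_cost(100_000))  # default 100Mbps interface -> cost 1
print(ospf_cost(10_000))   # 'bandwidth 10000' -> cost 10
print(ospf_cost(1_000))    # 'bandwidth 1000'  -> cost 100
```

With the default left in place, both lines cost 1 and OSPF load-shares across them; after the bandwidth commands, the 10Mbps line (cost 10) wins over the 1Mbps line (cost 100).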

Speed

What Cisco.com says about it:

To configure the speed for a Fast Ethernet or Gigabit Ethernet interface, use the speed command in interface configuration mode. To return to the default setting, use the no form of this command.

speed {10 | 100 | 1000 [negotiate] | auto [speed-list]}

no speed

The speed command is actually much simpler to explain than bandwidth. Some interfaces (hardware dependent) allow you to set the speed. So even if the interface is a 100Mbps one, you can set it to 10Mbps, meaning the interface transmits at up to 10Mbps. You will probably ask: OK, if we set the interface speed to 10Mbps, won’t this tell OSPF / EIGRP to calculate the metrics based on this value? Of course it will. But what do you do when you need a value other than the standard 10Mbps or 100Mbps, like 1Mbps? You cannot set speed 1. Or what if you have a hardware card which does not support speeds slower than 1Gbps? That’s why you use the bandwidth command to tell the upper-layer protocols the actual throughput capability of the interface.

The speed command is also important, as it has to be the same on both ends of the connection. True, it can be auto-negotiated, but sometimes that fails, and when speed auto-negotiation fails, the interface defaults to 10Mbps half-duplex. You know what duplex is, don’t you? Well, imagine that your interfaces are 100Mbps full-duplex capable, but due to an auto-negotiation failure you transmit at 10Mbps and half-duplex (only one packet in one direction at a time on the wire). That would be a huge drawback for your network capacity. If you have doubts about your devices’ auto-negotiation capability, better hardcode the speed and duplex so you can enjoy your night’s sleep.

I hope you understood the differences between the speed and bandwidth interface commands and that I was clear in my explanation. If not, that’s what the comments are for. Just ask me.

Cisco IOS: single user access in CLI configuration terminal


Usually big companies with large networks have a dedicated department which deals with all the network configuration. The problem that I have in mind is that when this department is split over large geographical areas (e.g. some colleagues in Europe, some in Asia and some in America), it may happen that more than one colleague is working on the same device at the same time.

This can cause overlapping configurations or other conflicts, because more than one change is applied at the same time.

There is one simple solution to avoid this problem: enabling the single-user (exclusive) access functionality of the Cisco IOS command-line interface (CLI). Configuration of this feature is very simple:

1. enable

2. configure terminal

3. configuration mode exclusive {auto | manual}

4. end

As you can see, exclusive mode has two options, auto or manual:

  • The auto keyword automatically locks the configuration session whenever the configure terminal command is used. This is the default.
  • The manual keyword allows you to choose to lock the configuration session manually or leave it unlocked.

I would recommend using the default auto mode, but if for some reason you need manual mode, then you need to perform some additional tasks:

1. enable

2. configure terminal lock

3. Configure the system by entering your changes to the running configuration.

4. end

The manual method allows you to lock the configuration mode only when you really need it locked. Compared to this, the auto mode locks the configuration all the time, so it’s considered safer.

When you are in exclusive configuration mode (no matter whether auto or manual) and are configuring something through the CLI, and another user connected to that device issues the configure terminal command, the following message will be displayed:

Configuration mode locked exclusively by user ‘unknown’ process ’88’ from terminal ‘0’. Please try later.
Rollback confirmed change timer is cancelled due to configuration lock error.

This is just an example; in your case the user, process or terminal may be different. The message is useful, as the second user trying to configure the device knows what’s going on and is not left in the fog without any clue.

Cisco: IP Policy Routing with IP SLA and EEM

Considering the same environment as in the post Cisco: Policy Routing with IP SLA, there is another way to achieve the same behavior, again using IP SLA together with EEM (Embedded Event Manager).

For those of you who are not so familiar with EEM, please read http://www.cisco.com/en/US/products/ps6815/products_ios_protocol_group_home.html. You will find a nice explanation and some examples of how to use EEM to achieve the desired result.

Now, going back to our example, please consider the same topology as in the previous post:

We start by configuring again the IP SLA (explanation in this post):

ip sla 5
icmp-echo 172.82.100.1 source-interface GigabitEthernet0/0
timeout 1000
frequency 2
ip sla schedule 5 life forever start-time now

We now have the path measured. Instead of tracking it and applying the route based on tracking, we take a different approach: we use EEM to check the state of the IP SLA, and according to the result we configure the necessary IP routing. For EEM to work we need to know an SNMP Object name and the OID associated with it. In my example I will use the SNMP Object name rttMonCtrlOperTimeoutOccurred, with OID value 1.3.6.1.4.1.9.9.42.1.2.9.1.6.

According to Cisco’s explanation: “This object is set to true when an operation times out, and set to false when an operation completes under rttMonCtrlAdminTimeout. When this value changes, a reaction may occur, as defined by rttMonReactAdminTimeoutEnable.”

As a summary, we will check the IP SLA with EEM using a certain SNMP Object. When a change occurs in the monitored IP SLA, EEM will apply a certain configuration defined by us:

event manager applet IP-SLA-5-TIMEOUT
event snmp oid 1.3.6.1.4.1.9.9.42.1.2.9.1.6.5 get-type exact entry-op eq entry-val 1 exit-op eq exit-val 2 poll-interval 5
action 1.0 syslog msg “172.82.100.1 not reachable – primary line NOK”
action 1.1 cli command “enable”
action 1.2 cli command “configure terminal”
action 1.3 cli command “ip route 0.0.0.0 0.0.0.0 10.10.10.1”

EEM is based on an SNMP event, monitoring the OID value explained above. You may notice that another value, .5, has been appended to the end of the OID. This is important, as it defines the relation between EEM and IP SLA; in my case this number is 5 because that is how the IP SLA session is defined, but in your case it may be different. EEM checks whether the TruthValue is 1 (true) or 2 (false) on a 5-second interval and applies the defined configuration. The EEM applet triggers on value 1 (true), i.e. when the timeout occurs in IP SLA.

You might wonder what will happen when the primary line is working. Well, nothing in this configuration, because this EEM applet is not configured for the case when the primary line is OK. In other words, EEM will not retract the backup default IP route. For this we need another EEM applet, configured with a small modification:

event manager applet IP-SLA-5-OK
event snmp oid 1.3.6.1.4.1.9.9.42.1.2.9.1.6.5 get-type exact entry-op eq entry-val 2 exit-op eq exit-val 1 poll-interval 5
action 1.0 syslog msg “172.82.100.1 is reachable – primary line OK”
action 1.1 cli command “enable”
action 1.2 cli command “configure terminal”
action 1.3 cli command “no ip route 0.0.0.0 0.0.0.0 10.10.10.1”

Now the EEM applet triggers on value 2 (false), i.e. when no timeout occurs in IP SLA.

You might be interested in another EEM configuration, which sends an e-mail notification when a certain condition occurs. Check it here.