Cisco QoS at-a-glance

Stephan, a colleague of mine, found the following documents while digging through multiple pages of Cisco.com. The documents present a nice overview of the different QoS approaches and the most important information, something like “cheatsheets”. They were helpful to us when we needed to implement QoS in some parts of the network that we administer. I hope they will help you as well.

Maybe you’re wondering why I’m adding them here, since the documents are already somewhere on Cisco.com. As you probably know, Cisco has been constantly changing their website over the last months, and a lot of documentation is misplaced in the Cisco.com sitemap. We already had problems finding all the links, so I thought: why not share them here, as they are already public documents made by Cisco.

You’ll find a Download button under each document for the PDF version, and at the end of this post there is a link to download all the documents in one archive. If somebody needs only one document and has a poor Internet connection, why force them to download the full archive?

Cisco – Campus QoS Design

Cisco – Branch QoS Design

Cisco – IPv6 QoS

Cisco – QoS Best Practices

Cisco – QoS Design for IPsec VPNs

Cisco – QoS Design for MPLS VPN Service Providers

Cisco – Scavenger Class QoS Strategy for DoS / Worm Attack Mitigation

Cisco – QoS Design for MPLS VPN Subscribers

Cisco – QoS Baseline

Cisco – WAN QoS Design

As mentioned in the beginning, if you’d prefer, you can download all the QoS documents in one archive.

Let me know your opinion on the above approach to QoS from Cisco. Is it accurate? Do you apply these designs in your organization, whether for Campus, WAN, VPN or even Security?

Cisco: Port-channel load-balancing explanation [Part II]

As I promised in Part I of this article, here is the second part, covering the explanation of the remaining port-channel load-balancing methods. If you don’t know what I’m talking about, please be sure to read the first part of this article first. Everything remains the same in this scenario: we have 3 physical interfaces bundled into one port-channel, and together with this port-channel we have some possible sources and destinations.

In this Part II, I will try to explain the remaining 6 port-channel load-balancing methods:

src-ip
src-mac
src-port
dst-mixed-ip-port
src-mixed-ip-port
src-dst-mixed-ip-port



src-ip / src-mac / src-port

I’ve grouped these 3 methods under one example, as the basic principle is the same: load distribution is based on the source IP address / MAC address / port, completely ignoring the destination IP address / MAC address / port.

In the above case, all traffic from Source A (depending on the method, this can be IP address A / MAC address A / port A) is forwarded through physical interface Fa0/1 in Port-channel 1, no matter its destination. Fa0/2 and Fa0/3 are not alternatives in these load-balancing methods.
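
On a Catalyst switch, for example, selecting one of these methods is a single global command, and you can verify the active method afterwards. A minimal sketch (the “Switch” hostname is just illustrative and the output is abbreviated):

Switch(config)# port-channel load-balance src-ip
Switch(config)# end
Switch# show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        src-ip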

dst-mixed-ip-port

Load distribution based on the destination IP address and the TCP or UDP port.

This method offers more granularity, and we start to have a more complex scenario, as the process takes into consideration a mix of IP address and TCP/UDP port. We may have the following scenarios for traffic load-balancing:

– packets from Src A to Dst A on port 80 – Fa0/1 in Po1 – valid alternative
– packets from Src A to the same Dst A, but on port 25 – Fa0/2 in Po1 – also valid, because the IP address is the same but the TCP port is different
– packets from Src A to Dst B, same port 25 – Fa0/2 in Po1 – valid, as it is the same port (25) but a different IP address
– packets from Src A to Dst B on port 25 – Fa0/3 in Po1 – not valid, as there is already a path through Fa0/2 for packets matching this destination and port
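
Enabling this method follows the same pattern as the others (a sketch; keep in mind that the mixed ip-port methods are only available on certain platforms, the Catalyst 6500 being one example):

Switch(config)# port-channel load-balance dst-mixed-ip-port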

src-mixed-ip-port

Load distribution based on the source IP address and the TCP or UDP port.

This method also offers great granularity in the load-balancing process. If we analyze typical port usage in communication, I would say that this method offers more granularity than the dst-mixed-ip-port one, because source ports are chosen more randomly than destination ports. Here we have the following scenarios:

– Src A and source port 32343 to Dst A – Fa0/1 in Po1 – valid choice
– Src A and source port 32345 to Dst A – Fa0/2 in Po1 – valid choice (same source IP, different source port)
– Src A and source port 32346 to Dst B – Fa0/2 in Po1 – valid choice (same source IP, different source port than in the previous example); you might think the different destination matters, but in this method the destination IP and port are not taken into consideration
– Src A and source port 32346 to Dst C – Fa0/3 in Po1 – not a valid choice, as the path for this source IP / source port pair is already defined through Fa0/2

src-dst-mixed-ip-port

Load distribution based on the source XOR-destination IP address and the TCP or UDP port.

The best granularity so far. Almost every path in Po1 is a valid choice. You can imagine that the only case where a path is considered not valid is when a given pair of SRC IP : PORT -> DST IP : PORT is already forwarded through one port (Fa0/2); then Fa0/3 is not a valid choice for that same traffic. Otherwise, there are more possibilities to load-balance traffic than in the previous methods. The issue is that not all devices support these last methods (especially the last 3), so if your device is not capable of supporting these complex methods, you have to deal with the other ones and choose the best one for your scenario.

I want to close this article by mentioning one more load-balancing method:

port-channel load-balance mpls

This method sets the load-distribution method among the ports in the bundle for Multiprotocol Label Switching (MPLS) packets; to set it, use the port-channel load-balance mpls command in global configuration mode. I never had the chance to work with this method, so I don’t know exactly how it works (only in theory). If anybody has experience with it, I would be glad to add their explanation here (with credits, of course). Otherwise you’ll have to wait until I get my hands on this configuration, and then I’ll share my knowledge with you.
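
For reference, the syntax documented for the Catalyst 6500 (I haven’t verified this myself, so treat it as an unconfirmed sketch) offers two keywords, hashing either on the MPLS label alone or on the label plus the underlying IP address:

port-channel load-balance mpls {label | label-ip}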

Cisco: Port-channel load-balancing explanation [Part I]

Port-channel (or EtherChannel) is a great way to increase the transport capacity between 2 switches, or between a switch and an end device that supports load balancing (e.g. a server). Today I don’t want to focus on how port-channels are configured, but rather on how they load-balance traffic over the multiple interfaces included in a bundle.

To configure port-channel load balancing, you have to be in config mode and issue:

port-channel load-balance method

or

port-channel load-balance method module slot

where method can be one of the following:

dst-ip
dst-mac
dst-port
src-dst-ip
src-dst-mac
src-dst-port
src-ip
src-mac
src-port
dst-mixed-ip-port
src-dst-mixed-ip-port
src-mixed-ip-port

Not every Cisco device supports all of these methods, due to hardware / IOS restrictions (you can check what your platform supports straight from the CLI, as shown below). Today I plan to explain the first six of these methods, and in a future post the remaining six.
Understanding the port-channel load-balancing methods is not a difficult topic, but rather a tricky one. That’s why I drew some basic scenarios to help you better remember each load-balancing method.

I have to admit that it was a little bit hard for me to remember them and how they work, so if I make any mistake, or you don’t understand something, please let me know in the comments or via the contact form.

dst-ip

Load distribution based on the destination IP address. It does not take into consideration the source of the packets, only the destination IP. If the destination is the same, the packets will be forwarded over the same wire (port Fa0/1 in the image below):

As you can see, packets from Src to Dst IP A will always take the path through port Fa0/1 in Port-channel 1. In this load-balancing mode, paths through ports Fa0/2 and Fa0/3 are not a valid choice.

dst-mac

Load distribution based on the destination MAC address. If the destination MAC address is reachable only through one of the three ports bundled in the port-channel, then only that interface is considered a path for all the packets sent to the same MAC destination:

In the image above, MAC address A is reachable through port Fa0/1, so that port is considered the acceptable path.

dst-port

By the keyword “port” we mean here a TCP or UDP port, not a physical interface. Communication for the same destination port (e.g. port 80) will be load-balanced over one single physical port (Fa0/1) in Port-channel 1. Data transfer for another destination port (e.g. port 25) will be directed through another interface (Fa0/2), and so on (the member link is picked by a hash of the destination port rather than strict round-robin).

src-dst-ip

Load distribution based on the source XOR-destination IP address. What this method does is pair the source and destination IP addresses and send all the traffic that matches this pair over one physical port in the port-channel. The advantage of this method over the previous one is granularity. With the previous method, the traffic to one destination was sent over one physical port in the port-channel without taking the source into consideration. With this method, packets to the same destination can be forwarded over different interfaces in the port-channel if they come from different IP sources.

In the example above, you see that packets with source A and destination A are forwarded over port Fa0/1. For the same destination A, but source B, a different path is taken over Fa0/2. However, from the same source B to the same destination A, Fa0/3 is not a valid path, as the traffic for this pair is already forwarded over Fa0/2.
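
On platforms that support it (the Catalyst 3750/3560 family, for instance), you can even ask the switch which member link a given source/destination pair would hash to. A sketch with made-up addresses:

Switch# test etherchannel load-balance interface port-channel 1 ip 10.1.1.10 10.2.2.20
Would select Fa0/2 of Po1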

src-dst-mac

Here we have the same principle as in src-dst-ip, with the difference that the MAC address is considered as the load-balancing element instead of the IP address:

src-dst-port

Load distribution based on the source XOR-destination port. The port (TCP or UDP) is the element taken into consideration when load-balancing in the port-channel:

Even if this may look like the src-dst-mac and src-dst-ip methods, it is actually different. You see above that it’s possible to load-balance peer-to-peer communication (PC-to-PC, if you want) if this communication takes place over different ports. In the picture above we have port 25 and port 80 communication involving 2 machines. You can see that the traffic from source A to destination port 80 takes port Fa0/1. From the same source, traffic to destination port 25 takes the path through interface Fa0/2. However, the same communication cannot be sent over interface Fa0/3 (as it’s already sent over Fa0/2 as part of the load-balancing process). This method adds even more granularity.

Come back for the next six methods of the port-channel load-balancing feature.

Cisco: Speed vs Bandwidth interface command

It’s not uncommon to see people making mistakes about the two interface commands, speed and bandwidth. Many young engineers (and not only young ones) assume that bandwidth and speed have the same meaning when applied under an interface, and that their purpose is to reduce the throughput of the interface to the limit specified by bandwidth or speed.

Well, this couldn’t be more wrong. The two commands do not perform any throughput limitation at all, and their scopes are totally different. Let’s analyze them a little bit:

Bandwidth

Here is what Cisco.com says about it:

To set the inherited and received bandwidth values for an interface, use the bandwidth command in interface configuration mode. To restore the default values, use the no form of this command.

bandwidth {kbps | inherit [kbps] | receive [kbps]}

no bandwidth {kbps | inherit [kbps] | receive [kbps]}

Now for the explanation. The bandwidth command is an optional, but most of the time recommended, interface command. Despite the word, it is not there to limit bandwidth, and you cannot adjust the actual bandwidth of an interface using this command. The interface bandwidth command is used to communicate the speed of the interface to higher-level protocols. Most of the time, a routing protocol needs to know the speed of the interface so it can choose the best route. Another effect of this command is that TCP will adjust its initial retransmission parameters based on the bandwidth configured on the interface.



I’m sure that you are familiar with, or at least have heard about, dynamic routing protocols like OSPF and EIGRP. OSPF and EIGRP in particular use the interface bandwidth to calculate their metrics. I will not go into the mathematical calculation of metrics for OSPF and EIGRP, but rather explain it in human terms. Imagine that you have two routers connected by 100Mbps interfaces, with 2 connections in parallel (2 x 100Mbps interfaces per router) over a provider network. The provider limits your actual bandwidth to 10Mbps on one connection and to 1Mbps on the other. You run OSPF or EIGRP over these connections. If you don’t specify the accurate bandwidth on each connection, OSPF or EIGRP will calculate the metrics based on the default interface speed (100Mbps). From their point of view both lines are equal, you have equal metrics, and you can run into the problem that the routers push packets down both lines; since one line has much higher throughput, packets will arrive at the destination out of order.

If you specify the bandwidth command under each 100Mbps interface (bandwidth 10000 on one and bandwidth 1000 on the other), then the IP routing protocols will sense the difference, and the 10Mbps line will be preferred over the 1Mbps line.
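
A minimal sketch of that setup (interface names and descriptions are just for illustration; remember the value is in kbps), with a quick verification that the routing protocols will now see the configured value:

Router(config)# interface FastEthernet0/0
Router(config-if)# description Provider line capped at 10Mbps
Router(config-if)# bandwidth 10000
Router(config-if)# interface FastEthernet0/1
Router(config-if)# description Provider line capped at 1Mbps
Router(config-if)# bandwidth 1000
Router(config-if)# end
Router# show interfaces FastEthernet0/0 | include BW
  MTU 1500 bytes, BW 10000 Kbit/sec, DLY 100 usec,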

Speed

Here is what Cisco.com says about it:

To configure the speed for a Fast Ethernet or Gigabit Ethernet interface, use the speed command in interface configuration mode. To return to the default setting, use the no form of this command.

speed {10 | 100 | 1000 [negotiate] | auto [speed-list]}

no speed

The speed command is actually much simpler to explain than bandwidth. Some interfaces (hardware dependent) allow you to set the speed. So even if the interface is a 100Mbps one, you can set it to 10Mbps. That means the interface transmits packets at up to 10Mbps. You will probably ask me now: OK, if we set the interface speed to 10Mbps, won’t this tell OSPF / EIGRP to calculate the metrics based on this value? Of course it will. But what will you do when you need a value other than the standard 10Mbps or 100Mbps, like 1Mbps? You cannot set speed 1. Or what if you have a hardware card which does not support speeds slower than 1Gbps? That’s why you use the bandwidth command to tell the upper protocols the actual throughput capability of the interface.

The speed command is also important, as it has to be the same on both ends of the connection. It can be auto-negotiated, it’s true, but sometimes auto-negotiation fails, and when speed auto-negotiation fails the interface defaults to 10Mbps half-duplex. You know what duplex is, don’t you? Well, imagine that your interfaces are 100Mbps full-duplex capable, but due to an auto-negotiation failure you transmit at 10Mbps and half-duplex (only one packet in one direction at a time on the wire). That will be a huge drawback for your network capacity. If you have doubts about your devices’ auto-negotiation capability, better hardcode the speed and duplex there to enjoy your night’s sleep.
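
Hardcoding both values takes two interface commands (a sketch; remember to do the same on the other end of the link):

Switch(config)# interface FastEthernet0/1
Switch(config-if)# speed 100
Switch(config-if)# duplex full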

I hope you understood the differences between the speed and bandwidth interface commands and that I was clear in my explanation. If not, that’s what the comments are for. Just ask me.

Cisco IOS: single user access in CLI configuration terminal


Usually, big companies with large networks have a dedicated department which deals with all the network configuration. The problem that I have in mind is that when this department is split over large geographical areas (e.g. some colleagues in Europe, some in Asia and some in America), it may happen that more than one colleague is working on the same device at the same time.

This can cause overlapping configurations or other problems, due to the fact that more than one config is applied at a time, causing conflicts.

There is one simple solution to avoid this problem: enabling the single-user (exclusive) access functionality for the Cisco IOS command-line interface (CLI). Configuration of this feature is very simple:

1. enable

2. configure terminal

3. configuration mode exclusive {auto | manual}

4. end

As you can see, mode exclusive has two options, auto and manual:

  • The auto keyword automatically locks the configuration session whenever the configure terminal command is used. This is the default.
  • The manual keyword allows you to choose to lock the configuration session manually or leave it unlocked.

I would recommend using the default auto mode, but if for some reason you need manual mode, then you need to perform some additional tasks:

1. enable

2. configure terminal lock

3. Configure the system by entering your changes to the running configuration.

4. end

The manual method allows you to lock the configuration mode only when you really need it to be locked. Compared to this, the auto mode locks the configuration all the time, so it’s considered safer.

When you are in exclusive configuration mode (no matter if auto or manual) and configuring something through the CLI, and another user connected to that device issues the configure terminal command, the following message will be displayed:

Configuration mode locked exclusively by user ‘unknown’ process ’88’ from terminal ‘0’. Please try later. Rollback confirmed change timer is cancelled due to configuration lock error.

This is just an example; in your case the user, process or terminal may be different. The message is useful, as the second user trying to configure the device knows what’s going on and is not left in the fog without any clue.
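
If you ever need to see who is holding the lock, IOS also provides a show command for it. A sketch (the exact output fields vary by release):

Router# show configuration lock
Parser Configure Lock
   Owner PID    : 88
   User         : unknown
   TTY          : 0
   Type         : EXCLUSIVE
   State        : LOCKED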