Cisco: Prioritize Voice traffic with LLQ



In one of my previous posts I explained how to mark packets closer to the network edge. Starting from that point, we are sure the packets are marked with the correct value, so on the router we can directly match those packets and prioritize them using Low Latency Queueing (LLQ).

I believe you already know why queueing is so important for voice packets especially, but also for all other kinds of real time traffic (e.g. video over IP), but here is a small reminder. Most interfaces use the FIFO method for queueing. This is the most basic queueing method and, as you probably know, means First In First Out. In human terms, the first packet that arrives on the interface will be sent first. There is nothing wrong with this theory up to this point, and I can assure you that most of the time you don't have to do anything to improve this technique. But what if you have real time protocols (e.g. VoIP services) and data transfer over the same physical interface? With FIFO the packets are sent out of the interface as they arrive, which is not good for delay sensitive traffic like voice. While a TCP packet in an HTTP flow can wait its turn to be sent out with no visible impact for the user, a delayed voice packet will cause degradation in the voice call.

With these problems needing a solution, we arrive at LLQ, which is an enhanced version of Priority Queueing (PQ) within Class-Based Weighted Fair Queueing (CBWFQ).

Before we start, let's have a look at the topology we will use (the same as in the Cisco: Mark voice packets at the network edge post):

After marking the packets on the Access Switch, we now want to prioritize the voice packets on the Core router:

1) Match packets marked with EF in a class-map

class-map VOICE
match dscp 46

2) Configure a policy-map under which you match the traffic in the class-map VOICE and enable LLQ. The "priority" keyword is what tells the policy-map to enable priority queueing under that class. The value after the "priority" keyword can be either a value in kbps or a percentage of the total bandwidth. In the example below I assume that I have 10Mbps of bandwidth and I'll configure the LLQ class to use 10% of it, meaning 1000 kbps:

policy-map MYPOLICY
class VOICE
priority 1000

or with percentage

policy-map MYPOLICY
class VOICE
priority percent 10
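
Note that with the percent form the router calculates the actual priority rate from the bandwidth configured on the interface, so make sure that value reflects the real circuit speed. A minimal sketch, assuming s0/0 is the 10Mbps WAN interface:

interface s0/0
bandwidth 10000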

I have to tell you that after the kbps or percent value you can add a burst value in bytes. If you don't add this value, it will be calculated automatically. I choose the automatic method when I'm doing a simple config, but if you want to fine tune the values you can calculate the burst yourself and add it. Be careful, as a higher value will influence the Tc value in the process.
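
Just for illustration, this is what the same class looks like with an explicit burst value added after the rate (the 32000 bytes below is an arbitrary example, not a recommendation):

policy-map MYPOLICY
class VOICE
priority 1000 32000

or, with the percent form:

priority percent 10 32000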

3) Apply the policy to the WAN interface of the Core router (I assume that the Core router is your direct connection to the provider backbone), in the outbound direction. You cannot apply this type of queueing in the inbound direction. Keep this in mind.

interface s0/0
service-policy output MYPOLICY

If you insist on applying it inbound, you’ll get an error message:

Core(config-if)#service-policy input MYPOLICY
Low Latency Queueing feature not supported in input policy.

To check that your queueing policy is applied:

show policy-map interface s0/0

  Service-policy output: MYPOLICY

    queue stats for all priority classes:

      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0

    Class-map: VOICE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: EF
      Priority: 10% (1000 kbps), burst bytes 25000, b/w exceed drops: 0

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0

Cisco tips: Track down communication issues – Part 2

In the 1st part of this series, I described the most common steps that you should follow to troubleshoot a total lack of communication between a Layer 2 device (Cisco switch) and an end user device. As promised, here is the second part, in which I'll try to show you what you can check when you have no problem with the connection itself, but you still encounter a degradation in service. By degraded service I mean a scenario in which you have packet loss, for example, or an intermittent connection that affects communication and will more than surely make the users unhappy.

We will stick with the same scenario in which an end user device is connected to a Cisco switch. Remember that until now we have only troubleshot at Layer 1 and Layer 2. Today we will stay in the same area, so nothing directly related to IP, routing protocols or complex networking environments.

Scenario 2: You have an end device connected to a switch and you have degraded communication

a) Check for errors on the interface:
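
On a Catalyst switch, a command like the one below shows the error counters we care about (in these examples I'll assume the user is connected to GigabitEthernet0/1; adjust it to your own port):

show interfaces GigabitEthernet0/1
show interfaces GigabitEthernet0/1 | include error|CRC

The first form gives you the full picture; the second one filters out just the lines with the input errors, CRC and output errors counters.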


In this example there are no errors, but if you find something there, you may want to keep an eye on that port. Try to issue the above command a couple of times to see if the errors are increasing in real time, as this is the worst case possible and you should take action immediately. Errors on the interface can be caused by a faulty interface on the switch or on the other end, an Ethernet cable issue or a wrong configuration.

b) Check the interface queue and drop packets
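
The queue and drop counters live in the same show interfaces output; assuming GigabitEthernet0/1 again, you can filter just the interesting lines:

show interfaces GigabitEthernet0/1 | include queue|drops|rate

Look at the "Input queue" size/max/drops/flushes values, the "Total output drops" counter and the input/output rates.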


The interface queue is very important and you should check it during your troubleshooting process. With the above command you can see how many packets are in the input / output queues, what the transmit and receive rates are and, very importantly, whether you have packets dropped from the input and output queues. Usually this happens when there is a lot of load on the switch and it cannot process all the packets as quickly as needed. This leads us to the next step.

c) Check the CPU load on the switch
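
The description below (load over the last 60 seconds and 60 minutes) matches the CPU history graphs, so the command I have in mind here is:

show processes cpu history

If you only want the current numbers, "show processes cpu" prints the utilization for the last 5 seconds, 1 minute and 5 minutes on its first line.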


The command output is longer, but the most interesting parts for this example are the first two sections, which show the load over the last 60 seconds and the last 60 minutes. If you see peaks up to 100 there, that's bad and the device has some issues that need to be fixed.

d) Identify what process is keeping the CPU busy
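
A sorted view of the processes makes this easy; for example:

show processes cpu sorted

The busiest processes are listed at the top, together with their 5 second, 1 minute and 5 minute CPU percentages.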


Most of the time this is easy to read and you can see which process is eating all your CPU power. When you see something like Fifo Error Detection at 100%, you have to consider that maybe there is something wrong with the queue on one of the interfaces, and try to find which one has the problem. This is not straightforward and you have to check a lot of things, but it can be helpful. To be honest, I see a lot of engineers just reloading the device, and then the problem is solved (if it was due to a hardware issue and not a configuration mistake).

e) Check for memory issue on the switch
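
Two standard commands help here:

show memory statistics
show processes memory sorted

The first shows total, used and free memory per pool; the second lists the processes sorted by how much memory they are holding.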


Again, if you run out of memory, bad things can happen to your device as well as to the communication with devices connected to the switch. Reloading the device solves about 90% of these kinds of problems. Still, I don't recommend just unplugging the power cable as soon as you see a memory problem. First have a look, maybe there is something you can fix without reloading the device.

f) Check for problems with storm-control implementation
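
Just as a sketch, this is what such a (bad) configuration and its verification could look like, again assuming the user port is GigabitEthernet0/1:

interface GigabitEthernet0/1
storm-control broadcast level 1.00
storm-control unicast level 1.00

show storm-control

The show command lists, per interface, the filter state, the configured levels and the current traffic level, so a suspiciously low threshold is easy to spot.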


In one of my previous posts I explained how you can use storm-control to limit the available bandwidth on a Cisco switch interface. In the example above I set this limit to 1% of the available one gigabit (I know it's stupid, but imagine a typo). Imagine what effect this will have on the traffic: everything above 1% is kept in the queue until the queue is full and is then silently discarded.

g) As a general rule, have a look into the logs (maybe this should be the first step!)

If there are a lot of spanning-tree reconfigurations, interface flaps or anything else that looks suspicious, be sure to check on it, as you may find the root cause of your problems there.
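
To go through the log buffer, and optionally filter for the usual suspects (interface up/down and spanning-tree messages), something like this will do:

show logging
show logging | include LINK|LINEPROTO|SPANTREE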

Do you have any other tips in regard to this topic? Anything else you check that could be added here? Be sure to comment below and your suggestions will be taken into consideration.