Cisco: Port-channel load-balancing explanation [Part II]

As I promised in Part I of this article, here is the second part, covering the remaining port-channel load-balancing methods. If you don't know what I'm talking about, please make sure you have read the first part of this article. Everything remains the same in this scenario: we have 3 physical interfaces bundled into one port-channel, together with some possible sources and destinations.

In this Part II, I will try to explain the remaining 6 port-channel load-balancing methods:

src-ip
src-mac
src-port
dst-mixed-ip-port
src-mixed-ip-port
src-dst-mixed-ip-port



src-ip / src-mac / src-port

I've grouped these 3 methods under one example, as the basic principle is the same: load distribution based on the source IP address / MAC address / port, completely ignoring the destination IP address / MAC address / port.

In the above case, all traffic from Source A (depending on the method, this can be IP address A / MAC address A / port A) is forwarded through physical interface Fa0/1 in Port-channel 1, no matter its destination. Fa0/2 and Fa0/3 are not alternatives with these load-balancing methods.
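To make the idea concrete, here is a toy Python sketch of a source-only method. The link names, addresses and the last-octet "hash" are invented for illustration; real switch hardware uses its own hash over low-order bits of the frame fields.

```python
MEMBERS = ["Fa0/1", "Fa0/2", "Fa0/3"]  # member links of Port-channel 1

def pick_link(src_ip):
    """src-ip style: pick a member link from the source IP only."""
    last_octet = int(src_ip.split(".")[-1])
    return MEMBERS[last_octet % len(MEMBERS)]

# Every packet from the same source maps to the same link,
# whatever its destination:
print(pick_link("10.0.0.5"))  # same link for this source, every time
```

Swap the source IP for a source MAC or source port and you get the src-mac and src-port variants of the same idea.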

dst-mixed-ip-port

Load distribution based on the destination IP address and the TCP or UDP port.

This method offers more granularity, and the scenario becomes more complex, as the process takes into consideration a mix of IP address and TCP/UDP port. We may have the following traffic load-balancing scenarios:

– packets from Src A to Dst A, port 80 – Fa0/1 in Po1 – valid alternative
– packets from Src A to the same Dst A, but port 25 – Fa0/2 in Po1 – also valid, because the IP address is the same but the TCP port is different
– packets from Src A to Dst B, same port 25 – Fa0/2 in Po1 – valid, as it is the same port (25) but a different IP address
– packets from Src A to Dst B, port 25 – Fa0/3 in Po1 – not valid, as there is already a path through Fa0/2 for packets matching this destination and port
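The scenarios above can be sketched with a toy hash over the destination IP and port (purely illustrative; the addresses, ports and hash function are invented and the real hardware hash differs):

```python
MEMBERS = ["Fa0/1", "Fa0/2", "Fa0/3"]  # member links of Po1

def pick_link(dst_ip, dst_port):
    """dst-mixed-ip-port style: mix the destination IP and L4 port."""
    last_octet = int(dst_ip.split(".")[-1])
    return MEMBERS[(last_octet ^ dst_port) % len(MEMBERS)]

# Same destination IP, different destination port ->
# the flows can land on different member links:
print(pick_link("10.0.0.2", 80))
print(pick_link("10.0.0.2", 25))
```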

src-mixed-ip-port

Load distribution based on the source IP address and the TCP or UDP port.

This method also offers great granularity in the load-balancing process. Looking at how ports are used in practice, I would say this method offers even more granularity than the dst-mixed-ip-port one, because source ports are chosen more randomly in a communication than destination ports. Here we have the following scenarios:

– Src A with source port 32343 to Dst A – Fa0/1 in Po1 – valid choice
– Src A with source port 32345 to Dst A – Fa0/2 in Po1 – valid choice (same source IP, different source port)
– Src A with source port 32346 to Dst B – Fa0/2 in Po1 – valid choice (same source IP, different source port than in the previous example); you might notice that the destination is also different, but in this method the destination IP and port are not taken into consideration
– Src A with source port 32346 to Dst C – Fa0/3 in Po1 – not a valid choice, as the path for this source IP / source port is already defined through Fa0/2

src-dst-mixed-ip-port

Load distribution based on the source XOR destination IP address and the TCP or UDP port.

The best granularity so far. Almost every path in Po1 is a valid choice. The only case in which a path is considered invalid is when a given pair of SRC IP:PORT -> DST IP:PORT is already forwarded through one port (say Fa0/2); then Fa0/3 is not a valid choice for the same traffic. Otherwise, there are more possibilities to load-balance traffic than with the previous methods. The issue is that not all devices support these methods (especially the last 3), so if your device is not capable of this complex method, you will have to work with the other ones and choose the best one for your scenario.
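A toy sketch of this idea, XOR-ing a source IP:port mix with a destination IP:port mix (the addresses, ports and hash are invented for illustration, not the actual hardware algorithm):

```python
MEMBERS = ["Fa0/1", "Fa0/2", "Fa0/3"]  # member links of Po1

def pick_link(src_ip, src_port, dst_ip, dst_port):
    """src-dst-mixed-ip-port style: XOR the source and destination mixes."""
    src_mix = int(src_ip.split(".")[-1]) ^ src_port
    dst_mix = int(dst_ip.split(".")[-1]) ^ dst_port
    return MEMBERS[(src_mix ^ dst_mix) % len(MEMBERS)]

# The same SRC IP:PORT -> DST IP:PORT pair always lands on the same
# member link; change any of the four fields and the link may change.
flow = ("10.0.0.1", 32343, "10.0.0.9", 22)
print(pick_link(*flow))
```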

I want to close this article by adding a new load-balance method:

port-channel load-balance mpls

This method sets the load-distribution method among the ports in the bundle for Multiprotocol Label Switching (MPLS) packets; you enable it with the port-channel load-balance mpls command in global configuration mode. I never had the chance to work with this method, so I don't know exactly how it works (just in theory). If anybody has experience with it, I would be glad to add an explanation here (with credits, of course). Otherwise you'll have to wait until I get my hands on this configuration, and then I'll share my knowledge with you.

Virtual WAN Optimization – Blue Coat presentation

Chris Webber from Blue Coat Systems describes the concept of virtualizing WAN Optimization and WAN Acceleration systems. Of course, since Blue Coat Systems is involved, you can consider this video presentation a bit of a marketing exercise, but if you think about it, all companies out there do the same. It's somehow normal.

Skipping the marketing part, this is a good explanation of virtualized WAN Optimization, and you can get an overall view of what it means and how it can be implemented. Information is always welcome, no matter the source, so I recommend you spend 10 minutes and watch this video.


Brought to you by NetworkWorld.tv and FirstDigest

The difference between 3G and 4G


An excellent explanation by Craig Mathias of what 3G and 4G are, download speeds, and the different generations of wireless technologies.


Brought to you by NetworkWorld.tv and FirstDigest

Cisco: How can MSS help to solve issues in VPN communication

For a week now, I've been racking my brain to solve a communication problem over a VPN connection. The problem was that connections like SSH over VPN were not completing successfully. Imagine site A (Paris – remote end) and site B (Hamburg – local end).

Behind these sites sit servers and clients. If somebody tried to connect over SSH from a client in site A to a server in site B, the initial authentication was successful, but as soon as a command was typed on the terminal (like ls -la or ps aux) and the server had to return a bunch of results, the SSH console was completely stuck.

Immediately I suspected this had to do with the MTU size (1500 bytes by default on each site) and the DF (Don't Fragment) bit being set. I tested SSH from Windows and Linux machines, and every time the DF bit was set:


I think it's obvious why I painted over the IP addresses. The interesting part is that only SSH had the DF bit set; with FTP or a regular ping, the DF bit was not on.

You can imagine the problems this caused. With the MTU at 1500 bytes and the DF bit on, packets could not be fragmented. With the IPsec VPN and point-to-point GRE overhead, a packet could end up at 1604 bytes, so most of the packets got dropped.

If I decreased the MTU on the client / server interface to, let's say, 1300 bytes, everything worked fine. However, this was not a scalable solution with many clients and servers, so to prevent future issues I kept looking for another solution. Salvation came from the "ip tcp adjust-mss" command.

You probably know what MSS is, but here is a short description. The maximum segment size (MSS) is an option of the TCP protocol that specifies the largest amount of data, in bytes, that a computer or communications device can receive in a single, unfragmented piece. It does not count the TCP header or the IP header. For optimum communication, the number of bytes in the data segment and the headers must not add up to more than the number of bytes in the maximum transmission unit (MTU).

This Maximum Segment Size (MSS) announcement (often mistakenly called a negotiation) is sent from the data receiver to the data sender and says "I can accept TCP segments up to X bytes".

Typically a host bases its MSS value on its outgoing interface's maximum transmission unit (MTU) size. If the MSS value is set too low, the result is inefficient use of bandwidth: more packets are required to transmit the data. An MSS value that is set too high could result in an IP datagram that is too large to send and must be fragmented. In my case, with an MTU of 1500 bytes, the advertised MSS was 1460 bytes (1500 minus 40 bytes of IP and TCP headers). This value didn't help me at all, due to the issues I've mentioned above.

So, instead of changing the MTU size on every device to a value lower than 1500 bytes, I decided to change the MSS to a reasonable value that would solve my problem. On Cisco devices, you can configure the MSS in a few straightforward steps:

Router# configure terminal
Router(config)# interface FastEthernet 0/0
Router(config-if)# ip tcp adjust-mss 1300

The 1300 bytes is just the value I used in my case; the command accepts values between 500 and 1460 bytes.
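The arithmetic behind picking a value like 1300 can be sketched as follows. The 104-byte overhead figure is derived from the 1604-byte packets mentioned earlier; the exact GRE/IPsec overhead varies with the transform set, so treat this as an estimate:

```python
MTU = 1500                     # interface MTU on both sites
TCP_IP_HEADERS = 40            # 20-byte IP header + 20-byte TCP header
TUNNEL_OVERHEAD = 1604 - MTU   # GRE + IPsec overhead observed above (104 bytes)

# Largest MSS that keeps the encapsulated packet within the MTU:
safe_mss = MTU - TUNNEL_OVERHEAD - TCP_IP_HEADERS
print(safe_mss)  # 1356
```

Any MSS at or below this value avoids fragmentation here; 1300 simply leaves some extra margin.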

Next time you have problems with connections over VPN, try setting the MSS to a lower value and see if it works. If it does, you are saved. If not… maybe your problem is not related to this topic.

Cisco: Port-channel load-balancing explanation [Part I]

Port-channel (or EtherChannel) is a great way to increase the transport capacity between 2 switches, or between a switch and an end device that supports load balancing (e.g. a server). Today I don't want to focus on how port-channels are configured, but rather on how they load-balance traffic over the multiple interfaces included in a bundle.

To configure the port-channel load balance, you have to be in the config mode and issue:

port-channel load-balance method

or

port-channel load-balance method module slot

the method can be one of the following:

dst-ip
dst-mac
dst-port
src-dst-ip
src-dst-mac
src-dst-port
src-ip
src-mac
src-port
dst-mixed-ip-port
src-dst-mixed-ip-port
src-mixed-ip-port

Not every Cisco device supports all of these methods, due to hardware / IOS restrictions. Today I plan to explain the first six of these methods, and in a future post the remaining six.
Understanding the port-channel load-balancing methods is not a difficult topic, but rather a tricky one. That's why I drew some basic scenarios to help you better remember each load-balancing method.

I have to admit that it was a little hard for me to remember them and how they work, so if I make any mistake or you don't understand something, please let me know in the comments or via the contact form.

dst-ip

Load distribution based on the destination IP address. It does not take the source of the packets into consideration, only the destination IP. If the destination is the same, the packets will be forwarded over the same wire (port Fa0/1 in the image below):

As you can see, packets from Src to Dst IP A will always take the path through port Fa0/1 in Port-channel 1. In this load-balancing mode, the paths through ports Fa0/2 and Fa0/3 are not valid choices.

dst-mac

Load distribution based on the destination MAC address. Once a destination MAC address is mapped to one of the three ports bundled in the port-channel, only that interface is used as the path for all packets sent to that MAC destination:

In the image above, MAC address A is mapped to port Fa0/1, so that port is the chosen path.

dst-port

By the keyword "port" we understand here a TCP or UDP port, not a physical interface. Communication for the same port (e.g. port 80) will be load-balanced over one single physical port (Fa0/1) in Port-channel 1. Data transfer for another port (e.g. port 25) will be directed through another interface (Fa0/2), and so on, in a round-robin manner.

src-dst-ip

Load distribution based on the source XOR destination IP address. What this method does is pair the source and destination IP addresses and send all traffic matching that pair over one physical port in the port-channel. The advantage of this method over the previous one is granularity. With the previous method, traffic to one destination was sent over one physical port in the port-channel without taking the source into consideration. With this method, packets to the same destination can be forwarded over different interfaces in the port-channel if they come from different source IPs.

In the example above, you see that packets with source A and destination A are forwarded over port Fa0/1. For the same destination A but source B, a different path is taken, over Fa0/2. However, from the same source B to the same destination A, Fa0/3 is not a valid path, as traffic for this pair is already forwarded over Fa0/2.
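As a toy sketch of this pairing (addresses, link names and the hash are invented for illustration; the real hardware hash is different), XOR-ing source and destination can give different links for different sources even when the destination is the same:

```python
MEMBERS = ["Fa0/1", "Fa0/2", "Fa0/3"]  # member links of Port-channel 1

def pick_link(src_ip, dst_ip):
    """src-dst-ip style: hash on the source/destination pair."""
    s = int(src_ip.split(".")[-1])
    d = int(dst_ip.split(".")[-1])
    return MEMBERS[(s ^ d) % len(MEMBERS)]

# Same destination, two different sources -> two different links:
print(pick_link("10.0.0.1", "10.0.0.4"))  # source A
print(pick_link("10.0.0.2", "10.0.0.4"))  # source B
```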

src-dst-mac

Here we have the same principle as in src-dst-ip, with the difference that the MAC address is the load-balancing element instead of the IP address:

src-dst-port

Load distribution based on the source XOR destination port. The TCP or UDP port is the element taken into consideration when load-balancing in the port-channel:

Even if this may look like the src-dst-mac and src-dst-ip methods, it is actually different. As you see above, it is possible to load-balance peer-to-peer communication (PC-to-PC, if you want) if this communication takes place over different ports. In the picture above we have port 25 and port 80 communication involving 2 machines. You can see that traffic from source A to destination port 80 takes port Fa0/1. From the same source to destination port 25, the path is through interface Fa0/2. However, the same communication cannot be sent over interface Fa0/3 (as it is already sent over Fa0/2 as part of the load-balancing process). This method adds even more granularity.

Come back for the next six methods of the port-channel load-balancing feature.

How to connect Vyatta to Cisco using VirtualBox and GNS3

Vyatta is "a software-based, open-source network operating system that is portable to standard x86 hardware as well as common virtualization and cloud computing platforms. By deploying Vyatta, users benefit from a flexible enterprise-class routing and security feature set capable of scaling from DSL to 20 Gbps performance at a fraction of the cost of proprietary solutions."

In short, you take this piece of software, install it on an x86 machine (any decent PC with a quality network card will do), and you have yourself a network device capable of supporting dynamic routing protocols, policy routing, QoS and many more features. The best part (at least for guys like me) is that Vyatta offers a free package that you can download from their website. The free version comes without commercial support, but you can find plenty of support in the Vyatta community.



Why would I be interested in Vyatta when I can have Cisco devices and can emulate Cisco IOS with GNS3? I don't have an elaborate answer to this question; mainly I was bored and wanted to try something new, but still related to Cisco. I arrived at the conclusion that I should test how Cisco interacts with 3rd-party devices. I chose Vyatta as the 3rd-party device because it is a turnkey network solution. Of course, you can take all the software included in Vyatta and build your own box based on whatever Linux distribution you want, but why do this when you have a free solution that already works?

I plan to test more of the Vyatta and Cisco integration, but for today I want to show you how to install Vyatta and connect it with GNS3. First you need to download the image from Vyatta and build yourself a working box. You can download the LiveCD image, which allows you to boot from it and then install, or an image for your virtualization system (VMware or Citrix).

1. Create a Virtual Machine on which to install the Vyatta system

For my test environment, I chose to create a Virtual Machine using VirtualBox with the following settings:

The minimum settings are 512 MB of memory and 2 GB of storage. The rest of the settings are optional, but if you would like to test some network stuff, I recommend at least 1 network adapter. I have 2 in this image, because one will be connected to the virtual network cloud (tap0 interface) and the other to the physical network, so I can access this system remotely.

2. Install Vyatta system

Download your copy of Vyatta, add the ISO image to the IDE Secondary Master (CD/DVD) and boot your virtual machine. It should read the image and boot until you arrive at a Linux-style prompt asking for a username and password (vyatta / vyatta by default).

Log in and install Vyatta from the LiveCD. You can also work directly from the LiveCD, but then the changes will not be permanent. The persistent installation can be image-based or disk-based:

– Image-based install. The simplest, most flexible, and most powerful way to install a Vyatta system is using a binary system image. With this method, you can install multiple versions of the Vyatta system as images and switch between the images simply and easily. You install the image from a LiveCD, reboot your system and it runs the image.

At the command prompt type:

install-image

– Disk-based install. Installation from a LiveCD onto a persistent device such as a hard disk partition. However, unlike an image-based install, a disk-based install uses a traditional layout of files on the disk. Additional system images may be added at a later time to a system created using a disk-based install.

At the command prompt type:

install-system

To be honest, in a test environment it doesn't make much of a difference whether you use the image-based or the disk-based installation. I used the image-based one, as it's the simplest, and it's what Vyatta recommends.

3. Connect Vyatta with GNS3

Then you need a system with GNS3 installed. I’m using the same system on which VirtualBox is installed.  The scenario for today is pretty straightforward, as I just want to demonstrate how to connect Vyatta to a Cisco device (well, an emulated one in my case):

If you don't know how to achieve the connection above in GNS3, please read this tutorial about connecting GNS3 to VirtualBox machines. In that post I used an Ubuntu system instead of Vyatta, but the principle is the same.

4. Basic network configuration of Vyatta system

If you work a lot with Cisco, like I do, you'll find the configuration mode a little different from Cisco's standard IOS CLI. If you work with Juniper, this might look familiar, as the configuration commands and the config files look pretty much like Juniper's.

By default no remote access is enabled, so you'll have to access this device over the console. In VirtualBox there is a Console tab; if you open it, you'll be able to log in to the Vyatta system and configure it.

4.a) Check the config file to get an idea of what is already configured by default:

@vyatta:~$ show configuration

and you’ll see something like this:

Please ignore the user "yotis". This is not in the default config; I changed a few things there to secure my Vyatta installation. Now you know how the config looks.

4.b) Enter the configuration mode:

@vyatta:~$ configure
[edit]
yotis@vyatta#

4.c) Configure the external interface (the one bridged to your physical network).

We want to do this to be able to access the Vyatta device remotely. Depending on your own IP subnet, you'll need to customize the command below to meet your requirements:

set interfaces ethernet eth0 address 1.1.1.1/24

Now, you might wonder how in the name of God you are supposed to know that command. The answer: read the documentation, or do what I did and press the TAB key at the command prompt to check your options. All configuration commands start with set, so type set and then press TAB:

@vyatta# set
cluster             firewall            load-balancing      protocols           service             vpn
content-inspection  interfaces          policy              qos-policy          system              zone-policy

we are interested in interfaces here, so:

@vyatta# set interfaces
adsl             bridge           loopback         openvpn          serial           wireless
bonding          ethernet         multilink        pseudo-ethernet  tunnel           wirelessmodem

Then continue with ethernet, the interface name, the address keyword and the IP address. If you press the TAB key in the middle of a command keyword, it will autocomplete the word, exactly like in the Cisco CLI.

It's pretty simple. If you get stuck somewhere, go back to the documentation or ask in the comments here.

4.d) Add a default gateway if you want to be able to access your system from anywhere (optional)

@vyatta# set protocols static route 0.0.0.0/0 next-hop 1.1.1.100

4.e) Configure the second interface (LAN)

Configure the second interface (bridged to tap0, in my case). This one will be connected to your Cisco router in GNS3. You'll need the same IP subnet on both ends (Vyatta and Cisco), but I believe you already know this. Follow the same steps as in 4.c).
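For example, with the 10.86.0.1/24 address that is used on eth1 in the ping test of step 5 (adjust the interface name and subnet to your own setup), the command would be:

```
@vyatta# set interfaces ethernet eth1 address 10.86.0.1/24
```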

4.f) Commit your changes

No matter what configuration you set, it will not become active until you commit it:

@vyatta# commit

4.g) Save your configuration

@vyatta# save
Saving configuration to ‘/opt/vyatta/etc/config/config.boot’…
Done

Almost done. You have completed the basic network configuration of the Vyatta system. Now configure the Cisco router in GNS3; I believe you know how to do that.

5. Test connection between Vyatta and Cisco router

I have 10.86.0.1 on the Vyatta eth1 adapter and 10.86.0.2 on the Cisco router:

@vyatta:~$ ping  10.86.0.2
PING 10.86.0.2 (10.86.0.2) 56(84) bytes of data.
64 bytes from 10.86.0.2: icmp_seq=1 ttl=255 time=3.73 ms
64 bytes from 10.86.0.2: icmp_seq=2 ttl=255 time=1.90 ms
64 bytes from 10.86.0.2: icmp_seq=3 ttl=255 time=5.23 ms
64 bytes from 10.86.0.2: icmp_seq=4 ttl=255 time=4.43 ms
^C
— 10.86.0.2 ping statistics —
4 packets transmitted, 4 received, 0% packet loss, time 3012ms
rtt min/avg/max/mdev = 1.905/3.826/5.230/1.230 ms

That's it for today. In the next posts I will go a little deeper into the Vyatta configuration and establish some IGP and BGP sessions to see how Cisco behaves under different scenarios.