Fundamentals of SIP

The Session Initiation Protocol (SIP) is an IETF-defined signaling protocol, widely used for controlling multimedia communication sessions such as voice and video calls over Internet Protocol (IP). That is the definition of SIP according to Wikipedia. This definition is only the beginning, as SIP is a very complex protocol.

You can check the IETF RFCs, Wikipedia, or Cisco.com for details, but most probably you won't remember much about SIP unless you deal with this protocol daily in your work. As a network engineer you should at least know what SIP is and what it does, so here is a short video that explains SIP fundamentals:



Cisco: Port-channel load-balancing explanation [Part II]

As I promised in Part I of this article, here is the second part, covering the remaining port-channel load-balancing methods. If you don't know what I'm talking about, please make sure you have read the first part of this article. Everything remains the same in this scenario: we have 3 physical interfaces bundled in one port-channel, together with some possible sources and destinations.

In this Part II, I will try to explain the remaining 6 port-channel load-balance methods:

src-ip
src-mac
src-port
dst-mixed-ip-port
src-mixed-ip-port
src-dst-mixed-ip-port



src-ip / src-mac / src-port

I've grouped these 3 methods under one example, as the basic principle is the same: load distribution based on the source IP address / MAC address / port, completely ignoring the destination IP address / MAC address / port.

In the case above, all traffic from Source A (depending on the method, this can be IP address A / MAC address A / port A) is forwarded through physical interface Fa0/1 in Port-channel 1, no matter its destination. Fa0/2 and Fa0/3 are not alternatives in these load-balance methods.
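As a rough illustration (this is my own sketch, not the actual Cisco hardware hashing algorithm), the selection can be pictured as a hash over the source field alone, so the destination has no influence on the chosen member link:

```python
import zlib

LINKS = ["Fa0/1", "Fa0/2", "Fa0/3"]

def pick_link_src_only(src: str) -> str:
    """Pick a bundle member from the source field (IP, MAC or port) alone."""
    return LINKS[zlib.crc32(src.encode()) % len(LINKS)]

# Every flow from Source A rides the same physical interface,
# no matter how many different destinations it talks to.
```

Because the destination never enters the hash, a single busy source can saturate one member link while the other two sit idle.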

dst-mixed-ip-port

Load distribution based on the destination IP address and the TCP or UDP port.

This method offers more granularity, and we see that the scenario becomes more complex, as the process takes into consideration a mix of IP address and TCP/UDP port. We may have the following traffic load-balance scenarios:

– packets from Src A to Dst A and port 80 – Fa0/1 in Po1 – valid alternative
– packets from Src A to the same Dst A, but port 25 – Fa0/2 in Po1 – also valid, because the IP address is the same but the TCP port is different
– packets from Src A to Dst B, same port 25 – Fa0/2 in Po1 – valid, as it is the same port (25) but a different IP address
– packets from Src A to Dst B, port 25 – Fa0/3 in Po1 – not valid, as there is already a path through Fa0/2 for the packets matching this destination and port
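The scenarios above can be sketched as a hash over the (destination IP, destination port) pair; the link names and the hash function below are illustrative only, not the real hardware algorithm:

```python
import zlib

LINKS = ["Fa0/1", "Fa0/2", "Fa0/3"]

def pick_link_dst_mixed(dst_ip: str, dst_port: int) -> str:
    """Hash the destination IP together with the TCP/UDP port."""
    return LINKS[zlib.crc32(f"{dst_ip}:{dst_port}".encode()) % len(LINKS)]

# The same (destination IP, port) pair always maps to the same member
# link; changing either the IP or the port may select another link.
```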

src-mixed-ip-port

Load distribution based on the source IP address and the TCP or UDP port.

This method also offers great granularity in the load-balance process. If we analyze port communication trends, I would say that this method offers more granularity than dst-mixed-ip-port, because source ports are chosen more randomly in communication than destination ports. Here we have the following scenarios:

– Src A and source port 32343 to Dst A – Fa0/1 in Po1 – valid choice
– Src A and source port 32345 to Dst A – Fa0/2 in Po1 – valid choice (same source IP, different source port)
– Src A and source port 32346 to Dst B – Fa0/2 in Po1 – valid choice (same source IP, different source port than the previous example); you might think the destination also differs, but in this method destination IP and port are not taken into consideration
– Src A and source port 32346 to Dst C – Fa0/3 in Po1 – not a valid choice, as the path for this source IP / source port pair is already defined through Fa0/2

src-dst-mixed-ip-port

Load distribution based on the source XOR destination IP address and the TCP or UDP port.

The best granularity so far. Almost every path in Po1 is a valid choice. A path is considered not valid only if a given SRC IP:PORT -> DST IP:PORT pair is already forwarded through one port (say Fa0/2); then Fa0/3 is not a valid choice for the same traffic. Otherwise, there are more possibilities to load balance traffic than in the previous methods. The issue is that not all devices support these methods (especially the last 3), so if your device is not capable of supporting these complex methods, you have to deal with the other ones and choose the best one for your scenario.
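A minimal sketch of the idea, assuming IPv4 addresses and treating the XOR mix literally (the real platform hash is more involved than this):

```python
import ipaddress

def pick_link_src_dst_mixed(src_ip: str, src_port: int,
                            dst_ip: str, dst_port: int,
                            n_links: int = 3) -> int:
    """XOR source and destination (address, port) values, then take the
    result modulo the number of member links (illustrative only)."""
    s = int(ipaddress.ip_address(src_ip)) ^ src_port
    d = int(ipaddress.ip_address(dst_ip)) ^ dst_port
    return (s ^ d) % n_links
```

A nice side effect of the XOR is symmetry: swapping source and destination yields the same result, so both directions of a flow map to the same member link.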

I want to close this article by mentioning one more load-balance method:

port-channel load-balance mpls

This method sets the load-distribution method among the ports in the bundle for Multiprotocol Label Switching (MPLS) packets; to enable it, use the port-channel load-balance mpls command in global configuration mode. I never had the chance to work with this method, so I don't know exactly how it works (just in theory). If anybody has experience with it, I would be glad to add an explanation here (with credits, of course). Otherwise you'll have to wait until I get my hands on this configuration, and then I'll share my knowledge with you.

Virtual WAN Optimization – Blue Coat presentation

Chris Webber from Blue Coat Systems describes the concept of virtualizing WAN Optimization and WAN Acceleration systems. Of course, since Blue Coat Systems is involved, you can consider this video presentation a bit of a marketing exercise, but if you think about it, all companies out there do the same. It's somehow normal.

Skipping the marketing part, this is a good explanation of virtualized WAN Optimization, and you can get an overall view of what it means and how it can be implemented. Information is always welcome, no matter the source, so I would recommend you spend 10 minutes and watch this video.


Brought to you by NetworkWorld.tv and FirstDigest

The difference between 3G and 4G


An excellent explanation by Craig Mathias of what 3G and 4G are, their download speeds, and the different generations of wireless technologies.


Brought to you by NetworkWorld.tv and FirstDigest

Cisco: How can MSS help to solve issues in VPN communication

For a week now, I've been racking my brains to solve a communication problem over a VPN connection. The problem was that connections like SSH over VPN were not completed successfully. Imagine site A (Paris – remote end) and site B (Hamburg – local end).

Behind these sites sit servers and clients. If somebody tried to connect over SSH from a client in site A to a server in site B, the initial authentication was successful, but as soon as a command was typed on the terminal (like ls -la or ps aux) and the server had to return a bunch of results, the SSH console got completely stuck.

Immediately I thought this had to do with the MTU size (default 1500 bytes on each side) and the DF (Don't Fragment) bit being set. I tested SSH from Windows and Linux machines, and every time the DF bit was set:


I think it is obvious why I painted over the IP addresses. The interesting part is that only SSH had this DF bit set. I also tried FTP and a regular ping, and there I could not see the DF bit set.

You can imagine what problems I had. With the MTU being 1500 bytes and the DF bit on, the packet was not fragmented. With the IPsec VPN overhead, the packet could end up at 1604 bytes (together with the point-to-point GRE overhead), so most of the packets got dropped.

If I decreased the MTU size on the client / server interface to, let's say, 1300 bytes, then everything worked fine. However, this was not a scalable solution when you have many clients and servers, so to prevent future issues I kept looking for another solution. Salvation came from the "ip tcp adjust-mss" command.

You probably know what MSS is, but here is a small description. The maximum segment size (MSS) is a TCP option that specifies the largest amount of data, in bytes, that a computer or communications device can receive in a single, unfragmented piece. It does not count the TCP header or the IP header. For optimum communications, the number of bytes in the data segment and the headers must not add up to more than the number of bytes in the maximum transmission unit (MTU).

This Maximum Segment Size (MSS) announcement (often mistakenly called a negotiation) is sent from the data receiver to the data sender and says: "I can accept TCP segments up to X bytes."

Typically, a host bases its MSS value on its outgoing interface's maximum transmission unit (MTU) size: if the MSS value is set too low, the result is inefficient use of bandwidth, as more packets are required to transmit the data; an MSS value that is set too high can result in an IP datagram that is too large to send and must be fragmented. In my case, with an MTU of 1500 bytes, the MSS came out to 1460 bytes (MTU minus 40 bytes of IP and TCP headers). This value didn't help me at all, due to the issues mentioned above.
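The arithmetic behind the fix can be sketched like this; the GRE and IPsec overhead figures below are typical values I'm assuming for illustration, not measurements from the capture above:

```python
# Header sizes in bytes; the tunnel overheads are typical assumed values.
MTU = 1500
IP_HDR = 20
TCP_HDR = 20
GRE_OVERHEAD = 24      # GRE header plus delivery IP header (typical)
IPSEC_OVERHEAD = 56    # ESP tunnel-mode overhead, rough upper bound

# Default MSS derived from the interface MTU:
default_mss = MTU - IP_HDR - TCP_HDR

# MSS that still fits after GRE + IPsec encapsulation is added:
safe_mss = default_mss - GRE_OVERHEAD - IPSEC_OVERHEAD

print(default_mss, safe_mss)  # 1460 1380
```

With assumed overheads like these, anything at or below roughly 1380 bytes keeps the encapsulated packet under the 1500-byte MTU; I rounded down further to 1300 for some margin.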

So, instead of changing every device's MTU size to a value lower than 1500 bytes, I decided to change the MSS to a reasonable value that would solve my problem. On Cisco devices, you can configure the MSS in a few straightforward steps:

Router# configure terminal
Router(config)# interface FastEthernet 0/0
Router(config-if)# ip tcp adjust-mss 1300

The 1300 bytes is just the value from my case. The configurable range is between 500 and 1460 bytes.

Next time you have problems with connections over a VPN, try setting the MSS to a lower value and see if it works. If it does, you're saved. If not… maybe your problem is not related to this topic.

Cisco: Port-channel load-balancing explanation [Part I]

A port-channel (or EtherChannel) is a great way to increase the transport capacity between 2 switches, or between a switch and an end device that supports load balancing (e.g. a server). Today I don't want to focus on how port-channels are configured, but rather on how they load-balance the traffic over the multiple interfaces included in a bundle.

To configure the port-channel load balancing, you have to be in configuration mode and issue:

port-channel load-balance method

or

port-channel load-balance method module slot

where method can be one of the following:

dst-ip
dst-mac
dst-port
src-dst-ip
src-dst-mac
src-dst-port
src-ip
src-mac
src-port
dst-mixed-ip-port
src-dst-mixed-ip-port
src-mixed-ip-port

Not every Cisco device supports all these methods, due to hardware / IOS restrictions. Today I plan to explain the first six of these methods, and the next six in a future post.
Understanding the port-channel load-balance methods is not a difficult topic, but rather a tricky one. That's why I drew some basic scenarios to help you better remember each load-balance method.

I have to admit that for me it was a little bit hard to remember them and how they work, so if I make any mistake, or you don't understand something, please let me know in the comments or via the contact form.

dst-ip

Load distribution based on the destination IP address. It does not take the source of the packets into consideration, only the IP destination. If the destination is the same, the packet will be forwarded over the same wire (port Fa0/1 in the image below):

As you can see, packets from Src to Dst IP A will always take the path through port Fa0/1 in Port-channel 1. In this load-balancing mode, the paths through ports Fa0/2 and Fa0/3 are not valid choices.
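A rough way to picture this method (purely illustrative, not the real Cisco hash) is a hash over the destination address alone:

```python
import zlib

LINKS = ["Fa0/1", "Fa0/2", "Fa0/3"]

def pick_link_dst_ip(dst_ip: str) -> str:
    """Choose a member link from the destination IP address alone."""
    return LINKS[zlib.crc32(dst_ip.encode()) % len(LINKS)]

# Packets from any source to the same destination always get the same
# link; only a different destination IP can select a different link.
```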

dst-mac

Load distribution based on the destination MAC address. If the destination MAC address is reachable only through one of the three ports bundled in the port-channel, then only that interface is considered a path for all packets sent to the same MAC destination:

In the image above, MAC address A is reachable through port Fa0/1, so that port is considered the acceptable path.

dst-port

By the keyword "port" we mean here a TCP or UDP port, not a physical interface. Communication to the same port (e.g. port 80) will be load-balanced over one single physical port (Fa0/1) in Port-channel 1. Data transfer to another port (e.g. port 25) will be directed through another interface (Fa0/2), and so on, in a round-robin manner.

src-dst-ip

Load distribution based on the source XOR destination IP address. What this method does is pair the source and destination IP addresses and send all traffic matching this pair over one physical port in the port-channel. The advantage of this method over the previous one is granularity. With the previous method, traffic to one destination was sent over one physical port in the port-channel without taking the source into consideration. With this method, packets to the same destination can be forwarded over different interfaces in the port-channel if they come from different IP sources.

In the example above, you see that packets with source A and destination A are forwarded over port Fa0/1. For the same destination A but source B, a different path is taken, over Fa0/2. However, from the same source B to the same destination A, Fa0/3 is not a valid path, as the traffic for this pair is already forwarded over Fa0/2.
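Taking the XOR in the method's name literally, a minimal sketch (assuming IPv4 addresses; the real hardware hash is more involved) looks like this:

```python
import ipaddress

def pick_link_src_dst_ip(src_ip: str, dst_ip: str, n_links: int = 3) -> int:
    """XOR the source and destination IPv4 addresses and take the result
    modulo the number of member links (illustrative only)."""
    return (int(ipaddress.ip_address(src_ip))
            ^ int(ipaddress.ip_address(dst_ip))) % n_links

# Different sources to the same destination can land on different links,
# but a given source/destination pair always maps to exactly one link.
```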

src-dst-mac

Here we have the same principle as in src-dst-ip, with the difference that the MAC address, instead of the IP address, is considered the load-balancing element:

src-dst-port

Load distribution based on the source XOR destination port. The port (TCP or UDP) is the element taken into consideration when load-balancing in the port-channel:

Even if this may look like the src-dst-mac and src-dst-ip methods, it is actually different. You see above that it's possible to load balance peer-to-peer communication (PC-to-PC, if you want) if this communication takes place over different ports. In the picture above we have port 25 and port 80 communication involving 2 machines. You can see that the traffic from source A to destination port 80 takes port Fa0/1. From the same source, traffic to destination port 25 takes the path through interface Fa0/2. However, the same communication cannot be sent over interface Fa0/3 (as it's already sent over Fa0/2 as part of the load-balance process). This method adds even more granularity.

Come back for the next six methods of the port-channel load-balance feature.