How to create your own Docker image

I mentioned in my previous post that I’ll explain how to create your own Docker image and customize it however you’d like. While it’s great to just use an image from Docker Hub, sometimes you need a customized image to fit your needs. As I said before, it’s not hard at all to create one, and it’s worth knowing how to do it.

For this tutorial I’ll use a fresh Ubuntu 18.04 minimal installation. You can follow the same steps (or almost the same) on a different Linux distro, Microsoft Windows or macOS. The reason I chose Ubuntu is simply that it’s the distro I’m most familiar with and enjoy working with.

For all the steps below you need to be root or run the commands via sudo. So you’ll see either # at the beginning of the command if you’re root, or $ sudo if you choose to run it with elevated rights.
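For example, updating the package lists looks like this in the two variants:

# apt update
$ sudo apt update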

Install Docker

# apt install -y docker.io

A word of advice here: be sure to type docker.io. If you miss the .io, the system will still install a package called docker, but that’s a different package:

docker/bionic 1.5-1build1 amd64
  System tray for KDE3/GNOME2 docklet applications

You’ll end up with something that cannot be used for what we want to achieve, since the docker command isn’t even there.
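If you did install the wrong package by mistake, it’s easy to fix (assuming nothing else on your system depends on it):

# apt remove -y docker
# apt install -y docker.io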

You can test if the installation completed successfully by using the following command:

# service docker status

You should see something like this in the output:

Docker service successful status

Since this is a new installation, you’ll have no images, no containers, nothing.
You can check, just to be sure.

# docker image ls

The result should be:

Docker image ls return nothing

I’ll add at the end of the post some basic (and most important) Docker commands to get you started.

Pull Ubuntu 18.04 image – Optional step

This step is optional, but I’d advise doing it, just to test that everything is fine with your Docker installation. In this case we’re going to use the official Ubuntu 18.04 minimal Docker image. If you want to read more about this image, you can check the explanation on Ubuntu 18.04 minimal Docker image and their repository on Docker Hub – Ubuntu.

# docker pull ubuntu:18.04

If everything goes well you should see a message ending with “Status: Downloaded newer image for ubuntu:18.04” :

Docker successful download of Ubuntu image

Time to run our first container:

# docker run -i -t ubuntu:18.04 /bin/bash

You should now be in the container shell:

Docker container

Now that we’ve tested it, you can type exit to leave the container.
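If you want to get back into that same container later instead of starting a new one, you can restart and attach to it using its ID (shown by docker ps -a):

# docker start <container ID>
# docker attach <container ID>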

Create Dockerfile

The Dockerfile is nothing more than a text document which contains all the commands a user could call on the command line to create an image.
A detailed explanation is beyond the scope of this post, but if you’d like to learn more, you can check the Docker Documentation – Dockerfile.

Here is a sample that’s good to start with:

# My custom Ubuntu 18.04 with 
# various network tools installed
# Build image with:  docker build -t mycustomlinux01 .


FROM ubuntu:18.04
MAINTAINER Calin C., https://github.com/yotis1982
RUN apt-get update --fix-missing
RUN apt-get upgrade -y
RUN apt-get install -y software-properties-common
RUN apt-get install -y build-essential
RUN apt-get install -y net-tools mtr curl host
RUN apt-get install -y iputils-arping iputils-ping iputils-tracepath
RUN apt-get install -y iproute2
RUN apt-get install -y traceroute
RUN apt-get install -y tcpdump

A short explanation:

# – This is a comment; add here whatever you think is useful. I’ve picked the name “mycustomlinux01”, but you can use whatever you like.
FROM – is always your first instruction, because it names the base image you’re building your new image from.
MAINTAINER – is the creator of the Dockerfile.
RUN – instruction to run the specified command, in this case apt-get to install various packages.

There are more instructions available, such as ADD, COPY, ENV, EXPOSE, LABEL, USER, WORKDIR, VOLUME, STOPSIGNAL, and ONBUILD. You can read all about them in the Docker Documentation – Dockerfile.
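Just to give you an idea of the syntax, here is a small hypothetical fragment (not part of the image we’re building here) that uses a few of them:

# Hypothetical fragment, for illustration only
# (assumes a scripts/ folder next to the Dockerfile)
FROM ubuntu:18.04
LABEL description="example image"
ENV LANG=C.UTF-8
COPY scripts/ /opt/scripts/
WORKDIR /opt/scripts
EXPOSE 8080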

Using RUN you can add whatever packages you need in your custom image, the same as you would on a regular Ubuntu installation.
Yes, all the packages above could have been added in one RUN line, but for the sake of readability I would suggest keeping separate lines.
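For reference, if you do prefer fewer image layers (each RUN creates one), the same installation could be written as a single instruction:

RUN apt-get update --fix-missing && \
    apt-get upgrade -y && \
    apt-get install -y software-properties-common build-essential \
        net-tools mtr curl host \
        iputils-arping iputils-ping iputils-tracepath \
        iproute2 traceroute tcpdump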

Create your custom Docker image

After you save the Dockerfile, it’s time to create your image:

# docker build -t mycustomlinux01 .

You’ll see a lot of output, just like when you’re installing new packages in any Linux distro. When you see the following lines, you’ll know that the image was successfully created:

Docker successful image creation

Let’s check if the image is listed using:

# docker image ls

You should see the mycustomlinux01 image listed:

List my Docker image

Since the image was created successfully, I’d suggest running a container from it, following the same steps as in the “Pull Ubuntu 18.04 image” section.
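The command is the same as before, just with the name of your new image:

# docker run -i -t mycustomlinux01 /bin/bash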

Basically that’s it, you just created your custom image.

As mentioned above, here is a list of commands that I find useful to have at hand when working with Docker containers.

List images:

# docker image ls

Start a container from an image:

# docker run -i -t ubuntu:18.04 /bin/bash

Using an ID (you get the ID from the list images command):

# docker run -i -t 8dbd9e392a96 /bin/bash

List all containers:

# docker ps -a

List running containers:

# docker ps

Attach to a running container:

# docker attach <container ID>

Remove a container:

# docker rm <container ID>
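A couple more that may come in handy:

Stop a running container:

# docker stop <container ID>

Remove an image:

# docker image rm <image name or ID>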

Last but not least: if you liked my Ubuntu 18.04 Docker image customized for network engineers who want to learn Python, and you would like to install additional packages, here is the Dockerfile:

# Ubuntu 18.04 with Python, Paramiko, Netmiko, Ansible
# various other network tools installed and SSH activated
# Build image with:  docker build -t yotis/ubuntu1804-pfne .

FROM ubuntu:18.04
MAINTAINER Calin C., https://github.com/yotis1982
RUN apt-get update --fix-missing
RUN apt-get upgrade -y
RUN apt-get install -y software-properties-common
RUN apt-get install -y build-essential
RUN apt-get install -y openssl libssl-dev libffi-dev
RUN apt-get install -y net-tools mtr curl host socat
RUN apt-get install -y iputils-arping iputils-ping iputils-tracepath
RUN apt-get install -y iproute2
RUN apt-get install -y iptraf-ng traceroute
RUN apt-get install -y tcpdump nmap
RUN apt-get install -y iperf iperf3
RUN apt-get install -y python python-pip python-dev
RUN apt-get install -y python3 python3-pip python3-dev
RUN apt-get install -y openssh-client telnet
RUN apt-get install -y nano
RUN apt-get install -y netcat
RUN pip install --upgrade pip
RUN pip install cryptography
RUN pip install paramiko
RUN pip install netmiko
RUN pip install pyntc
RUN pip install napalm
RUN apt-add-repository ppa:ansible/ansible
RUN apt-get update
RUN apt-get install -y ansible
RUN apt-get clean
VOLUME [ "/root" ]
WORKDIR /root
CMD [ "sh", "-c", "cd; exec bash -i" ]
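To build this image yourself and start a container from it, the commands follow the comment at the top of the Dockerfile:

# docker build -t yotis/ubuntu1804-pfne .
# docker run -i -t yotis/ubuntu1804-pfne /bin/bash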

Obviously there is more to Docker than is covered in this post. It wasn’t in my scope to make a detailed analysis of Docker, but rather a cheatsheet on how to create your custom image. If you want to learn more, there are plenty of resources out there, and a good starting point is the Docker website.

I hope you find this how-to useful. As always, if you need to add something or have questions about it, please use the Comments form to get in contact with me.

New Ubuntu 18.04 Docker image – Python For Network Engineers

About one year ago I created the Ubuntu 16.04 PFNE Docker image. It’s time for a new version of the Ubuntu PFNE Docker image to help network engineers learn Python and test automation.

Recently, Ubuntu announced that the 18.04 LTS version on Docker Hub is using the minimal image.

With this change, when launching a Docker instance using

$ docker run ubuntu:18.04

you’ll have an instance with the latest Minimal Ubuntu.

While this is great, especially if you need to quickly pull an image, the fact remains that it doesn’t come with the tools preinstalled that you need to test network automation, learn Python or run some QoS tests using packages like IPerf.

Based on my previous Ubuntu 16.04 PFNE Docker image, I’ve created the same using the new Ubuntu 18.04 LTS minimal image.

It contains all the tools found in Ubuntu 16.04 PFNE:

  • Openssl
  • Net-tools (ifconfig..)
  • IPutils (ping, arping, traceroute…)
  • IProute
  • IPerf
  • TCPDump
  • NMAP
  • Python 2
  • Python 3
  • Paramiko (python ssh support)
  • Netmiko (python ssh support)
  • Ansible (automation)
  • Pyntc
  • NAPALM

and two new additions:

  • Netcat
  • Socat

I’ve added these two because some blog followers asked me, after reading the Ubuntu image for eve-ng – Python For Network Engineers post, if I could add server installations (web, FTP, etc.) to the image.

Personally, I don’t think it’s necessary to burden the image with these extra packages. You can use tools like Netcat to test against various servers. This is one of the reasons I added Netcat and Socat.
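For example, a quick client/server test with Netcat could look like this: start a listener on one side, then connect to it from the other (the IP address below is just a placeholder for whatever the listening side uses):

nc -l -p 8080
nc 172.17.0.2 8080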

It’s easy for me to add them to this image or future ones (and I’ll do it if I get more requests); however, I’m planning some articles on how to build your own Docker images and add whatever packages you need.

While writing this post, it was time to push the image to Docker Hub :)
Docker push

If you want to test the new Ubuntu 18.04 PFNE Docker image, please pull it from Docker Hub:

$ docker pull yotis/ubuntu1804-pfne

To start it use:

$ docker run -i -t yotis/ubuntu1804-pfne /bin/bash
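Once inside the container, a quick sanity check that the main libraries are in place (in this image the pip-installed libraries target Python 2, so use python rather than python3 for this check):

python -c "import paramiko, netmiko; print('ok')"
ansible --version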

Let me know if you find this useful, happy testing and most important Never Stop Learning!

How to integrate F5 BIG-IP VE with GNS3

I would like to start by saying Merry Christmas and a happy holiday season to all. In between spending time with my family, decorating the Christmas tree and opening presents, I did find some time to play around with my hobby and test something in the lab.

Lately I wanted to get a feel for how F5 BIG-IP works, you know, just to get familiar with its interfaces and rules and to be capable of setting up a basic LTM or APM. Far be it from me to become an expert on the first touch, but it’s nice to discover new technologies.

Besides getting the F5 BIG-IP VE (Virtual Edition), running VMware (ESXi, Player, Fusion or Workstation) and starting the virtual machine, I also wanted to emulate some kind of real environment to test with. So, I built the topology below in GNS3:

F5 BIG-IP Simple setup

Some explanation:

  • Client WIN7 is a VM in VirtualBox and integrated in GNS3
  • WWW Servers are VMs in VirtualBox and integrated in GNS3
  • WIN2008 AD DC is a VM in VirtualBox and integrated in GNS3
  • Routers are emulated in GNS3
  • F5 BIG-IP VE is a VM in VMware Workstation and integrated as a Cloud in GNS3

GNS3 is version 1.2.1, which works perfectly. Why VirtualBox and VMware Workstation? Usually I have no problem keeping my VMs in VirtualBox, but I could not successfully import the F5 BIG-IP VE OVA image into VirtualBox. I had to download a trial version of VMware Workstation to install the OVA image.

If you want to know more about this F5 product, Ethan Banks has a great article about the BIG-IP VE. Please note that Ethan’s article is about getting a lab license for BIG-IP VE. I just went for the trial version. You can download the OVA image and get the license here:
https://www.f5.com/trial/secure/big-ip-ltm-virtual-edition.php

Download the BIG-IP VE OVA image, get a trial license (valid for 90 days) and install it in VMware Workstation. It may work with other VMware products, but in this article I’m using only VMware Workstation.

The part that gave me some headache was how to get network communication working successfully between VMware Workstation and GNS3.

Before GNS3 1.2.1, when I had to use a “cloud” to integrate VirtualBox VMs in GNS3, I would configure a TAP interface and bridge the VM NIC to that TAP interface. Then, on the GNS3 Cloud, I would add the TAP as a Generic Ethernet NIO under NIO Ethernet. If you want to refresh this in more detail, please read my article about How to integrate GNS3 with VirtualBox.

Unfortunately, in VMware Workstation, I cannot just bridge a VMnet interface to a TAP and use that specific VMnet in a VM. I just could not make it work.

To cut it short, here are the steps that I had to follow to have this working. I assume that you have VMware Workstation installed already. Another detail is that I’m using Ubuntu 14.04 to test the entire scenario.

1. Add two VMnet interfaces in VMware Workstation Virtual Network Editor

Use the image below to have an idea what I mean.

Virtual Network Editor

2. Configure the BIG-IP VE NICs as follows in VMware Workstation

I assume that you have the BIG-IP VE OVA imported in VMware Workstation

BIG-IP VE NIC

I had 4 NICs originally, but I only need three:

  • VMnet0 is bridged to my real LAN interface so I can manage the F5 BIG-IP VE over the Web / CLI interfaces
  • VMnet11 – one “internal” interface facing LAN (server side)
  • VMnet22 – one “external” interface facing WAN (client side)

3. Configure two tap interfaces for F5 BIG-IP VE to be used in GNS3

11 – internal, 22 – external

sudo tunctl -u user -t tap11
sudo tunctl -u user -t tap22

*user = the non-root user that you use on the Ubuntu host.

If you are having problems finding the tunctl command, please do the following:

sudo apt-get install uml-utilities bridge-utils

Bring the interfaces up

sudo ifconfig tap11 up
sudo ifconfig tap22 up

4. Remove the IP addresses on both TAP and VMnet interfaces

sudo ifconfig tap11 0.0.0.0 promisc up
sudo ifconfig tap22 0.0.0.0 promisc up
sudo ifconfig vmnet11 0.0.0.0 promisc up
sudo ifconfig vmnet22 0.0.0.0 promisc up

5. Bridge the TAP and the VMnet interfaces

sudo brctl addbr br11
sudo brctl addif br11 tap11
sudo brctl addif br11 vmnet11
sudo brctl addbr br22
sudo brctl addif br22 tap22
sudo brctl addif br22 vmnet22

Bring the bridge interfaces up

sudo ifconfig br11 up
sudo ifconfig br22 up
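You can verify that each bridge contains the right pair of interfaces with:

brctl show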

6. Add the F5 BIG-IP VE to GNS3

While with GNS3 1.2.1 you can add VirtualBox VMs directly, for VMware Workstation (Player, Fusion, etc…) VMs you still need to use the Cloud object.

My GNS3 for F5 topology looks like this:

F5 topology in GNS3

And the GNS3 Cloud (representing the F5 BIG-IP VE) settings are the following:

F5 GNS3 Cloud settings

7. Connect the GNS3 Cloud interfaces to R1 and R2

As shown in the image above, connect the TAP interfaces of the Cloud to the peer routers.

I’m running all applications (GNS3, VMware Workstation, VirtualBox) as a non-root user. If you’re doing the same, an error may occur in GNS3. Something like:

Server error [-3200] from x.x.x.x:8000: R1: unable to create TAP NIO

If this is the case, please run the following command on Ubuntu host:

sudo setcap cap_net_admin,cap_net_raw=ep /usr/local/bin/dynamips

This will help you set up the environment to test F5 BIG-IP VE in a fully virtualized lab. I’m not going to cover here how to configure the F5 BIG-IP VE itself. Maybe in one of my next articles.

If you encounter problems, please let me know in Comments.

GNS3 1.2.1 installation on Ubuntu 14.04

As mentioned in an earlier post, GNS3 is moving ahead fast. Currently at version 1.2.1, GNS3 is looking great. Compared with version 1.0 Beta 1, which I had installed before, 1.2.1 is not only more stable, but its menus are also cleaner and more compact. For example, there is now only one Preferences menu where you can adjust all your settings.

During the installation of 1.0 Beta 1 I made some notes in Evernote, and they proved to be very useful as the installation was pretty messy. With 1.2.1 I did the same thing, but the installation was very smooth. Still, since I made those notes, I thought I should share them for those interested in a quick installation. A more complete guide can be found on the GNS3 Community.

1. Download GNS3 1.2.1

Head over to http://www.gns3.com/, create an account and download the bundle archive for Linux.

If for some reason you don’t want to create an account, you may download each package individually from https://github.com/GNS3

The following lines will assume that you have the bundle archive.

2. Install Ubuntu 14.04 dependencies

$ sudo apt-get install libpcap-dev uuid-dev libelf-dev cmake
$ sudo apt-get install python3-setuptools python3-pyqt4 python3-ws4py python3-netifaces python3-zmq python3-tornado
$ sudo apt-get install unzip 

3. Unzip the bundle archive

$ unzip GNS3-1.2.1.source.zip

You should see 5 packages in GNS3-1.2.1 folder:
dynamips-0.2.14.zip
gns3-server-1.2.1.zip
gns3-gui-1.2.1.zip
iouyap-0.95.zip
vpcs-0.6.zip

4. Install Dynamips

$ unzip dynamips-0.2.14.zip
$ cd dynamips-0.2.14
$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install

To check if the correct version is installed:

$ dynamips | grep version

You should see in the output 0.2.14

5. Install GNS3 Server

$ unzip gns3-server-1.2.1.zip
$ cd gns3-server-1.2.1
$ sudo python3 setup.py install

To check if the GNS3 Server is installed correctly:

$ gns3server

If you see some output other than an error, then you’re fine.

6. Install GNS3 GUI

$ unzip gns3-gui-1.2.1.zip
$ cd gns3-gui-1.2.1
$ sudo python3 setup.py install

To test if the installation is working:

$ gns3

You should see a graphical interface of GNS3 launched.

At this moment you have a working GNS3 environment if you only want to test Cisco hardware emulation. I strongly recommend continuing and installing the rest of the components as well. Who knows when you’ll need them.

7. Install IOUyap (Optional, if you will use IOU images)

$ unzip iouyap-0.95.zip
$ cd iouyap-0.95
$ make
$ sudo make install

To test the installation:

$ iouyap -h

If you encounter an error, please check the [Update 1] section at the bottom of this article.

8. Install VPCS (Optional, if you want to use VirtualPC)

$ unzip vpcs-0.6.zip
$ cd vpcs-0.6/src
$ ./mk.sh 64
$ sudo cp vpcs /usr/bin/

For the third line, the 64 represents 64-bit, as my Ubuntu 14.04 is a 64-bit build.
The values can be:
– 32 or i386 for 32bit OS
– 64 or amd64 for 64bit OS

Please be sure to use the correct one for your OS.
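If you’re not sure which one you have, a quick check is:

$ uname -m

x86_64 means a 64-bit OS (use 64), while i686 or i386 means 32-bit (use 32).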

To test the VPCS:

$ vpcs

You should see a Virtual PC being launched. Leave the console with the letter q.

9. Install VirtualBox (Optional, if you want to launch VMs)

Download the correct version for your system from https://www.virtualbox.org/wiki/Linux_Downloads. The following lines will assume an Ubuntu 14.04 64bit OS.

$ sudo apt-get install dkms
$ sudo dpkg -i virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb

You can also use the instructions at https://www.virtualbox.org/wiki/Linux_Downloads and go for an APT installation. The choice is yours.

10. Install Qemu (Optional, if you want to use qemu images)

$ sudo apt-get install qemu

11. Install IOU (Optional, if you want to use IOU images)

I’m not a legal matters expert, and the usage of IOU is a sort of grey area. Because of this, I’m not going to cover this chapter.

You’re ready to go. Start the GNS3 GUI:

$ gns3

Some things to check before going live:

  • check the menu Edit > Preferences to set your desired paths (in the General section) and to verify the paths to the binaries (dynamips, vpcs, iou, virtualbox…)
  • add your IOS, VirtualBox VM and IOU images
  • in the case of Cisco hardware emulators, don’t forget to find the IdlePC value (when you add the IOS image, or later when you start your first router with a certain image), otherwise your CPUs will cry.

If something does not work as described or you need help please let me know in Comments.

[Update 1]

If you get the following error during installation of iouyap:

GNS3-1.2.2.source/iouyap-0.95 $ make
gcc -g -DDEBUG -Wall -c -o iouyap.o iouyap.c
iouyap.c:40:23: fatal error: iniparser.h: No such file or directory
 #include <iniparser.h>
                       ^
compilation terminated.
make: *** [iouyap.o] Error 1

Try to install the iniparser as follows:

sudo apt-get install flex bison

then

cd /tmp
curl -L https://github.com/ndevilla/iniparser/archive/master.tar.gz | tar -xz
cd iniparser*
make

and finally iouyap

cd /tmp
curl -L https://github.com/GNS3/iouyap/archive/master.tar.gz | tar -xz
cd iouyap*
bison -ydv netmap_parse.y
flex netmap_scan.l
gcc -Wall *.c -I /tmp/iniparser*/src -L /tmp/iniparser* -o iouyap -liniparser -lpthread
strip --strip-unneeded iouyap
sudo mv iouyap /usr/local/bin

Thanks to mweisel @ forum.gns3.net for this update!

vCSA Web Management Network error

A few days ago I installed two additional NICs in the server that hosts the virtual machine for vCenter Server Appliance (vCSA).

After the NIC installation, the Management web interface for vCSA was showing some strange errors (see images below).

Safari:

vCenter Server Appliance

Firefox:

vCenter Server Appliance

I added two images to show you that the error is almost the same and not browser related.

Next I went online and tried to find a way to fix this issue. Among other stuff I also updated the vCSA, but unfortunately nothing helped.

Finally, after a lot of research, I found the trouble to be caused not by the VMware code, but by something in the SUSE Linux OS (on which vCSA is built). Apparently I had to manually add the new NICs’ configuration in SUSE:

vi /etc/sysconfig/networking/devices/ifcfg-eth2

Add the following lines:

DEVICE=eth2
BOOTPROTO='static'
STARTMODE='auto'
TYPE=Ethernet
USERCONTROL='no'
IPADDR='10.0.0.35'
NETMASK='255.255.255.0'
BROADCAST='10.0.0.255'

Then add a symbolic link in the right place:

ln -s /etc/sysconfig/networking/devices/ifcfg-eth2 /etc/sysconfig/network/ifcfg-eth2

You need this configuration for each one of your new NICs. Of course, you need to adapt it (device name, IP address) for each NIC (eth1, eth2…).
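To apply the change without rebooting the appliance, you should be able to bring the new interface up with the standard SUSE network scripts (a full network service restart works too):

ifup eth2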

It looks better now:

vCenter Server Appliance

There may be an easier way to fix this problem, but for me, the above solution worked just fine. If you encounter this error and fix it in another way, please feel free to let me know.