Deploying ELK Stack with Docker Compose (2025 Edition)

This guide walks you through installing and configuring the ELK Stack (Elasticsearch, Logstash, Kibana, and Filebeat) using Docker Compose. It is fully updated for Elasticsearch 9.0.2 and explains the changes required for versions 8 and above, including the mandatory security setup and user permissions.

Prerequisites

Ensure you have the following installed on your system:

  • Docker: Install Docker Engine or Docker Desktop.
  • Docker Compose: Typically included with Docker Desktop; otherwise, install it separately

The folder structure for this project is as follows:

elk-docker/
|-- .env
|-- docker-compose.yml
|-- logstash/
|   |-- pipeline/
|       |-- logstash.conf
|-- filebeat/
|   |-- filebeat.yml

Start by creating your root folder structure:

mkdir elk-docker && cd elk-docker
mkdir -p logstash/pipeline
mkdir filebeat

Step 1: Prepare Your .env File

Create a .env file in the root of your project directory:

ELASTIC_USER=myadmin
ELASTIC_PASSWORD=mypassw0rd!

For simplicity in this lab setup, all components share the same password. Do not use this setup in production.

Step 2: docker-compose.yml

Create a docker-compose.yml file:

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:9.0.2
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.type=single-node
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    networks:
      - elk

  kibana:
    image: docker.elastic.co/kibana/kibana:9.0.2
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=${ELASTIC_USER}
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:9.0.2
    container_name: logstash
    environment:
      - xpack.monitoring.enabled=true
      - ELASTIC_USERNAME=${ELASTIC_USER}
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - "5044:5044"
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    depends_on:
      - elasticsearch
    networks:
      - elk

  filebeat:
    image: docker.elastic.co/beats/filebeat:9.0.2
    container_name: filebeat
    user: root
    environment:
      - ELASTICSEARCH_USERNAME=${ELASTIC_USER}
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    depends_on:
      - logstash
    networks:
      - elk
    command: ["--strict.perms=false"]

volumes:
  esdata:

networks:
  elk:
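
Before moving on, you can ask Docker Compose to validate the file and render it with the values from .env substituted. This is just a quick sanity check on my part, not something the Elastic documentation requires:

docker compose config            # renders the full file and reports syntax errors
docker compose config --services # lists only the service names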

Step 3: Logstash Pipeline

Create the Logstash pipeline file at ./logstash/pipeline/logstash.conf:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "myadmin"
    password => "mypassw0rd!"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
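
If you want to verify the pipeline syntax before bringing up the full stack, Logstash has a --config.test_and_exit flag. A one-off run against the same image and mount used in the compose file would look roughly like this (a sketch; adjust the path if your layout differs):

docker run --rm \
  -v "$(pwd)/logstash/pipeline:/usr/share/logstash/pipeline:ro" \
  docker.elastic.co/logstash/logstash:9.0.2 \
  --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf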

Step 4: Filebeat Config

Create the Filebeat config at ./filebeat/filebeat.yml with this content:

filebeat.inputs:
  - type: filestream
    id: my-container-logs
    enabled: true
    paths:
      - /var/lib/docker/containers/*/*.log
    parsers:
      - container:
          stream: all
          format: docker

output.logstash:
  hosts: ["logstash:5044"]

setup.kibana:
  host: "kibana:5601"
  username: "${ELASTICSEARCH_USERNAME}"
  password: "${ELASTICSEARCH_PASSWORD}"

# Optional for debugging
logging.level: info
logging.to_files: false

Just a note: in this setup Filebeat sends its output to Logstash (not directly to Elasticsearch), because I generally use ELK with NetFlow and want to do more complex log parsing in Logstash.

If you choose to send Filebeat output straight to Elasticsearch, just adapt the filebeat.yml file; the docker-compose.yml is already prepared for it.
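
For reference, a minimal sketch of that adaptation (reusing the credentials already passed to the Filebeat container by the compose file; I have not used this variant in this lab) would be to replace the output.logstash block with:

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  username: "${ELASTICSEARCH_USERNAME}"
  password: "${ELASTICSEARCH_PASSWORD}"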

Step 5: Start Elasticsearch Container Before Creating Users and Roles (Temporary Step)

Before creating the custom user and roles, you must start at least the Elasticsearch container once to enable the security APIs.

Run this command in your project directory:

docker compose up -d elasticsearch

If this is the first time starting the container, Docker will first download the elasticsearch image.

Wait a minute for Elasticsearch to fully start and be ready to accept API calls. You can check logs with:

docker logs -f elasticsearch

Only after Elasticsearch is up and running, proceed with creating roles and users in the next step.
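
A quick way to check that it is ready is to query the root endpoint with the built-in elastic user and the password from your .env file:

curl -u elastic:mypassw0rd! http://localhost:9200

A JSON document with the cluster name and version number means Elasticsearch is up.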

Step 6: Create the Custom User and Roles

As of version 8+, Kibana no longer allows login with the elastic user for Fleet and integrations. Instead, you must create a new superuser with additional privileges.

1. Create a new role that allows access to restricted indices:

curl -X POST http://localhost:9200/_security/role/kibana_system_access \
  -u elastic:mypassw0rd! \
  -H "Content-Type: application/json" \
  -d '{
    "cluster": ["all"],
    "indices": [
      {
        "names": [".kibana*", ".apm*"],
        "privileges": ["all"],
        "allow_restricted_indices": true
      }
    ]
  }'

2. Create a new superuser myadmin:

curl -X POST http://localhost:9200/_security/user/myadmin \
  -u elastic:mypassw0rd! \
  -H "Content-Type: application/json" \
  -d '{
    "password": "mypassw0rd!",
    "roles": ["superuser", "kibana_system_access"],
    "full_name": "Lab Admin"
  }'

Both commands should return a JSON response containing "created":true among other details, which indicates that the role and user were created successfully.
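
You can also verify that the new user authenticates correctly before launching the rest of the stack:

curl -u myadmin:mypassw0rd! http://localhost:9200/_security/_authenticate

The response should list the myadmin username together with the superuser and kibana_system_access roles.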

Step 7: Launch the Full Stack

Now that the users and roles are created, start all services:

docker compose up -d

Access Kibana at:

http://localhost:5601

If you access the lab from another machine, use the IP address of your Docker host:

http://docker_host_ip:5601

Use credentials:

  • Username: myadmin
  • Password: mypassw0rd!
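
Once you are logged in, you can also confirm that logs are flowing from Filebeat through Logstash into Elasticsearch by listing the indices created by the pipeline:

curl -u myadmin:mypassw0rd! "http://localhost:9200/_cat/indices/logstash-*?v"

Give the stack a minute or two after startup; the first logstash-YYYY.MM.dd index appears only after Filebeat ships its first events.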

Notes

  • Why not use the elastic user? Since v8+, the elastic user is intended for initial setup and API use only — not for logging into Kibana.
  • Why grant access to restricted indices? Kibana uses internal system indices like .kibana* and .apm* that are restricted by default. Your user must explicitly have permissions to manage these indices.
  • Why is security mandatory? Features such as Fleet, Integrations, and Kibana dashboards require security to be enabled, meaning all services must use authenticated users.

phpIPAM in Docker with Nginx reverse-proxy

I had a bit of a problem with this setup, serving phpIPAM via an Nginx reverse proxy, so I decided to share the solution that works for me; maybe it will help somebody out there.

I installed phpIPAM as Docker container following the instructions here: https://github.com/phpipam-docker/phpipam-docker.

Using it via plain HTTP was working OK, but I wanted to use HTTPS for various reasons. Security is important, but since this is a home-lab type of deployment, I wasn’t that concerned about somebody “sniffing” my plain HTTP traffic. The annoying part is that I use a Chromium-based browser which insists on upgrading HTTP to HTTPS, even when I type the URL with “http://ipam…”

I installed Nginx (on a different machine) and did a basic reverse proxy configuration using some self-signed certificates. And here the problems started. I will not bore you with all the details, but the redirection was not working well: either it failed altogether or the page appeared broken, with CSS not rendering correctly and other issues.

Here is what I had to do for a working solution.

On the Docker part (I assume you followed the phpIPAM Docker installation above or you’re familiar with the containerization solution), I had to add the following in the .env file:

 - IPAM_DATABASE_HOST=phpipam-mariadb
 - IPAM_DATABASE_PASS=my_secret_phpipam_pass
 - IPAM_DATABASE_WEBHOST=%
 - TZ=yourtimezone
 - PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
 - IPAM_TRUST_X_FORWARD=yes
 - IPAM_DISABLE_INSTALLER=1

Not every line above is relevant for solving the reverse proxy issue, but I chose to share everything I have there. The IPAM_TRUST_X_FORWARD setting is the important one for this topic.
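
For context, these variables end up in the environment of the phpIPAM web container. A minimal sketch of the relevant compose fragment (service and image names follow the phpipam-docker examples; adapt to your own file):

services:
  phpipam-web:
    image: phpipam/phpipam-www:latest
    ports:
      - "81:80"   # matches the port the reverse proxy points at below
    environment:
      - IPAM_TRUST_X_FORWARD=yes
      - IPAM_DATABASE_HOST=phpipam-mariadb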

Below is what I have in the Nginx config file:

server {
    listen 9443 ssl; # Change to whatever port you're using here
    server_name ipam.home.lab; # replace with your domain

    ssl_certificate /etc/ssl/private/ipam.home.lab.crt;
    ssl_certificate_key /etc/ssl/private/ipam.home.lab.key;

    location / {
        proxy_pass http://phpipam-host.home.lab:81; # Replace with your IP / FQDN and port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;

        # Add WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Optionally, you can add additional configurations like error pages or logging here
}
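
After saving the file, test the configuration and reload Nginx (standard commands on a systemd-based host):

sudo nginx -t
sudo systemctl reload nginx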

I haven’t noticed any issues yet using the setup/configuration illustrated above. Let me know if you find this information useful.

LFNE GNS3 Appliances

This post will be a very short one, more like a note :)

Based on the LFNE Docker images (explained here https://ipnet.xyz/2023/11/lfne-linux-for-network-engineers) I’ve created the GNS3 Appliances for easy import into GNS3.

The GNS3 Appliances can be downloaded here https://github.com/yotis1982/lfne and imported into GNS3.

Have fun!

LFNE – Linux For Network Engineers

Formerly known as PFNE – Python For Network Engineers, these images have developed into more than just a Python learning environment. I chose a more generic name for the new one and picked Linux For Network Engineers (LFNE).

These are Linux images built with all the tools a network engineer needs to perform various tasks, ranging from simple Python scripts to automation and testing.
Below is the list of installed applications on the LFNE images. Pull one and start experimenting.

I’m using two main distributions to build these images – Ubuntu and AlmaLinux – so pick your favorite flavor. I chose AlmaLinux as it is the closest distribution to the now (almost) defunct CentOS.

LFNE based on Ubuntu 22.04

Pull the image:
docker pull yotis/lfne:ubuntu-22.04
Use the image:
docker run -i -t yotis/lfne:ubuntu-22.04 /bin/bash
If used with Portainer, don’t forget to activate the Console option: Interactive & TTY

LFNE based on AlmaLinux 9.2

Pull the image:
docker pull yotis/lfne:almalinux-9.2
Use the image:
docker run -i -t yotis/lfne:almalinux-9.2 /bin/bash
If used with Portainer, don’t forget to activate the Console option: Interactive & TTY

Some of the installed packages:

Openssl
Net-tools (ifconfig…)
IPutils (ping, arping, traceroute…)
Socat
Host (DNS lookup tool)
Mtr (traceroute tool)
Telnet / SSH client
IProute2
IPerf (traffic generator)
TCPDump
Nmap
Python 2 (only on Ubuntu variant)
Python 3
Paramiko
Netmiko
Ansible
Pyntc
Napalm
Openssh Server

To use a remote SSH connection to the container:

  • Enable it with “service ssh start”
  • Expose the desired port of the container (tcp/22 by default); see the example below
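
A minimal example, assuming the Ubuntu variant (host port 2222 is just my choice; the user and password depend on what you configure inside the container):

# publish tcp/22 of the container on port 2222 of the Docker host
docker run -i -t -p 2222:22 yotis/lfne:ubuntu-22.04 /bin/bash
# inside the container, start the SSH daemon
service ssh start
# from another machine, connect to the published port
ssh -p 2222 your_user@docker_host_ip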

MicroStack installation fails on Ubuntu 20.04

I needed an instance of OpenStack in my home lab for some tests, and the first attempt was to deploy it with DevStack all-in-one. It is one of the most common methods out there. However, it kept failing (I still need to find out why), so I turned to MicroStack.

MicroStack describes itself as the most straightforward way to install OpenStack. I’m not saying this is the way to go for an enterprise-grade installation, but it will do if you want something simple, like one or two nodes for testing or learning purposes.

MicroStack uses two commands to get an OpenStack instance up and running:

sudo snap install microstack --beta
sudo microstack init --auto --control

You can read a more detailed how-to on the Ubuntu or MicroStack pages. One note: the entire project is still in beta.

I tried deploying multiple times on a fresh Ubuntu 20.04 installation and every time I ended up with the error below. I’m adding the entire output, just in case you encounter an error at a certain installation stage and want to check whether it is the same as mine:

sudo microstack init --auto --control
2022-11-02 20:21:19,950 - microstack_init - INFO - Configuring clustering ...
2022-11-02 20:21:20,454 - microstack_init - INFO - Setting up as a control node.
2022-11-02 20:21:24,066 - microstack_init - INFO - Generating TLS Certificate and Key
2022-11-02 20:21:26,187 - microstack_init - INFO - Configuring networking ...
2022-11-02 20:21:42,675 - microstack_init - INFO - Opening horizon dashboard up to *
2022-11-02 20:21:43,807 - microstack_init - INFO - Waiting for RabbitMQ to start ...
Waiting for 172.31.82.163:5672
2022-11-02 20:21:56,629 - microstack_init - INFO - RabbitMQ started!
2022-11-02 20:21:56,629 - microstack_init - INFO - Configuring RabbitMQ ...
2022-11-02 20:21:58,753 - microstack_init - INFO - RabbitMQ Configured!
2022-11-02 20:21:58,953 - microstack_init - INFO - Waiting for MySQL server to start ...
Waiting for 172.31.82.163:3306
2022-11-02 20:23:08,775 - microstack_init - INFO - Mysql server started! Creating databases ...
2022-11-02 20:23:14,509 - microstack_init - INFO - Configuring Keystone Fernet Keys ...
2022-11-02 20:26:07,658 - microstack_init - INFO - Bootstrapping Keystone ...
2022-11-02 20:26:21,999 - microstack_init - INFO - Creating service project ...
2022-11-02 20:26:27,938 - microstack_init - INFO - Keystone configured!
2022-11-02 20:26:28,257 - microstack_init - INFO - Configuring the Placement service...
2022-11-02 20:26:49,572 - microstack_init - INFO - Running Placement DB migrations...
2022-11-02 20:27:09,282 - microstack_init - INFO - Configuring nova control plane services ...
2022-11-02 20:27:22,369 - microstack_init - INFO - Running Nova API DB migrations (this may take a lot of time)...
2022-11-02 20:29:02,089 - microstack_init - INFO - Running Nova DB migrations (this may take a lot of time)...
Waiting for 172.31.82.163:8774
2022-11-02 20:39:31,994 - microstack_init - INFO - Creating default flavors...
2022-11-02 20:39:59,738 - microstack_init - INFO - Configuring nova compute hypervisor ...
2022-11-02 20:39:59,738 - microstack_init - INFO - Checking virtualization extensions presence on the host
2022-11-02 20:39:59,756 - microstack_init - WARNING - Unable to determine hardware virtualization support by CPU vendor id "GenuineIntel": assuming it is not supported.
2022-11-02 20:39:59,756 - microstack_init - WARNING - Hardware virtualization is not supported - software emulation will be used for Nova instances
2022-11-02 20:40:06,690 - microstack_init - INFO - Configuring the Spice HTML5 console service...
2022-11-02 20:40:08,564 - microstack_init - INFO - Configuring Neutron
Waiting for 172.31.82.163:9696
Traceback (most recent call last):
  File "/snap/microstack/245/bin/microstack", line 11, in <module>
    load_entry_point('microstack==0.0.1', 'console_scripts', 'microstack')()
  File "/snap/microstack/245/lib/python3.8/site-packages/microstack/main.py", line 44, in main
    cmd()
  File "/snap/microstack/245/lib/python3.8/site-packages/init/main.py", line 60, in wrapper
    return func(*args, **kwargs)
  File "/snap/microstack/245/lib/python3.8/site-packages/init/main.py", line 228, in init
    question.ask()
  File "/snap/microstack/245/lib/python3.8/site-packages/init/questions/question.py", line 210, in ask
    self.yes(awr)
  File "/snap/microstack/245/lib/python3.8/site-packages/init/questions/__init__.py", line 887, in yes
    check('openstack', 'network', 'create', 'test')
  File "/snap/microstack/245/lib/python3.8/site-packages/init/shell.py", line 69, in check
    raise subprocess.CalledProcessError(proc.returncode, " ".join(args))
subprocess.CalledProcessError: Command 'openstack network create test' returned non-zero exit status 1.

I did some research and found some hints about the need to manually install Python on a fresh Ubuntu 20.04 instance:

sudo apt install python python-dev

After installing Python, everything worked like a charm:

sudo microstack init --auto --control
# Skipped text #
2022-11-02 21:18:18,159 - microstack_init - INFO - Configuring the Spice HTML5 console service...
2022-11-02 21:18:19,503 - microstack_init - INFO - Configuring Neutron
Waiting for 172.31.82.163:9696
2022-11-02 21:19:21,615 - microstack_init - INFO - Configuring Glance ...
Waiting for 172.31.82.163:9292
2022-11-02 21:20:53,119 - microstack_init - INFO - Adding cirros image ...
2022-11-02 21:20:57,002 - microstack_init - INFO - Creating security group rules ...
2022-11-02 21:21:09,046 - microstack_init - INFO - Configuring the Cinder services...
2022-11-02 21:22:10,868 - microstack_init - INFO - Running Cinder DB migrations...
2022-11-02 21:23:31,155 - microstack_init - INFO - restarting libvirt and virtlogd ...
2022-11-02 21:23:42,260 - microstack_init - INFO - Complete. Marked microstack as initialized!

For some reason the MicroStack initialization process doesn’t detect the Python installation, or rather the lack of it.

If you have this error during installation, let me know if manual Python installation does the job.
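
If the init completes like in the output above, a quick sanity check is to run a couple of commands through the OpenStack client bundled with the snap (microstack.openstack, if I remember the wrapper name correctly):

microstack.openstack image list
microstack.openstack network list

Seeing the cirros image that was added during init means the control node is functional.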