Deploying ELK Stack with Docker Compose (2025 Edition)

This guide walks you through installing and configuring the ELK Stack (Elasticsearch, Logstash, Kibana, and Filebeat) using Docker Compose. It is fully updated for Elasticsearch 9.0.2 and explains the changes required from version 8 onward, including the mandatory security setup and user permissions.

Prerequisites

Ensure you have the following installed on your system:

  • Docker: Install Docker Engine or Docker Desktop.
  • Docker Compose: Typically included with Docker Desktop; otherwise, install it separately.

The folder structure is as follows:

elk-lab/
|-- .env
|-- docker-compose.yml
|-- logstash/
|   |-- pipeline/
|       |-- logstash.conf
|-- filebeat/
|   |-- filebeat.yml

Start by creating your root folder structure:

mkdir elk-lab && cd elk-lab
mkdir -p logstash/pipeline
mkdir filebeat

Step 1: Prepare Your .env File

Create a .env file in the root of your project directory:

ELASTIC_USER=myadmin
ELASTIC_PASSWORD=mypassw0rd!

For simplicity in this lab setup, all components share the same password. Do not use this setup in production.

Step 2: docker-compose.yml

Create a docker-compose.yml file:

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:9.0.2
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.type=single-node
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    networks:
      - elk

  kibana:
    image: docker.elastic.co/kibana/kibana:9.0.2
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=${ELASTIC_USER}
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:9.0.2
    container_name: logstash
    environment:
      - xpack.monitoring.enabled=true
      - ELASTIC_USERNAME=${ELASTIC_USER}
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - "5044:5044"
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    depends_on:
      - elasticsearch
    networks:
      - elk

  filebeat:
    image: docker.elastic.co/beats/filebeat:9.0.2
    container_name: filebeat
    user: root
    environment:
      - ELASTICSEARCH_USERNAME=${ELASTIC_USER}
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    depends_on:
      - logstash
    networks:
      - elk
    command: ["--strict.perms=false"]

volumes:
  esdata:

networks:
  elk:
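
Before moving on, you can ask Compose to render the effective configuration; it is a quick way to confirm that the values from .env are substituted correctly:

docker compose config

If a variable is missing from .env, Compose prints a warning and substitutes an empty string.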

Step 3: Logstash Pipeline

Create the Logstash pipeline file at ./logstash/pipeline/logstash.conf:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "myadmin"
    password => "mypassw0rd!"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
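
If you want to sanity-check the pipeline syntax before the whole stack is up, the Logstash image can validate a config file and exit. A quick sketch, reusing the same image and bind mount as in the Compose file (--config.test_and_exit is a standard Logstash flag):

docker run --rm \
  -v "$(pwd)/logstash/pipeline:/usr/share/logstash/pipeline" \
  docker.elastic.co/logstash/logstash:9.0.2 \
  --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf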

Step 4: Filebeat Config

Create the Filebeat config at ./filebeat/filebeat.yml with this content:

filebeat.inputs:
  - type: filestream
    id: my-container-logs
    enabled: true
    paths:
      - /var/lib/docker/containers/*/*.log
    parsers:
      - container: ~

output.logstash:
  hosts: ["logstash:5044"]

setup.kibana:
  host: "kibana:5601"
  username: "${ELASTICSEARCH_USERNAME}"
  password: "${ELASTICSEARCH_PASSWORD}"

# Optional for debugging
logging.level: info
logging.to_files: false

Just a note: in this setup Filebeat sends its output to Logstash (not Elasticsearch), because I generally use ELK with NetFlow and need more complex log parsing.

If you choose to send Filebeat output directly to Elasticsearch instead, just adapt the filebeat.yml file; the docker-compose.yml is already prepared for it.
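
Either way, Filebeat ships with test subcommands that are handy here. A sketch using the Compose service defined above (test config works standalone; test output checks the connection to Logstash, so run it once the stack is up):

docker compose run --rm filebeat test config --strict.perms=false
docker compose run --rm filebeat test output --strict.perms=false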

Step 5: Start Elasticsearch Container Before Creating Users and Roles (Temporary Step)

Before creating the custom user and roles, you must start at least the Elasticsearch container once to enable the security APIs.

Run this command in your project directory:

docker compose up -d elasticsearch

If this is the first time starting the container, Docker will first download the elasticsearch image.

Wait a minute for Elasticsearch to fully start and be ready to accept API calls. You can check logs with:

docker logs -f elasticsearch

Only after Elasticsearch is up and running, proceed with creating roles and users in the next step.
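
One quick readiness check is to query the cluster health API with the built-in elastic user and the password from your .env (a yellow status is normal for a single-node setup):

curl -u elastic:mypassw0rd! "http://localhost:9200/_cluster/health?pretty"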

Step 6: Create the Custom User and Roles

As of version 8, Kibana no longer allows login with the elastic user for Fleet and integrations. Instead, you must create a new superuser with additional privileges.

  1. Create a new role that allows access to restricted indices:
curl -X POST http://localhost:9200/_security/role/kibana_system_access \
  -u elastic:mypassw0rd! \
  -H "Content-Type: application/json" \
  -d '{
    "cluster": ["all"],
    "indices": [
      {
        "names": [".kibana*", ".apm*"],
        "privileges": ["all"],
        "allow_restricted_indices": true
      }
    ]
  }'

  2. Create a new superuser myadmin:

curl -X POST http://localhost:9200/_security/user/myadmin \
  -u elastic:mypassw0rd! \
  -H "Content-Type: application/json" \
  -d '{
    "password": "mypassw0rd!",
    "roles": ["superuser", "kibana_system_access"],
    "full_name": "Lab Admin"
  }'

Each command normally returns output containing {"created":true} among other details, indicating that the role and user were created successfully.
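
You can double-check the result by retrieving the new user with the GET variant of the same security API; the response should list both the superuser and kibana_system_access roles:

curl -u elastic:mypassw0rd! "http://localhost:9200/_security/user/myadmin?pretty"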

Step 7: Launch the Full Stack

Now that the users and roles are created, start all services:

docker compose up -d

Access Kibana at:

http://localhost:5601

If you access the lab from another machine, use the IP address of your Docker host:

http://docker_host_ip:5601

Use credentials:

  • Username: myadmin
  • Password: mypassw0rd!
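
If the login page does not load, check that all four containers are up and that Kibana reports itself as available (the status endpoint is a standard Kibana API):

docker compose ps
curl -u myadmin:mypassw0rd! http://localhost:5601/api/status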

Notes

  • Why not use the elastic user? Since v8, the elastic user is intended for initial setup and API use only, not for logging into Kibana.
  • Why grant access to restricted indices? Kibana uses internal system indices like .kibana_* and .apm_* that are restricted by default. Your user must explicitly have permissions to manage these indices.
  • Why is security mandatory? Features such as Fleet, Integrations, and Kibana dashboards require security to be enabled, meaning all services must use authenticated users.

phpIPAM in Docker with Nginx reverse-proxy

I had a bit of a problem with this setup serving phpIPAM via an Nginx reverse proxy, so I decided to share the solution that works for me; maybe it will help somebody out there.

I installed phpIPAM as Docker container following the instructions here: https://github.com/phpipam-docker/phpipam-docker.

Using it via plain HTTP was working OK, but I wanted to use HTTPS for various reasons. Security is important, but since this is a home-lab deployment, I wasn’t that concerned about somebody “sniffing” my plain HTTP traffic. The annoying part is that I use a Chromium-based browser which insists on upgrading HTTP to HTTPS, even when I type the URL as “http://ipam…”.

I installed Nginx (on a different machine) and did a basic reverse proxy configuration using some self-signed certificates. And here the problems started. I will not bore you with all the details, but the redirection was not working well: either it failed altogether, or the page appeared broken, with CSS not rendering correctly and other issues.

Here is what I had to do for a working solution.

On the Docker side (I assume you followed the phpIPAM Docker installation above or you’re familiar with the containerization solution), I had to add the following to the .env file:

 - IPAM_DATABASE_HOST=phpipam-mariadb
 - IPAM_DATABASE_PASS=my_secret_phpipam_pass
 - IPAM_DATABASE_WEBHOST=%
 - TZ=yourtimezone
 - PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
 - IPAM_TRUST_X_FORWARD=yes
 - IPAM_DISABLE_INSTALLER=1

Not every line above is relevant for solving the reverse proxy issue, but I chose to share everything I have there. The IPAM_TRUST_X_FORWARD setting is the important one for this topic.

Below is what I have in the Nginx config file:

server {
    listen 9443 ssl; # Change to whatever port you're using here
    server_name ipam.home.lab; # replace with your domain

    ssl_certificate /etc/ssl/private/ipam.home.lab.crt;
    ssl_certificate_key /etc/ssl/private/ipam.home.lab.key;

    location / {
        proxy_pass http://phpipam-host.home.lab:81; # Replace with your IP / FQDN and port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;

        # Add WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Optionally, you can add additional configurations like error pages or logging here
}
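
In case you still need to create the self-signed certificate referenced above, something like the following works; the subject CN is just my domain, so adjust the paths and the domain to yours, then test and reload Nginx:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/ipam.home.lab.key \
  -out /etc/ssl/private/ipam.home.lab.crt \
  -subj "/CN=ipam.home.lab"

nginx -t && systemctl reload nginx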

I haven’t noticed any issues yet using the setup / configuration illustrated above. Let me know if you find this information useful.

Kerberos tickets on Mac OS

I’m using a Mac at work and I found out that Kerberos sometimes needs a “kick” for SSO to work properly. Sometimes after being offline the renewal of the Kerberos ticket fails (especially when remote and connected via ZTA or VPN), even though everything looks alright in the “Ticket Viewer” app.

Here is where the CLI came in handy, so I decided to document the few steps here in case somebody else needs them. The Terminal app is your friend for the following commands.

To view the current Kerberos tickets:

klist -v

If there are no tickets, which is what I expect when I have a problem, the command returns nothing.

To request a ticket:

kinit -V -p [email protected]

The command will prompt you for your password and announce where your ticket is placed:

[email protected]'s password:
Placing tickets for '[email protected]' in cache 'API:AAAAAAAA-BBBB-CCCC-DDDD-CCCCCCCCCCCC'

Sometimes you may need to use a specific AD Domain Controller server; while the output is the same as above, the command line needs to change (below I use the FQDN, but an IP will work as well):

kinit --kdc-hostname=AD-DC-SERVER.EXAMPLE.COM -V -p [email protected]

Now you should see a ticket issued:

klist -v
Credentials cache: API:AAAAAAAA-BBBB-CCCC-DDDD-CCCCCCCCCCCC
        Principal: [email protected]
    Cache version: 0

Server: krbtgt/[email protected]
Client: [email protected]
Ticket etype: aes256-cts-hmac-sha1-96, kvno 15
Ticket length: 4992
Auth time:  Jan 14 06:42:56 2025
End time:   Jan 14 16:42:50 2025
Ticket flags: enc-pa-rep, pre-authent, initial, proxiable, forwardable
Addresses: addressless
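
When the renewal seems stuck on a stale ticket even though Ticket Viewer looks fine, what usually unblocks it for me is wiping the cache and requesting a fresh ticket (kdestroy ships with the same Heimdal tools):

kdestroy
kinit -V -p [email protected]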

I hope you’ll find this useful if in need.

LFNE GNS3 Appliances

This post will be a very short one, more like a note :)

Based on the LFNE Docker images (explained here https://ipnet.xyz/2023/11/lfne-linux-for-network-engineers) I’ve created the GNS3 Appliances for easy import into GNS3.

The GNS3 Appliances can be downloaded here https://github.com/yotis1982/lfne and imported into GNS3.

Have fun!

LFNE – Linux For Network Engineers

Formerly known as PFNE – Python For Network Engineers, the images have grown to be more than just a Python learning tool. I chose a more generic name for the new ones: Linux For Network Engineers (LFNE).

These are Linux images built with all the tools a network engineer needs to perform various tasks, ranging from simple Python scripts to automation and testing.
Below is the list of installed applications on the LFNE images. Pull one and start experimenting.

I’m using two main distributions to build these images – Ubuntu and AlmaLinux – pick your favorite flavor. I picked AlmaLinux as it is the closest distribution to the now (almost) defunct CentOS.

LFNE based on Ubuntu 22.04

Pull the image:

docker pull yotis/lfne:ubuntu-22.04

Use the image:

docker run -i -t yotis/lfne:ubuntu-22.04 /bin/bash

If used with Portainer, don’t forget to activate the Console option: Interactive & TTY.

LFNE based on AlmaLinux 9.2

Pull the image:

docker pull yotis/lfne:almalinux-9.2

Use the image:

docker run -i -t yotis/lfne:almalinux-9.2 /bin/bash

If used with Portainer, don’t forget to activate the Console option: Interactive & TTY.

Some of the installed packages:

OpenSSL
net-tools (ifconfig…)
iputils (ping, arping, traceroute…)
socat
host (DNS lookup tool)
mtr (traceroute tool)
Telnet / SSH client
iproute2
iperf (traffic generator)
tcpdump
nmap
Python 2 (only on the Ubuntu variant)
Python 3
Paramiko
Netmiko
Ansible
Pyntc
NAPALM
OpenSSH server
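
To quickly confirm that the Python automation libraries listed above are importable in a freshly pulled image, a one-liner sketch (the module names are assumed to match the packages above):

docker run --rm -t yotis/lfne:ubuntu-22.04 \
  python3 -c "import paramiko, netmiko, napalm; print('automation libs OK')"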

To use a remote SSH connection to the container (see the sketch below):

  • Enable the server with “service ssh start”.
  • Expose the desired port for the container (tcp/22 by default).
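
A minimal sketch of the whole flow, assuming the Ubuntu image; the container name, user, password, and host port are placeholders of my choosing:

docker run -d --name lfne -p 2222:22 -t yotis/lfne:ubuntu-22.04 /bin/bash
docker exec lfne bash -c "useradd -m -s /bin/bash netops && echo 'netops:labpass' | chpasswd && service ssh start"
ssh -p 2222 netops@<docker_host_ip>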