This guide walks you through installing and configuring the ELK Stack (Elasticsearch, Logstash, Kibana, and Filebeat) using Docker Compose. It is fully updated for Elasticsearch 9.0.2 and explains the changes required from version 8 onward, including the mandatory security setup and user permissions.
Prerequisites
Ensure you have the following installed on your system:
- Docker: Install Docker Engine or Docker Desktop.
- Docker Compose: Typically included with Docker Desktop; otherwise, install it separately.
The project folder structure is as follows:

```
elk-docker/
|-- .env
|-- docker-compose.yml
|-- logstash/
|   |-- pipeline/
|       |-- logstash.conf
|-- filebeat/
    |-- filebeat.yml
```
Start by creating your root folder structure:

```bash
mkdir elk-docker && cd elk-docker
mkdir -p logstash/pipeline
mkdir filebeat
```
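If you like, you can also create the (still empty) config files up front so your directory matches the tree above:

```bash
touch .env docker-compose.yml logstash/pipeline/logstash.conf filebeat/filebeat.yml
```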
Step 1: Prepare Your `.env` File
Create a `.env` file in the root of your project directory:
```
ELASTIC_USER=myadmin
ELASTIC_PASSWORD=mypassw0rd!
```
For simplicity in this lab setup, all components share the same password. Do not use this setup in production.
Step 2: `docker-compose.yml`
Create a `docker-compose.yml` file:
```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:9.0.2
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.type=single-node
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    networks:
      - elk

  kibana:
    image: docker.elastic.co/kibana/kibana:9.0.2
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=${ELASTIC_USER}
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:9.0.2
    container_name: logstash
    environment:
      - xpack.monitoring.enabled=true
      - ELASTIC_USERNAME=${ELASTIC_USER}
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - "5044:5044"
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    depends_on:
      - elasticsearch
    networks:
      - elk

  filebeat:
    image: docker.elastic.co/beats/filebeat:9.0.2
    container_name: filebeat
    user: root
    environment:
      - ELASTICSEARCH_USERNAME=${ELASTIC_USER}
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    depends_on:
      - logstash
    networks:
      - elk
    command: ["--strict.perms=false"]

volumes:
  esdata:

networks:
  elk:
```
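One host-level caveat worth knowing: on Linux hosts, Elasticsearch requires `vm.max_map_count` of at least 262144, otherwise the container exits shortly after startup. You can raise it like this:

```bash
# Raise the mmap count limit required by Elasticsearch (Linux hosts).
sudo sysctl -w vm.max_map_count=262144
```

You can also run `docker compose config` at this point to verify that the values from `.env` are interpolated correctly.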
Step 3: Logstash Pipeline
Create the Logstash pipeline file at `./logstash/pipeline/logstash.conf`:
```conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "myadmin"
    password => "mypassw0rd!"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```
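If you want to sanity-check the pipeline syntax before launching everything, one option is a throwaway container using Logstash's `--config.test_and_exit` flag (this only validates the config; it does not connect to Elasticsearch):

```bash
# Mount the pipeline file and ask Logstash to validate it, then exit.
docker run --rm \
  -v "$PWD/logstash/pipeline/logstash.conf:/tmp/logstash.conf:ro" \
  docker.elastic.co/logstash/logstash:9.0.2 \
  logstash -f /tmp/logstash.conf --config.test_and_exit
```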
Step 4: Filebeat Config
Create the Filebeat config at `./filebeat/filebeat.yml` with this content:
```yaml
filebeat.inputs:
  - type: filestream
    id: my-container-logs
    enabled: true
    paths:
      - /var/lib/docker/containers/*/*.log
    # filestream uses "parsers"; the container parser handles Docker JSON logs
    parsers:
      - container: ~

# The beats protocol has no basic auth; credentials belong in the Logstash
# pipeline's elasticsearch output, not here.
output.logstash:
  hosts: ["logstash:5044"]

setup.kibana:
  host: "kibana:5601"
  # These env vars are injected by docker-compose.yml
  username: "${ELASTICSEARCH_USERNAME}"
  password: "${ELASTICSEARCH_PASSWORD}"

# Optional for debugging
logging.level: info
logging.to_files: false
```
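Filebeat ships with a built-in config checker, so you can validate this file before launching the stack. A sketch using the Compose service definition (the `--strict.perms=false` flag matters here too, since the file is mounted from the host):

```bash
# --no-deps avoids starting Logstash just for the config check.
docker compose run --rm --no-deps filebeat test config --strict.perms=false
```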
Just a note: in this setup Filebeat outputs to Logstash (not Elasticsearch) because I generally use ELK with NetFlow, so I can do more complex log parsing.
If you choose to send Filebeat output directly to Elasticsearch, just adapt the `filebeat.yml` file; the `docker-compose.yml` is already prepared for it.
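For reference, here is a minimal sketch of what that alternative could look like in `filebeat.yml`, replacing the `output.logstash` section (it reuses the credentials that `docker-compose.yml` already injects as environment variables):

```yaml
# Sketch: ship events directly to Elasticsearch instead of Logstash.
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  username: "${ELASTICSEARCH_USERNAME}"
  password: "${ELASTICSEARCH_PASSWORD}"
```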
Step 5: Start Elasticsearch Container Before Creating Users and Roles (Temporary Step)
Before creating the custom user and roles, you must start the Elasticsearch container on its own (the other services can wait) so that its security APIs become available.
Run this command in your project directory:
```bash
docker compose up -d elasticsearch
```
If this is the first time starting the container, Docker will first download the elasticsearch image.
Wait a minute for Elasticsearch to fully start and be ready to accept API calls. You can check logs with:
```bash
docker logs -f elasticsearch
```
Only after Elasticsearch is up and running, proceed with creating roles and users in the next step.
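Besides watching the logs, you can confirm that Elasticsearch is ready by querying the cluster health API with the built-in `elastic` user and the password from your `.env` file:

```bash
# Wait until the cluster reports "yellow" or "green" status.
curl -s -u elastic:mypassw0rd! "http://localhost:9200/_cluster/health?pretty"
```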
Step 6: Create the Custom User and Roles
As of version 8, Kibana no longer allows login with the `elastic` user for Fleet and integrations. Instead, you must create a new superuser with additional privileges.
1. Create a new role that allows access to restricted indices:
```bash
curl -X POST http://localhost:9200/_security/role/kibana_system_access \
  -u elastic:mypassw0rd! \
  -H "Content-Type: application/json" \
  -d '{
    "cluster": ["all"],
    "indices": [
      {
        "names": [".kibana*", ".apm*"],
        "privileges": ["all"],
        "allow_restricted_indices": true
      }
    ]
  }'
```
2. Create a new superuser `myadmin`:
```bash
curl -X POST http://localhost:9200/_security/user/myadmin \
  -u elastic:mypassw0rd! \
  -H "Content-Type: application/json" \
  -d '{
    "password": "mypassw0rd!",
    "roles": ["superuser", "kibana_system_access"],
    "full_name": "Lab Admin"
  }'
```
Each command usually returns a response containing `{"created":true}` among other details, indicating that the role and user were created successfully.
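To double-check that the new user works, you can authenticate with it directly against the `_authenticate` endpoint:

```bash
# Should return the myadmin user along with its assigned roles.
curl -s -u myadmin:mypassw0rd! "http://localhost:9200/_security/_authenticate?pretty"
```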
Step 7: Launch the Full Stack
Now that the users and roles are created, start all services:
```bash
docker compose up -d
```
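Once the stack is up, you can check that all four containers are running and that log data is starting to flow into the daily index:

```bash
docker compose ps

# After a minute or two, a logstash-YYYY.MM.dd index should appear.
curl -s -u myadmin:mypassw0rd! "http://localhost:9200/_cat/indices/logstash-*?v"
```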
Access Kibana at:
http://localhost:5601
If you run the lab on a remote Docker host, use its IP address instead:
http://docker_host_ip:5601
Use these credentials:
- Username: `myadmin`
- Password: `mypassw0rd!`
Notes
- Why not use the `elastic` user? Since v8, the `elastic` user is intended for initial setup and API use only, not for logging into Kibana.
- Why grant access to restricted indices? Kibana uses internal system indices like `.kibana*` and `.apm*` that are restricted by default. Your user must explicitly have permissions to manage these indices.
- Why is security mandatory? Features such as Fleet, Integrations, and Kibana dashboards require security to be enabled, meaning all services must use authenticated users.
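When you are finished with the lab, you can tear the stack down. Note that the `-v` variant also removes the `esdata` volume, deleting all indexed data:

```bash
docker compose down      # stop and remove the containers
docker compose down -v   # additionally remove the esdata volume (wipes all data)
```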