Building Elastic SIEM with Docker

7 min read · Jul 4, 2025

A quick and easy way to set up the Elastic Stack in minutes

Looking to spin up the Elastic Stack quickly for a demo, PoC, or just to explore its capabilities as a tech enthusiast?

This blog walks you through a fast and easy way to get started.

Prerequisites:

To get started, you’ll need the following:

✅ A Linux machine (tested on Ubuntu 22.04.3 LTS, but any recent Ubuntu version should work)
🐳 Docker and Docker Compose installed and ready to use
📄 A docker-compose.yml and an .env file (both provided in this blog)
⚙️ Ensure the kernel setting vm.max_map_count is set to at least 262144:

sudo sysctl -w vm.max_map_count=262144
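Note that sysctl -w does not survive a reboot. To persist the setting, you can drop a one-line config fragment under /etc/sysctl.d (assuming a systemd-based distro such as Ubuntu; the filename is an arbitrary choice):

```shell
# Persist vm.max_map_count across reboots via /etc/sysctl.d
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system   # reload all sysctl config files now, without rebooting
```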

Configuration Steps

  • Creating Elasticsearch directories & files
cd /
mkdir elasticsearch
cd elasticsearch
mkdir {esdata01,config}
cd config
mkdir certs
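Equivalently, the whole tree can be created in one command with mkdir -p (the /elasticsearch root matches the volume paths used later in this blog's compose file; adjust if you relocate it):

```shell
# Create the data, config, and certs directories in one go
ES_HOME="${ES_HOME:-/elasticsearch}"
mkdir -p "$ES_HOME/esdata01" "$ES_HOME/config/certs"
```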

Save the following as elasticsearch.yml under /elasticsearch/config:

network.host: 0.0.0.0
  • Creating Kibana directories & files
cd /
mkdir kibana
cd kibana
mkdir {kibanadata,config}

Save the following as kibana.yml under /kibana/config (replace the encryptionKey value with your own random string of at least 32 characters):

xpack.encryptedSavedObjects.encryptionKey: "596486711ea055ec4464ac7aecd756de9e6c"
server.host: "0.0.0.0"
telemetry.enabled: "true"
xpack.fleet.packages:
  - name: fleet_server
    version: latest
  - name: system
    version: latest
xpack.fleet.agentPolicies:
  - name: Fleet-Server-Policy
    id: fleet-server-policy
    namespace: default
    package_policies:
      - name: fleet_server-1
        package:
          name: fleet_server

Make sure the elasticsearch and kibana directories (including their subfolders) have the correct ownership for Docker to function properly:

cd /
chown -R 1000:0 elasticsearch/
chown -R 1000:0 kibana/

This ensures the containers (which typically run as UID 1000) can read/write to the mounted volumes without permission issues.

The elasticsearch and kibana directories are created based on the volume mappings defined in the provided docker-compose.yml file.

Feel free to adjust the directory structure or paths as needed to fit your environment.

  • Docker Compose file (docker-compose.yml):
version: "3.8"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: ecp-elasticsearch-security-setup
    volumes:
      - /elasticsearch/config/certs:/usr/share/elasticsearch/config/certs:z
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: elasticsearch\n"\
          "    dns:\n"\
          "      - ecp-elasticsearch\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "      - 192.168.55.12\n"\
          "  - name: kibana\n"\
          "    dns:\n"\
          "      - ecp-kibana\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "      - 192.168.55.12\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
          cat config/certs/elasticsearch/elasticsearch.crt config/certs/ca/ca.crt > config/certs/elasticsearch/elasticsearch.chain.pem
        fi;
        echo "Setting file permissions";
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://ecp-elasticsearch:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://ecp-elasticsearch:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/elasticsearch/elasticsearch.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  elasticsearch:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: ecp-elasticsearch
    restart: always
    volumes:
      - /elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /elasticsearch/config/certs:/usr/share/elasticsearch/config/certs
      - /elasticsearch/esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=ecp-elasticsearch
      - cluster.name=${CLUSTER_NAME}
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - remote_cluster_server.enabled
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/elasticsearch/elasticsearch.key
      - xpack.security.http.ssl.certificate=certs/elasticsearch/elasticsearch.chain.pem
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.http.ssl.client_authentication=optional
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/elasticsearch/elasticsearch.key
      - xpack.security.transport.ssl.certificate=certs/elasticsearch/elasticsearch.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.client_authentication=optional
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      elasticsearch:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    container_name: ecp-kibana
    restart: always
    volumes:
      - /kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:Z
      - /elasticsearch/config/certs:/usr/share/kibana/config/certs:z
      - /kibana/kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVER_NAME=ecp-kibana
      - ELASTICSEARCH_HOSTS=${ELASTIC_HOSTS}
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - SERVER_SSL_ENABLED=true
      - SERVER_SSL_CERTIFICATE=config/certs/kibana/kibana.crt
      - SERVER_SSL_KEY=config/certs/kibana/kibana.key
      - SERVER_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${KB_MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -I -s --cacert config/certs/ca/ca.crt https://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
ATTENTION!
# Volume mappings: make sure the host-side path (left of the colon) matches your directory structure; never touch the container-side path to the right of the colon (just YAML things).
# instances.yml section: update both 192.168.55.12 entries to your host IP.
  • Environment file (.env):
ELASTIC_PASSWORD=mediumblogpassword
KIBANA_PASSWORD=mediumblogpassword
STACK_VERSION=8.18.2
CLUSTER_NAME=elastic-cyberlab
LICENSE=basic
ES_PORT=9200
KIBANA_PORT=5601
FLEET_PORT=8220
MEM_LIMIT=16073741824
KB_MEM_LIMIT=8073741824
ELASTIC_HOSTS=https://192.168.55.12:9200
ATTENTION!
# ELASTIC_PASSWORD, KIBANA_PASSWORD: change both passwords.
# MEM_LIMIT, KB_MEM_LIMIT: roughly 16 GB of RAM for Elasticsearch and 8 GB for Kibana (my OS runs on 48 GB, so about half goes to the Elastic Stack); size these to your machine.
# ELASTIC_HOSTS: change to your host IP address.
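A missing variable here means the compose file interpolates an empty string and the stack fails in confusing ways, so it can be worth a quick sanity check before starting. A minimal sketch (check_env is a hypothetical helper, not an Elastic tool):

```shell
# Verify that every variable docker-compose.yml interpolates exists in the env file
check_env() {
  for var in ELASTIC_PASSWORD KIBANA_PASSWORD STACK_VERSION CLUSTER_NAME \
             LICENSE ES_PORT KIBANA_PORT MEM_LIMIT KB_MEM_LIMIT ELASTIC_HOSTS; do
    grep -q "^${var}=" "$1" || { echo "missing ${var} in $1"; return 1; }
  done
  echo "env file OK"
}

# Usage: check_env .env && docker compose up -d
```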

Install the stack:

Save the docker-compose.yml and .env file in the same directory, then run the following command to launch Elasticsearch and Kibana:

docker compose up -d

Kibana should now be accessible over HTTPS on port 5601 (https://192.168.55.12:5601).
Log in using the username elastic and the password you defined earlier in the .env file.
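Elasticsearch and Kibana can take a minute or two to pass their health checks, so a small polling helper is handy before opening the browser. A sketch (the URL uses the IP and port from this blog; adjust to your host, and note -k skips certificate verification because the stack uses a self-signed cert):

```shell
# wait_for <url> <max_tries>: poll an endpoint until it answers, 5s between tries
wait_for() {
  i=0
  until curl -ks -o /dev/null "$1"; do
    i=$((i + 1))
    [ "$i" -ge "$2" ] && return 1
    sleep 5
  done
}

# Usage: wait_for https://192.168.55.12:5601 60 && echo "Kibana is up"
```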

Platform Tuning

  1. Go to Management → Fleet → Settings → Outputs
  2. Click Edit on the default output configuration
  3. Change the “Hosts” value to the ELASTIC_HOSTS value from your .env file
  4. Retrieve the Elasticsearch CA fingerprint either from your browser’s certificate viewer (Firefox, for example) or directly via the CLI

Update the fingerprint value under “Elasticsearch CA trusted fingerprint (optional)” and save the output configuration.
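If you prefer the CLI, the fingerprint can be computed straight from the CA file on the host with openssl. A sketch (the path assumes the directory layout used throughout this blog):

```shell
# SHA-256 fingerprint of the CA cert with colons stripped, which is the
# format Fleet's "Elasticsearch CA trusted fingerprint" field expects
CA_CERT="${CA_CERT:-/elasticsearch/config/certs/ca/ca.crt}"
if [ -f "$CA_CERT" ]; then
  openssl x509 -in "$CA_CERT" -noout -fingerprint -sha256 | cut -d= -f2 | tr -d ':'
fi
```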

Now let’s onboard the Fleet Server to enable data ingestion:

  1. In Kibana, navigate to Agents → Add Fleet Server
  2. When the window opens, click on Advanced
  3. Ensure the default policy is selected — typically "Fleet-Server-Policy"
  4. Under “Add your Fleet Server host”, enter the host details

Since we’re running in standalone mode, use the same IP address as your current machine

5. Click “Add host”

6. Then, click “Generate Service Token”

At this point, Kibana will display the agent download and install commands, similar to the snippet below:

Copy the last command and append an --insecure flag to the end. We are using --insecure because the stack uses a self-signed certificate.

curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.18.2-linux-x86_64.tar.gz
tar xzvf elastic-agent-8.18.2-linux-x86_64.tar.gz
cd elastic-agent-8.18.2-linux-x86_64
sudo ./elastic-agent install \
  --fleet-server-es=https://192.168.55.12:9200 \
  --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE3NTE2MTQ5NDIyMzI6SlhFT1NuUldUOHlodEdKUFJFUkFRUQ \
  --fleet-server-policy=fleet-server-policy \
  --fleet-server-es-ca-trusted-fingerprint=001F509AE949785B99A5586CB616E1626C488E71D931271293771FFCDF08F32F \
  --fleet-server-port=8220 \
  --insecure

Copy and paste the generated command into your terminal using sudo, and follow the on-screen instructions. Once completed, the Fleet Server will be successfully installed and visible in Kibana under Management → Fleet → Agents.

For the sake of simplicity, navigate to Agent Policies → Create agent policy, and create two separate policies:

  • One for Windows machines (e.g., Lab-Windows)
  • One for Linux machines (e.g., Lab-Linux)

These policies will be used when enrolling agents on their respective operating systems.

When installing an agent, make sure to append the --insecure flag to the end of the enrollment command.

You can now add the required integrations to each policy from the Management → Integrations panel.

To enable EDR capabilities, add the Elastic Defend integration to the desired policy (e.g., Lab-Windows). This will ensure all agents enrolled under that policy receive Elastic’s endpoint protection features.

🔔 Note: If your instance doesn’t have internet connectivity to reach the Elastic package registry, you won’t see any integrations listed within the UI. While it’s possible to configure everything for air-gapped environments, that’s beyond the scope of this blog.

Happy integrating your data sources with Elastic!
Whether it’s for observability, security, or analytics — you’re now all set to explore the power of the Elastic Stack.

Thanks for following along. I welcome your discussions and feedback on this topic. Feel free to reach me via email at “kaviarasan one one nine five at gmail dot com” or connect on LinkedIn.



Written by Kaviarasan Asokan

Curious explorer of tech and life's mysteries. On Medium, I share insights, musings, and creative journeys.
