
Deploying Wazuh With High Availability for Production

Date Added: January 24, 2023

Link Reference: https://documentation.wazuh.com/current/getting-started/index.html

Notes: Based on the official Wazuh documentation

This document provides instructions for deploying Wazuh with high availability in a production environment. It outlines the prerequisites, including required ports and minimum resource requirements for components such as the Wazuh manager, indexer, and dashboard, then gives step-by-step instructions for setting up the bastion host and configuring firewall rules, SELinux, DNS, and the load-balancing services. The goal is a clustered, highly available Wazuh installation that can scale to support security monitoring for many systems and devices.

A. Pre-requisites
1. Ports

Component        Port       Protocol        Description
Wazuh Manager    1514       TCP             Agent connection service
Wazuh Manager    1514       UDP             Agent connection service (disabled by default)
Wazuh Manager    1515       TCP             Agent enrollment service
Wazuh Manager    1516       TCP             Wazuh cluster daemon
Wazuh Manager    514        UDP (default)   Wazuh syslog collector (disabled by default)
Wazuh Manager    514        TCP (optional)  Wazuh syslog collector (disabled by default)
Wazuh Manager    55000      TCP             Wazuh server RESTful API
Wazuh Indexer    9200       TCP             Wazuh indexer RESTful API
Wazuh Indexer    9300-9400  TCP             Wazuh indexer cluster communication
Wazuh Dashboard  443        TCP             Wazuh web user interface
Bastion          53         UDP             DNS server
Bastion          1514       TCP             Load balancer for agent connections (to workers)
Bastion          1515       TCP             Load balancer for agent enrollment (to master)

2. Minimum VM Cluster Resources

Component             Count  RAM   CPU  Disk (90 days)    Alerts Per Second  Description
Bastion               1      8 GB  4    25 GB             -                  Acts as load balancer, DNS server, and DHCPD server
Wazuh Manager Master  1      2 GB  2    0.2 GB per agent  0.5                Acts as master; processes threats and handles integrations with other tools
Wazuh Manager Worker  2      2 GB  2    0.2 GB per agent  0.5                Acts as worker; processes threats and handles integrations with other tools
Wazuh Indexer         2      4 GB  2    7.4 GB per agent  0.5                Indexes and stores alerts generated by the Wazuh server
Wazuh Dashboard       1      4 GB  2    25 GB             -                  Visualization graphs, log details, and UI-based management for Wazuh

💡 OS Information: AlmaLinux 8, which is 1:1 binary compatible with Red Hat Enterprise Linux. Where the Disk column says
x.x GB per agent, estimate x.x GB of storage for every agent connected to the Wazuh cluster. For example, if you have
10 devices connected to Wazuh, calculate with the following rule: total * x.x = 10 * 0.2 = 2 GB of storage
for 90 days

💡 Best practice for scalability: this guide recommends using LVM (Logical Volume Manager) to
manage and grow the disk if it fills up or if you have not set log retention. It also recommends separating the Wazuh logs
onto another partition for better scalability; for example, you can move /var/ossec/logs to a new mount point on a
dedicated LVM partition, as sketched below
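
For example, a minimal sketch of moving /var/ossec/logs onto its own LVM volume, assuming a spare disk at /dev/sdb and the hypothetical names vg_wazuh / lv_ossec_logs (adjust the device, size, and names to your environment):

pvcreate /dev/sdb
vgcreate vg_wazuh /dev/sdb
lvcreate -n lv_ossec_logs -L 20G vg_wazuh
mkfs.xfs /dev/vg_wazuh/lv_ossec_logs

systemctl stop wazuh-manager
mv /var/ossec/logs /var/ossec/logs.bak
mkdir /var/ossec/logs
echo '/dev/vg_wazuh/lv_ossec_logs /var/ossec/logs xfs defaults 0 0' >> /etc/fstab
mount /var/ossec/logs
rsync -a /var/ossec/logs.bak/ /var/ossec/logs/
chown -R wazuh:wazuh /var/ossec/logs   # Wazuh 4.x runs as the wazuh user; verify the owner on your system
systemctl start wazuh-manager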

💡 These requirements are the minimum for a clustered/multi-node Wazuh deployment in production use. To reach the
best performance, you have to define a specific configuration and tune each VM manually

3. Wazuh Indexer Details


wazuh-alerts

wazuh-archives

wazuh-monitoring

wazuh-statistics

4. Wazuh Manager Details


Agent enrollment service

Agent connection service

Analysis engine

Wazuh RESTful API

Wazuh cluster daemon

Filebeat

5. Wazuh Dashboard Details


Data visualization and analysis

Agents monitoring and configuration

Platform management

Developer tools

6. Wazuh Agent Details


Log collector

Command execution

File integrity monitoring (FIM)

Security configuration assessment (SCA)

System inventory

Malware detection

Active response

Container security monitoring

Cloud security monitoring

7. Simple Notes

💡 In production, this guide does not recommend running systemctl restart <service_name> or service <service_name> restart
on critical services, because a full restart causes downtime and hurts overall performance

Example:
1. HAProxy —> haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
2. Bind9/Named —> rndc reload
3. Network and hostname —> nmcli con reload
4. Nginx —> nginx -s reload
5. Httpd/Apache2 —> apachectl -k graceful , httpd -k graceful , apache2ctl -k graceful

If you don't know how to gracefully restart a critical component of your production (server farm), please only use systemctl reload
<service_name> or service <service_name> reload ; reloading is better than restarting a service

💡 If your production environment has restricted network access that requires tunneling and a proxy whitelist to reach the open
network, you have to set http_proxy= , https_proxy= , and no_proxy= in /etc/environment. If your server does not have
nmap, telnet, or wget, you can alternatively use tracepath (instead of nmap), curl -kv telnet://<host>:<port> (instead of telnet/net-
tools), and curl -LO (instead of wget). Tracepath and curl will help you troubleshoot errors and issues
based on the logs and responses returned.
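
For example, a hypothetical /etc/environment (the proxy host and port are placeholders; adjust to your network):

http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=localhost,127.0.0.1,.fqdn

And the tracepath/curl alternatives in action:

tracepath packages.wazuh.com
curl -kv telnet://10.19.2.103:9200
curl -LO https://packages.wazuh.com/4.3/config.yml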

B. Step By Step
1. Setup Bastion Host
yum install chrony bind bind-utils net-tools haproxy dhcp* rsync bash-completion

systemctl enable --now chronyd named haproxy dhcpd firewalld

hostnamectl set-hostname bastion.fqdn

firewall-cmd --add-port=53/udp --add-port=1515/tcp --add-port=1514/tcp --permanent

firewall-cmd --reload

setenforce 0

timedatectl set-ntp true

vi /etc/selinux/config

# This file controls the state of SELinux on the system.


# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

vi /etc/named.conf

options {
listen-on port 53 { any; };
listen-on-v6 port 53 { any; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
secroots-file "/var/named/data/named.secroots";
recursing-file "/var/named/data/named.recursing";
allow-query { any; };

forward first;
forwarders { 8.8.8.8; 1.1.1.1; };
/*
- If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
- If you are building a RECURSIVE (caching) DNS server, you need to enable
recursion.
- If your recursive DNS server has a public IP address, you MUST enable access
control to limit queries to your legitimate users. Failing to do so will
cause your server to become part of large scale DNS amplification
attacks. Implementing BCP38 within your network would greatly
reduce such attack surface
*/
allow-transfer { any; };
recursion yes;

dnssec-enable no;
dnssec-validation no;

managed-keys-directory "/var/named/dynamic";

pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";

/* https://fedoraproject.org/wiki/Changes/CryptoPolicy */
include "/etc/crypto-policies/back-ends/bind.config";
};

logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};

zone "." IN {
type hint;
file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

zone "fqdn" IN {
type master;
file "forward.dns";
allow-update { none; };
};

zone "2.19.10.in-addr.arpa" IN {
type master;
file "reverse.dns";
allow-update { none; };
};

vi /var/named/forward.dns

$TTL 86400
@ IN SOA fqdn. root.fqdn. (
        2019100301 ;Serial
        3600 ;Refresh
        1800 ;Retry
        604800 ;Expire
        86400 ;Minimum TTL
)

; ### Root domain for FQDN ###


@ IN NS fqdn.
@ IN A 10.19.2.98
@ IN MX 10 fqdn.
bastion.fqdn. IN CNAME fqdn.

; ### API master and worker Wazuh Manager ###


dashboard IN A 10.19.2.99
master IN A 10.19.2.100
worker0 IN A 10.19.2.101
worker1 IN A 10.19.2.102
indexer0 IN A 10.19.2.103
indexer1 IN A 10.19.2.104

vi /var/named/reverse.dns

$TTL 86400
@ IN SOA fqdn. root.fqdn. (
2019100301 ;Serial
3600 ;Refresh
1800 ;Retry
604800 ;Expire
86400 ;Minimum TTL
)
; ### Root domain for FQDN ###
@ IN NS fqdn.
98 IN PTR fqdn.
98 IN PTR bastion.fqdn.

; ### Wazuh Cluster Nodes ###


99 IN PTR dashboard.fqdn.
100 IN PTR master.fqdn.
101 IN PTR worker0.fqdn.
102 IN PTR worker1.fqdn.
103 IN PTR indexer0.fqdn.
104 IN PTR indexer1.fqdn.

rndc reload
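
Optionally, validate the configuration and zone files and test a lookup against the local resolver:

named-checkconf /etc/named.conf
named-checkzone fqdn /var/named/forward.dns
named-checkzone 2.19.10.in-addr.arpa /var/named/reverse.dns
dig +short master.fqdn @127.0.0.1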

vi /etc/haproxy/haproxy.cfg

global
log 127.0.0.1 local2
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon

defaults
mode http
log global
# option httplog
option tcplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000

listen api-master-1515
bind *:1515
mode tcp
balance source
server master master.fqdn:1515 check inter 1s

listen api-worker-1514
bind *:1514
mode tcp
balance source
server worker0 worker0.fqdn:1514 check inter 1s
server worker1 worker1.fqdn:1514 check inter 1s

💡 This HAProxy configuration uses the source balancing algorithm; you can change it to roundrobin. We load balance at
OSI layer 4 (TCP). If you want to load balance at OSI layer 7 (HTTP) instead, just set mode http, but that is not the best
practice here. Also ensure that in ossec.conf or agent.conf (centralized configuration) use_source_ip is disabled for the authd
registration process. You can validate the configuration before reloading, as shown below
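
A quick syntax check of the configuration file before the graceful reload:

haproxy -c -f /etc/haproxy/haproxy.cfg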

haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

vi /etc/dhcp/dhcpd.conf

subnet 10.19.2.96 netmask 255.255.255.224 {


default-lease-time 94680000;

max-lease-time 157680000;
option subnet-mask 255.255.255.224;
option broadcast-address 10.19.2.127;
option routers 10.19.2.97;
option domain-name-servers 10.19.2.98;
option domain-search "fqdn";

host bastion {
option host-name "bastion.fqdn";
hardware ethernet 52:54:00:0a:80:7d;
fixed-address 10.19.2.98;
}
host dashboard {
option host-name "dashboard.fqdn";
hardware ethernet 52:54:00:5a:97:c3;
fixed-address 10.19.2.99;
}
host master {
option host-name "master.fqdn";
hardware ethernet 52:54:00:6f:6b:b3;
fixed-address 10.19.2.100;
}
host worker0 {
option host-name "worker0.fqdn";
hardware ethernet 52:54:00:bd:b1:fd;
fixed-address 10.19.2.101;
}
host worker1 {
option host-name "worker1.fqdn";
hardware ethernet 52:54:00:bf:4d:e2;
fixed-address 10.19.2.102;
}
host indexer0 {
option host-name "indexer0.fqdn";
hardware ethernet 52:54:00:11:46:ff;
fixed-address 10.19.2.103;
}
host indexer1 {
option host-name "indexer1.fqdn";
hardware ethernet 52:54:00:10:cf:24;
fixed-address 10.19.2.104;
}
}

💡 Please replace the example configuration with your own IP addresses, MAC addresses, subnet, and gateway. You can test the configuration before applying it, as shown below
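
A syntax check of the DHCP configuration:

dhcpd -t -cf /etc/dhcp/dhcpd.conf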

systemctl restart dhcpd

💡 dhcpd does not support systemctl reload; a brief restart is required to apply DHCP configuration changes

curl -LO https://packages.wazuh.com/4.3/wazuh-certs-tool.sh

curl -LO https://packages.wazuh.com/4.3/config.yml

vi config.yml

nodes:
# Wazuh indexer nodes
indexer:
- name: indexer0
ip: indexer-node-ip
- name: indexer1
ip: indexer-node-ip

# Wazuh server nodes


# If there is more than one Wazuh server
# node, each one must have a node_type
server:
- name: master
ip: wazuh-manager-ip
node_type: master
- name: worker0
ip: wazuh-manager-ip
node_type: worker
- name: worker1
ip: wazuh-manager-ip
node_type: worker

# Wazuh dashboard nodes


dashboard:
- name: dashboard
ip: dashboard-node-ip

for ip in all_ip_in_wazuh_cluster ; do ssh root@$ip -C "yum -y install chrony bind-utils net-tools rsync bash-completion" ; done

bash ./wazuh-certs-tool.sh -A

💡 The config.yml above is consumed by wazuh-certs-tool.sh to create the root CA and the certificates for every node; the output is placed in the wazuh-certificates directory

for ip in all_ip_in_wazuh_cluster ; do ssh root@$ip -C "mkdir -p /root/wazuh-certificates" ; done

for ip in all_ip_in_wazuh_cluster ; do rsync -avhP wazuh-certificates/ root@$ip:/root/wazuh-certificates ; done

💡 This loop automates copying the certificates from this server to the other servers. You can use scp instead of
rsync, but this guide uses rsync because rsync supports partial transfers, so you can resume an upload or download
if something interrupts it mid-process. If you want to use scp, the sample command is: scp -r
folder_name user@ip:target_destination

The example of usage is:


for ip in 10.19.2.{99..104} ; do ssh root@$ip -C "mkdir -p /root/wazuh-certificates" ; done
for ip in 10.19.2.{99..104} ; do rsync -avhP wazuh-certificates root@$ip:/root ; done
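
These loops assume passwordless root SSH from the bastion to every node. If that is not set up yet, a minimal sketch (key type and path are just examples):

ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
for ip in 10.19.2.{99..104} ; do ssh-copy-id root@$ip ; done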

2. Setup Indexer Node


yum install coreutils

systemctl enable --now chronyd firewalld

firewall-cmd --add-port=9200/tcp --add-port=9300-9400/tcp --permanent

firewall-cmd --reload

setenforce 0

timedatectl set-ntp true

vi /etc/selinux/config

# This file controls the state of SELinux on the system.


# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH

echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo

yum -y install wazuh-indexer

vi /etc/wazuh-indexer/opensearch.yml

network.host: "0.0.0.0"
node.name: "indexer0"
cluster.initial_master_nodes:
- "indexer0"
- "indexer1"
cluster.name: "wazuh-cluster"
discovery.seed_hosts:
- "10.19.2.103"
- "10.19.2.104"
node.max_local_storage_nodes: "3"
path.data: /var/lib/wazuh-indexer
path.logs: /var/log/wazuh-indexer

plugins.security.ssl.http.pemcert_filepath: /etc/wazuh-indexer/certs/indexer.pem
plugins.security.ssl.http.pemkey_filepath: /etc/wazuh-indexer/certs/indexer-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.transport.pemcert_filepath: /etc/wazuh-indexer/certs/indexer.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/wazuh-indexer/certs/indexer-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/wazuh-indexer/certs/root-ca.pem
plugins.security.ssl.http.enabled: true
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.transport.resolve_hostname: false

plugins.security.authcz.admin_dn:
- "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.nodes_dn:
- "CN=indexer0,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=indexer1,OU=Wazuh,O=Wazuh,L=California,C=US"
plugins.security.restapi.roles_enabled:
- "all_access"
- "security_rest_api_access"

plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*"]

### Option to allow Filebeat-oss 7.10.2 to work ###


compatibility.override_main_response_version: true

💡 Please replace all of the configuration above with your cluster details, and do not forget to set
node.name: "indexer0" to the hostname of the VM you are currently configuring

mkdir -p /etc/wazuh-indexer/certs

export NODE_NAME=indexer-node-name

💡 Replace indexer-node-name with your current indexer VM hostname

cp -a /root/wazuh-certificates/$NODE_NAME.pem /root/wazuh-certificates/$NODE_NAME-key.pem /root/wazuh-certificates/admin.pem /root/wazuh-certificates/admin-key.pem /root/wazuh-certificates/root-ca.pem /etc/wazuh-indexer/certs/

mv -n /etc/wazuh-indexer/certs/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem

mv -n /etc/wazuh-indexer/certs/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem

chmod 500 /etc/wazuh-indexer/certs && chmod 400 /etc/wazuh-indexer/certs/*

chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs

💡 Repeat all the commands above on the other indexer VM before running the first initialization

for ip in all_indexer_ip ; do ssh root@$ip -C systemctl daemon-reload ; done

for ip in all_indexer_ip ; do ssh root@$ip -C systemctl enable --now wazuh-indexer ; done

💡 We use a for loop to enable and start the wazuh-indexer service across the indexer cluster

Example:
for ip in 10.19.2.10{3..4} ; do ssh root@$ip -C systemctl daemon-reload ; done
for ip in 10.19.2.10{3..4} ; do ssh root@$ip -C systemctl enable --now wazuh-indexer ; done

/usr/share/wazuh-indexer/bin/indexer-security-init.sh

💡 Run /usr/share/wazuh-indexer/bin/indexer-security-init.sh on only one of the indexer VMs; this guide recommends
running the initialization script on the first indexer VM

curl -k -u admin:admin https://wazuh_indexer_ip:9200

💡 The example output of curl:


{
"name" : "indexer0",
"cluster_name" : "wazuh-cluster",
"cluster_uuid" : "KOtBF3XoTGiXr-vOACWFBw",
"version" : {
"number" : "7.10.2",
"build_type" : "rpm",
"build_hash" : "e505b10357c03ae8d26d675172402f2f2144ef0f",
"build_date" : "2022-01-14T03:38:06.881862Z",
"build_snapshot" : false,
"lucene_version" : "8.10.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "The OpenSearch Project: https://opensearch.org/"
}
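
To confirm that both indexer nodes joined the cluster, you can also query the _cat API (admin:admin is the default credential until the passwords are changed later in this guide):

curl -k -u admin:admin "https://wazuh_indexer_ip:9200/_cat/nodes?v"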

3. Setup Manager Node


3.1. Setup all Manager VM
systemctl enable --now chronyd firewalld

firewall-cmd --add-port=55000/tcp --add-port=1516/tcp --add-port=1515/tcp --add-port=1514/tcp --add-port=514/tcp --permanent

firewall-cmd --reload

setenforce 0

timedatectl set-ntp true

vi /etc/selinux/config

# This file controls the state of SELinux on the system.


# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH

echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo

yum -y install wazuh-manager filebeat

for ip in all_manager_ip ; do ssh root@$ip -C systemctl daemon-reload ; done

for ip in all_manager_ip ; do ssh root@$ip -C systemctl enable --now wazuh-manager ; done

💡 We use a for loop to enable and start the wazuh-manager service across the manager cluster

Example:
for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl daemon-reload ; done
for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl enable --now wazuh-manager ; done

curl -so /etc/filebeat/filebeat.yml https://packages.wazuh.com/4.3/tpl/wazuh/filebeat/filebeat.yml

vi /etc/filebeat/filebeat.yml

# Edit this line like this
hosts: ["ip_indexer:9200"]

💡 If you have a clustered Wazuh indexer, you can add more indexer VMs like this:

hosts: ["10.19.2.103:9200","10.19.2.104:9200"]

filebeat keystore create

echo admin | filebeat keystore add username --stdin --force

echo admin | filebeat keystore add password --stdin --force
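
You can verify that both entries landed in the keystore:

filebeat keystore list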

curl -so /etc/filebeat/wazuh-template.json https://raw.githubusercontent.com/wazuh/wazuh/4.3/extensions/elasticsearch/7.x/wazuh-template.json

chmod go+r /etc/filebeat/wazuh-template.json

curl -s https://packages.wazuh.com/4.x/filebeat/wazuh-filebeat-0.2.tar.gz | tar -xvz -C /usr/share/filebeat/module

export NODE_NAME=manager-node-name

💡 Replace manager-node-name with your current manager VM hostname

mkdir /etc/filebeat/certs

cp -a /root/wazuh-certificates/$NODE_NAME.pem /root/wazuh-certificates/$NODE_NAME-key.pem /root/wazuh-certificates/root-ca.pem /etc/filebeat/certs/

mv -n /etc/filebeat/certs/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem

mv -n /etc/filebeat/certs/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem

chmod 500 /etc/filebeat/certs

chmod 400 /etc/filebeat/certs/*

chown -R root:root /etc/filebeat/certs

💡 Repeat all the commands above on the other manager VMs before running the first initialization

for ip in all_manager_ip ; do ssh root@$ip -C systemctl daemon-reload ; done

for ip in all_manager_ip ; do ssh root@$ip -C systemctl enable --now filebeat ; done

💡 We use a for loop to enable and start the filebeat service across the manager cluster

Example:
for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl daemon-reload ; done
for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl enable --now filebeat ; done

filebeat test output

3.2. Configuring Master VM


openssl rand -hex 16 > cluster-key.id

vi /var/ossec/etc/ossec.conf

<cluster>
<name>wazuh</name>
<node_name>master</node_name>
<node_type>master</node_type>
<key>c98b62a9b6169ac5f67dae55ae4a9088</key>
<port>1516</port>
<bind_addr>0.0.0.0</bind_addr>
<nodes>
<node>wazuh-master-address</node>
</nodes>
<hidden>no</hidden>
<disabled>no</disabled>
</cluster>

💡 Find <cluster> in /var/ossec/etc/ossec.conf and edit it with the details of your cluster.
Please fill in <key></key> with the hex string stored in cluster-key.id . Please also replace <node_name> with your master
hostname

💡 As mentioned earlier in this tutorial, please ensure use_source_ip in the <auth> section is disabled like this:
<use_source_ip>no</use_source_ip>. This is to prevent any issues in high availability mode; a minimal sketch follows below
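
A minimal sketch of the relevant <auth> block (only the setting discussed here is shown; keep your other <auth> options as they are):

<auth>
  <use_source_ip>no</use_source_ip>
</auth>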

systemctl restart wazuh-manager

3.3. Configuring Worker VM


💡 Do not generate a new cluster key on the worker: every node must use the same <key> as the master node. Copy the cluster-key.id generated on the master instead, for example (assuming it was written to /root on the master):

rsync -avhP root@master.fqdn:/root/cluster-key.id /root/

vi /var/ossec/etc/ossec.conf

<cluster>
<name>wazuh</name>
<node_name>worker0</node_name>
<node_type>worker</node_type>
<key>c98b62a9b6169ac5f67dae55ae4a9088</key>
<port>1516</port>
<bind_addr>0.0.0.0</bind_addr>
<nodes>
<node>wazuh-master-address</node>
</nodes>
<hidden>no</hidden>
<disabled>no</disabled>
</cluster>

💡 Find <cluster> in /var/ossec/etc/ossec.conf and edit it with the details of your cluster.
Please fill in <key></key> with the same hex string used on the master (cluster-key.id). Please also replace <node_name> with your worker
hostname

💡 As mentioned earlier in this tutorial, please ensure use_source_ip in the <auth> section is disabled like this:
<use_source_ip>no</use_source_ip>. This is to prevent any issues in high availability mode

systemctl restart wazuh-manager

/var/ossec/bin/cluster_control -l

💡 If the deployment of the Wazuh manager nodes was successful, the output will look like the following example:

NAME TYPE VERSION ADDRESS


master master 4.3.10 10.19.2.100
worker1 worker 4.3.10 10.19.2.102
worker0 worker 4.3.10 10.19.2.101
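
Once the cluster is healthy, agents should point at the bastion load balancer instead of an individual manager. A sketch using the standard Wazuh RPM deployment variables, assuming the Wazuh yum repository is already configured on the agent host:

WAZUH_MANAGER='bastion.fqdn' WAZUH_REGISTRATION_SERVER='bastion.fqdn' yum -y install wazuh-agent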

4. Setup OpenSearch Dashboard


systemctl enable --now chronyd firewalld

firewall-cmd --add-port=80/tcp --add-port=443/tcp --permanent

firewall-cmd --reload

setenforce 0

timedatectl set-ntp true

vi /etc/selinux/config

# This file controls the state of SELinux on the system.


# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

yum install libcap

rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH

echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo

yum -y install wazuh-dashboard

vi /etc/wazuh-dashboard/opensearch_dashboards.yml

server.host: 0.0.0.0
server.port: 443
opensearch.hosts: ["https://10.19.2.103:9200", "https://10.19.2.104:9200"]
opensearch.ssl.verificationMode: certificate
#opensearch.username: admin
#opensearch.password: admin
opensearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opensearch_security.multitenancy.enabled: false
opensearch_security.readonly_mode.roles: ["kibana_read_only"]
server.ssl.enabled: true
server.ssl.key: "/etc/wazuh-dashboard/certs/dashboard-key.pem"
server.ssl.certificate: "/etc/wazuh-dashboard/certs/dashboard.pem"
opensearch.ssl.certificateAuthorities: ["/etc/wazuh-dashboard/certs/root-ca.pem"]
uiSettings.overrides.defaultRoute: /app/wazuh

💡 By default, opensearch.username and opensearch.password are dynamically allocated from the keystore, so you can
leave them commented out. Replace opensearch.hosts with the IP addresses and ports of your indexer nodes. You can also
set the username and password statically in this file, but that is not recommended

export NODE_NAME=dashboard-hostname

mkdir /etc/wazuh-dashboard/certs

cp -a /root/wazuh-certificates/$NODE_NAME.pem /root/wazuh-certificates/$NODE_NAME-key.pem /root/wazuh-certificates/root-ca.pem /etc/wazuh-dashboard/certs/

systemctl daemon-reload

systemctl enable --now wazuh-dashboard

vi /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml

hosts:
- default:
url: https://10.19.2.100
port: 55000
username: wazuh-wui
password: wazuh-wui
run_as: false

💡 Find the lines above and replace url with the IP address of your Wazuh manager master

5. Securing All User Password


5.1. Securing Indexer
/usr/share/wazuh-indexer/plugins/opensearch-security/tools/wazuh-passwords-tool.sh --change-all

💡 Run the command above on one of the indexer nodes. It auto-generates new passwords for all the users of the
several Wazuh services and components. Remember to store the passwords in a safe
location; do not forget them!

💡 Example Output:

26/01/2023 23:16:41 INFO: Wazuh API admin credentials not provided, Wazuh API passwords not changed.
26/01/2023 23:17:06 INFO: The password for user admin is XyazwW9Pm*a38upU63MXU5zry5?+ocgc
26/01/2023 23:17:06 INFO: The password for user kibanaserver is TkjKeq0AcZKt1PES.Da3n9l6uDWDF2QQ
26/01/2023 23:17:06 INFO: The password for user kibanaro is oJe7sXjEXKy7JTN.q4PVa8FvaJr26Dom
26/01/2023 23:17:06 INFO: The password for user logstash is iWHMUei96anP0ZI7?adF?FyDWZ74hlDw
26/01/2023 23:17:06 INFO: The password for user readall is d?0TJ0+DOY4qp2jpDJ9AGXwHg.CorbVC
26/01/2023 23:17:06 INFO: The password for user snapshotrestore is blzS+*kzCffxpNjuwhKgWwcW4tnu96Eq
26/01/2023 23:17:06 WARNING: Wazuh indexer passwords changed. Remember to update the password in the
Wazuh dashboard and Filebeat nodes if necessary, and restart the services.

for ip in all_indexer_ip ; do ssh root@$ip -C systemctl restart wazuh-indexer ; done

💡 We use a for loop to restart the wazuh-indexer service across the indexer cluster after changing the passwords

Example:
for ip in 10.19.2.10{3..4} ; do ssh root@$ip -C systemctl restart wazuh-indexer ; done

5.2. Securing Manager


curl -sO https://packages.wazuh.com/4.3/wazuh-passwords-tool.sh

bash wazuh-passwords-tool.sh --change-all --admin-user wazuh --admin-password wazuh

💡 Run the command above on the manager master node. It auto-generates new passwords for all the Wazuh API users.
Remember to store the passwords in a safe location; do not forget them!

💡 Example Output:

26/01/2023 23:18:25 INFO: The password for Wazuh API user wazuh is vZIZKXsed06QO?x.B8fG1YYvIR.7pi2
26/01/2023 23:18:26 INFO: The password for Wazuh API user wazuh-wui is
zRvqrLt1QDH0B4+X6MoaUWiXHBGpDLR

5.3. Changing The Configuration In Dashboard & Manager


5.3.1 In All Manager Nodes

for ip in all_manager_ip ; do ssh root@$ip -C 'echo <admin_password> | filebeat keystore add password --stdin --force' ; done

for ip in all_manager_ip ; do ssh root@$ip -C systemctl restart filebeat ; done

💡 We use a for loop to update the Filebeat keystore password and restart the filebeat service on the manager nodes

Example:

for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C 'echo XyazwW9Pm*a38upU63MXU5zry5?+ocgc | filebeat keystore add password --stdin --force' ; done

for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl restart filebeat ; done

5.3.2 In Dashboard Node

echo <kibanaserver-password> | /usr/share/wazuh-dashboard/bin/opensearch-dashboards-keystore --allow-root add -f --stdin opensearch.password

💡 Example:
echo TkjKeq0AcZKt1PES.Da3n9l6uDWDF2QQ | /usr/share/wazuh-dashboard/bin/opensearch-dashboards-keystore --allow-root add -f --stdin opensearch.password

vi /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml

hosts:
- default:
url: https://10.19.2.100
port: 55000
username: wazuh-wui
password: AurnkgmG1nW4cplVmBTugmyP.?nvrESV
run_as: false

💡 Make sure /etc/wazuh-dashboard/opensearch_dashboards.yml does not hold a wrong credential. If the default value
#opensearch.password: admin is still commented out, or there is no opensearch.password: line at all, you don't have to edit anything,
because the configuration uses the dynamically allocated keystore. Alternatively, you can set the password
statically, like: opensearch.password: XyazwW9Pm*a38upU63MXU5zry5?+ocgc

systemctl restart wazuh-dashboard

6. Common Troubleshooting

# Check that the alert indices exist on the indexer
curl https://<WAZUH_INDEXER_IP>:9200/_cat/indices/wazuh-alerts-* -u <wazuh_indexer_user>:<wazuh_indexer_password> -k

# Authenticate against the Wazuh server RESTful API
curl -k -X GET "https://<api_url>:55000/" -H "Authorization: Bearer $(curl -u <api_user>:<api_password> -k -X GET 'https://<api_url>:55000/security/user/authenticate?raw=true')"

# Verify the Filebeat connection from a manager to the indexer
filebeat test output

# Inspect the .kibana index field mappings used by the dashboard
curl https://<WAZUH_INDEXER_IP>:9200/.kibana*/_mapping/field/type?pretty -u <wazuh_indexer_user>:<wazuh_indexer_password> -k

# Look for errors and warnings in the component logs
grep -i -E "error|warn" /var/log/wazuh-indexer/wazuh-cluster.log

grep -i -E "error|warn" /var/log/filebeat/filebeat

grep -i -E "error|warn" /var/ossec/logs/ossec.log

journalctl -u wazuh-dashboard

grep -i -E "error|warn" /usr/share/wazuh-dashboard/data/wazuh/logs/wazuhapp.log
