Deploying Wazuh With High Availability in Production
A. Pre-requisites
1. Ports
Components         Port        Protocol   Description
Wazuh agent        1514        TCP/UDP    Agent connection service
Wazuh manager      1515        TCP        Agent enrollment service
Wazuh manager      1516        TCP        Wazuh cluster daemon
Wazuh manager      55000       TCP        Wazuh server RESTful API
Wazuh indexer      9200        TCP        Wazuh indexer RESTful API
Wazuh indexer      9300-9400   TCP        Wazuh indexer cluster communication
Wazuh dashboard    443         TCP        Wazuh web user interface
💡 As a best practice for scalability, this guide recommends using LVM (Logical Volume Manager) to manage and grow
disks when a disk fills up or when no log retention is set. This guide also recommends separating the Wazuh logs onto
their own partition for better scalability, for example by moving /var/ossec/logs to a new mount point on a dedicated
LVM volume.
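A minimal sketch of that migration, assuming a spare disk at /dev/sdb and XFS (the device name, volume names, and filesystem are assumptions; adjust to your environment):

# Create the LVM volume and move the Wazuh logs onto it
pvcreate /dev/sdb
vgcreate vg_wazuh /dev/sdb
lvcreate -n lv_ossec_logs -l 100%FREE vg_wazuh
mkfs.xfs /dev/vg_wazuh/lv_ossec_logs
systemctl stop wazuh-manager
mount /dev/vg_wazuh/lv_ossec_logs /mnt
rsync -a /var/ossec/logs/ /mnt/
umount /mnt
mount /dev/vg_wazuh/lv_ossec_logs /var/ossec/logs
echo '/dev/vg_wazuh/lv_ossec_logs /var/ossec/logs xfs defaults 0 0' >> /etc/fstab
systemctl start wazuh-manager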
💡 These requirements are only the minimum for a clustered/multi-node Wazuh deployment in production use. To reach
the best performance you have to define specific configuration and manual tuning per VM.
wazuh-archives
wazuh-monitoring
wazuh-statistics
Analysis engine
Filebeat
Platform management
Developer tools
Command execution
System inventory
Malware detection
Active response
💡 In production, this guide does not recommend running systemctl restart <service_name> or service
<service_name> restart on critical services, because a full restart causes downtime and hurts overall performance.
Example:
1. HAProxy -> haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
2. Bind9/Named -> rndc reload
3. Network and hostname -> nmcli con reload
4. Nginx -> nginx -s reload
5. Httpd/Apache2 -> apachectl -k graceful, httpd -k graceful, apache2ctl -k graceful
If you don't know how to gracefully restart a critical component of your production environment (server farm), please only use systemctl reload <service_name>
💡 If your production environment has restricted network access that requires tunneling and a proxy whitelist to reach the
internet, you have to set http_proxy= , https_proxy= , and no_proxy in /etc/environment. If your server does not have
nmap, telnet, or wget, you can alternatively use tracepath (if nmap is missing), curl -kv telnet://<host>:<port> (if telnet/net-tools
is missing), and curl -LO (if wget is missing). tracepath and curl will help you when troubleshooting errors and issues
based on the log and response given.
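A sketch of those settings and fallbacks (the proxy endpoint is a placeholder, not a value from this deployment):

# /etc/environment -- replace the proxy URL with your own
http_proxy=http://proxy.example.local:3128
https_proxy=http://proxy.example.local:3128
no_proxy=localhost,127.0.0.1,10.19.2.0/24

# Connectivity checks without nmap/telnet/wget
tracepath 10.19.2.103
curl -kv telnet://10.19.2.103:9200
curl -LO https://packages.wazuh.com/key/GPG-KEY-WAZUH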
B. Step By Step
1. Setup Bastion Host
yum install chrony bind bind-utils net-tools haproxy dhcp* rsync bash-completion
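The permanent firewall rules are not shown in the original; a sketch assuming the bastion serves DNS, DHCP, and the HAProxy frontends on 1514/1515 (the reload below then applies them):

firewall-cmd --permanent --add-service=dns --add-service=dhcp
firewall-cmd --permanent --add-port=1514/tcp --add-port=1515/tcp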
firewall-cmd --reload
setenforce 0
vi /etc/selinux/config
vi /etc/named.conf
options {
listen-on port 53 { any; };
listen-on-v6 port 53 { any; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
secroots-file "/var/named/data/named.secroots";
recursing-file "/var/named/data/named.recursing";
allow-query { any; };
dnssec-enable no;
dnssec-validation no;
managed-keys-directory "/var/named/dynamic";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
/* https://fedoraproject.org/wiki/Changes/CryptoPolicy */
include "/etc/crypto-policies/back-ends/bind.config";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
zone "fqdn" IN {
type master;
file "forward.dns";
allow-update { none; };
};
zone "2.19.10.in-addr.arpa" IN {
type master;
file "reverse.dns";
allow-update { none; };
};
vi /var/named/forward.dns
vi /var/named/reverse.dns
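The zone file contents are not shown; a minimal sketch of forward.dns using the addresses from the DHCP reservations below (the serial and timers are placeholders, and reverse.dns mirrors it with PTR records):

$TTL 86400
@           IN SOA  bastion.fqdn. root.fqdn. (
                    2023012401 ; serial
                    3600       ; refresh
                    1800       ; retry
                    604800     ; expire
                    86400 )    ; minimum
@           IN NS   bastion.fqdn.
bastion     IN A    10.19.2.98
dashboard   IN A    10.19.2.99
master      IN A    10.19.2.100
worker0     IN A    10.19.2.101
worker1     IN A    10.19.2.102
indexer0    IN A    10.19.2.103
indexer1    IN A    10.19.2.104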
rndc reload
vi /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local2
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
defaults
mode http
log global
# option httplog
option tcplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen api-master-1515
bind *:1515
mode tcp
balance source
server master master.fqdn:1515 check inter 1s
listen api-worker-1514
bind *:1514
mode tcp
balance source
server worker0 worker0.fqdn:1514 check inter 1s
server worker1 worker1.fqdn:1514 check inter 1s
💡 This HAProxy configuration uses the source balancing algorithm; you can change it to round robin. We will load
balance at OSI layer 4 (TCP). If you want to load balance at OSI layer 7 (HTTP) instead, just set mode http, but that is not
the best practice here. Please ensure that ossec.conf or agent.conf (centralized configuration) disables use_source_ip for
the authd registration process.
vi /etc/dhcp/dhcpd.conf
host bastion {
option host-name "bastion.fqdn";
hardware ethernet 52:54:00:0a:80:7d;
fixed-address 10.19.2.98;
}
host dashboard {
option host-name "dashboard.fqdn";
hardware ethernet 52:54:00:5a:97:c3;
fixed-address 10.19.2.99;
}
host master {
option host-name "master.fqdn";
hardware ethernet 52:54:00:6f:6b:b3;
fixed-address 10.19.2.100;
}
host worker0 {
option host-name "worker0.fqdn";
hardware ethernet 52:54:00:bd:b1:fd;
fixed-address 10.19.2.101;
}
host worker1 {
option host-name "worker1.fqdn";
hardware ethernet 52:54:00:bf:4d:e2;
fixed-address 10.19.2.102;
}
host indexer0 {
option host-name "indexer0.fqdn";
hardware ethernet 52:54:00:11:46:ff;
fixed-address 10.19.2.103;
}
host indexer1 {
option host-name "indexer1.fqdn";
hardware ethernet 52:54:00:10:cf:24;
fixed-address 10.19.2.104;
}
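These host reservations need an enclosing subnet declaration, which is not shown; a minimal sketch assuming a /24 with the bastion as DNS server and a hypothetical gateway and dynamic range:

subnet 10.19.2.0 netmask 255.255.255.0 {
    option routers 10.19.2.1;               # hypothetical gateway
    option domain-name-servers 10.19.2.98;  # the bastion's BIND
    option domain-name "fqdn";
    range 10.19.2.150 10.19.2.200;          # hypothetical dynamic pool
}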
💡 Please replace the example configuration with your own IP addresses, MAC addresses, subnet, and gateway
vi config.yml
nodes:
  # Wazuh indexer nodes
  indexer:
    - name: indexer0
      ip: indexer-node-ip
    - name: indexer1
      ip: indexer-node-ip
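Later steps also move per-node certificates on the manager and dashboard hosts, so config.yml presumably declares those nodes too; a sketch following the wazuh-certs-tool format (the placeholder IPs are assumptions):

  # Wazuh manager nodes
  server:
    - name: master
      ip: master-node-ip
      node_type: master
    - name: worker0
      ip: worker0-node-ip
      node_type: worker
    - name: worker1
      ip: worker1-node-ip
      node_type: worker
  # Wazuh dashboard node
  dashboard:
    - name: dashboard
      ip: dashboard-node-ip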
bash ./wazuh-certs-tool.sh -A
💡 This is a for loop to automate copying the certificates from one server to the others. You can use scp instead of
rsync, but this guide uses rsync because rsync supports partial transfers, and you can resume the upload or download if
something interrupts the process in the middle. If you want to use scp, the sample command is: scp -r
folder_name user@ip:target_destination
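The loop itself is not shown; a sketch assuming the generated certificates were archived as wazuh-certificates.tar in the current directory and the targets are the hosts reserved above:

for ip in 10.19.2.{99..104} ; do rsync -avP ./wazuh-certificates.tar root@$ip:/tmp/ ; done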
firewall-cmd --reload
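The rules this reload applies are not shown; for the indexer nodes, presumably the indexer REST and cluster-communication ports:

firewall-cmd --permanent --add-port=9200/tcp --add-port=9300-9400/tcp
firewall-cmd --reload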
setenforce 0
vi /etc/selinux/config
echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
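The package installation step appears elided here; presumably:

yum -y install wazuh-indexer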
vi /etc/wazuh-indexer/opensearch.yml
network.host: "0.0.0.0"
node.name: "indexer0"
cluster.initial_master_nodes:
- "indexer0"
- "indexer1"
cluster.name: "wazuh-cluster"
discovery.seed_hosts:
- "10.19.2.103"
- "10.19.2.104"
node.max_local_storage_nodes: "3"
path.data: /var/lib/wazuh-indexer
path.logs: /var/log/wazuh-indexer
plugins.security.authcz.admin_dn:
- "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.nodes_dn:
- "CN=indexer0,OU=Wazuh,O=Wazuh,L=California,C=US"
- "CN=indexer1,OU=Wazuh,O=Wazuh,L=California,C=US"
plugins.security.restapi.roles_enabled:
- "all_access"
- "security_rest_api_access"
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*"]
💡 Please replace all the configuration above with your cluster details. And do not forget to replace
node.name: "indexer0" with your current VM hostname
mkdir -p /etc/wazuh-indexer/certs
export NODE_NAME=indexer-node-name
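The step that places the generated certificates into /etc/wazuh-indexer/certs is not shown; a sketch assuming the bundle was copied to /tmp/wazuh-certificates.tar by the rsync loop earlier:

tar -xf /tmp/wazuh-certificates.tar -C /etc/wazuh-indexer/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./admin.pem ./admin-key.pem ./root-ca.pem
chmod 500 /etc/wazuh-indexer/certs
chmod 400 /etc/wazuh-indexer/certs/*
chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs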
mv -n /etc/wazuh-indexer/certs/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
mv -n /etc/wazuh-indexer/certs/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
💡 Repeat all the commands above on the other indexer node VMs before we apply the first initialization
💡 We will use a for loop to enable and start the wazuh-indexer service across our indexer cluster
Example:
for ip in 10.19.2.10{3..4} ; do ssh root@$ip -C systemctl daemon-reload ; done
for ip in 10.19.2.10{3..4} ; do ssh root@$ip -C systemctl enable --now wazuh-indexer ; done
/usr/share/wazuh-indexer/bin/indexer-security-init.sh
firewall-cmd --reload
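Again, the rules this reload applies are not shown; for the manager nodes, presumably the agent connection, enrollment, cluster, and API ports:

firewall-cmd --permanent --add-port=1514/tcp --add-port=1515/tcp --add-port=1516/tcp --add-port=55000/tcp
firewall-cmd --reload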
setenforce 0
vi /etc/selinux/config
echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
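The installation step appears elided here; presumably the manager plus Filebeat, which ships manager events to the indexer:

yum -y install wazuh-manager filebeat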
Example:
for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl daemon-reload ; done
for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl enable --now wazuh-manager ; done
vi /etc/filebeat/filebeat.yml
💡 For example, if we have a clustered Wazuh indexer you can add more indexer VMs like this:
hosts: ["10.19.2.103:9200","10.19.2.104:9200"]
export NODE_NAME=manager-node-name
mkdir /etc/filebeat/certs
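As with the indexer, the extraction step is not shown; a sketch assuming the same /tmp/wazuh-certificates.tar bundle:

tar -xf /tmp/wazuh-certificates.tar -C /etc/filebeat/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
chmod 500 /etc/filebeat/certs
chmod 400 /etc/filebeat/certs/*
chown -R root:root /etc/filebeat/certs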
mv -n /etc/filebeat/certs/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
mv -n /etc/filebeat/certs/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
💡 Repeat all the commands above on the other manager node VMs before we apply the first initialization
💡 We will use a for loop to enable and start the filebeat service across our manager cluster
Example:
for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl daemon-reload ; done
for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl enable --now filebeat ; done
vi /var/ossec/etc/ossec.conf
<cluster>
  <name>wazuh</name>
  <node_name>master</node_name>
  <node_type>master</node_type>
  <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>wazuh-master-address</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
💡 Find <cluster> in /var/ossec/etc/ossec.conf and edit it with the details of your cluster.
Please fill in <key></key> with the hex stored in cluster-key.id . Please also replace <node_name> with your master
hostname
💡 As mentioned earlier in this tutorial, please ensure use_source_ip in the <auth> section is disabled like this:
<use_source_ip>no</use_source_ip>. This prevents issues in High Availability mode, because agents enroll through the
load balancer and authd would otherwise register them with the HAProxy source address.
vi /var/ossec/etc/ossec.conf
<cluster>
  <name>wazuh</name>
  <node_name>worker0</node_name>
  <node_type>worker</node_type>
  <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>wazuh-master-address</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
💡 Find <cluster> in /var/ossec/etc/ossec.conf and edit it with the details of your cluster.
Please fill in <key></key> with the hex stored in cluster-key.id . Please also replace <node_name> with your worker
hostname
💡 As mentioned earlier in this tutorial, please ensure use_source_ip in the <auth> section is disabled like this:
<use_source_ip>no</use_source_ip>. This is to prevent any issues in High Availability mode
💡 If the deployment of the Wazuh manager nodes was successful, you can verify it as in the following example:
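The example itself appears to be missing (likely an image); one way to verify from the master node is to list the connected cluster nodes:

/var/ossec/bin/cluster_control -l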
firewall-cmd --reload
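As before, the rules this reload applies are not shown; for the dashboard node, presumably only HTTPS:

firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload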
setenforce 0
vi /etc/selinux/config
echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/4.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
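The installation step appears elided here; presumably:

yum -y install wazuh-dashboard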
vi /etc/wazuh-dashboard/opensearch_dashboards.yml
server.host: 0.0.0.0
server.port: 443
opensearch.hosts: ["https://10.19.2.103:9200", "https://10.19.2.104:9200"]
opensearch.ssl.verificationMode: certificate
#opensearch.username: admin
#opensearch.password: admin
opensearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opensearch_security.multitenancy.enabled: false
opensearch_security.readonly_mode.roles: ["kibana_read_only"]
server.ssl.enabled: true
server.ssl.key: "/etc/wazuh-dashboard/certs/dashboard-key.pem"
server.ssl.certificate: "/etc/wazuh-dashboard/certs/dashboard.pem"
opensearch.ssl.certificateAuthorities: ["/etc/wazuh-dashboard/certs/root-ca.pem"]
uiSettings.overrides.defaultRoute: /app/wazuh
export NODE_NAME=dashboard-hostname
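The certificate placement for the dashboard is not shown after this export; a sketch assuming the same /tmp/wazuh-certificates.tar bundle and the file names referenced in opensearch_dashboards.yml above:

mkdir -p /etc/wazuh-dashboard/certs
tar -xf /tmp/wazuh-certificates.tar -C /etc/wazuh-dashboard/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
mv -n /etc/wazuh-dashboard/certs/$NODE_NAME.pem /etc/wazuh-dashboard/certs/dashboard.pem
mv -n /etc/wazuh-dashboard/certs/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
chmod 500 /etc/wazuh-dashboard/certs
chmod 400 /etc/wazuh-dashboard/certs/*
chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs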
systemctl daemon-reload
vi /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml
hosts:
  - default:
      url: https://10.19.2.100
      port: 55000
      username: wazuh-wui
      password: wazuh-wui
      run_as: false
💡 Find the lines above and replace url with your Wazuh manager master IP address
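The command the next callout refers to does not appear above; from the Wazuh documentation, the password tool bundled with the indexer is:

bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/wazuh-passwords-tool.sh --change-all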
💡 Please run the command above on an indexer node, because this command auto-generates all of the users and
passwords for the several services and components of Wazuh. Please remember and store the passwords in a safe
location, and do not lose them!
💡 Example Output:
26/01/2023 23:16:41 INFO: Wazuh API admin credentials not provided, Wazuh API passwords not changed.
26/01/2023 23:17:06 INFO: The password for user admin is XyazwW9Pm*a38upU63MXU5zry5?+ocgc
26/01/2023 23:17:06 INFO: The password for user kibanaserver is TkjKeq0AcZKt1PES.Da3n9l6uDWDF2QQ
26/01/2023 23:17:06 INFO: The password for user kibanaro is oJe7sXjEXKy7JTN.q4PVa8FvaJr26Dom
26/01/2023 23:17:06 INFO: The password for user logstash is iWHMUei96anP0ZI7?adF?FyDWZ74hlDw
26/01/2023 23:17:06 INFO: The password for user readall is d?0TJ0+DOY4qp2jpDJ9AGXwHg.CorbVC
26/01/2023 23:17:06 INFO: The password for user snapshotrestore is blzS+*kzCffxpNjuwhKgWwcW4tnu96Eq
26/01/2023 23:17:06 WARNING: Wazuh indexer passwords changed. Remember to update the password in the
Wazuh dashboard and Filebeat nodes if necessary, and restart the services.
💡 We will use a for loop to restart the wazuh-indexer service across our indexer cluster after changing the passwords
Example:
for ip in 10.19.2.10{3..4} ; do ssh root@$ip -C systemctl restart wazuh-indexer ; done
💡 Example Output:
26/01/2023 23:18:25 INFO: The password for Wazuh API user wazuh is vZIZKXsed06QO?x.B8fG1YYvIR.7pi2
26/01/2023 23:18:26 INFO: The password for Wazuh API user wazuh-wui is zRvqrLt1QDH0B4+X6MoaUWiXHBGpDLR
for ip in all_ip_manager ; do ssh root@$ip -C 'echo <admin_password> | filebeat keystore add password --stdin --force' ; done
💡 We will use a for loop to change the keystore password and restart the filebeat service on the manager nodes (in this deployment Filebeat runs on the Wazuh manager nodes, not on the indexers)
Example:
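A sketch of that loop and the restart, following the earlier pattern and assuming the manager addresses 10.19.2.100-102:

for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C 'echo <admin_password> | filebeat keystore add password --stdin --force' ; done
for ip in 10.19.2.10{0..2} ; do ssh root@$ip -C systemctl restart filebeat ; done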
💡 Example (storing the new admin password in the Wazuh dashboard keystore):
echo TkjKeq0AcZKt1PES.Da3n9l6uDWDF2QQ | /usr/share/wazuh-dashboard/bin/opensearch-dashboards-keystore --allow-root add -f --stdin opensearch.password
vi /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml
hosts:
  - default:
      url: https://10.19.2.100
      port: 55000
      username: wazuh-wui
      password: AurnkgmG1nW4cplVmBTugmyP.?nvrESV
      run_as: false
💡 The opensearch.username and opensearch.password lines in opensearch_dashboards.yml stay commented out
because this configuration uses the dynamically allocated keystore. Alternatively, you can set the password
statically, like: opensearch.password: XyazwW9Pm*a38upU63MXU5zry5?+ocgc
6. Common Troubleshooting
journalctl -u wazuh-dashboard
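A few more checks that usually help when tracing issues in this stack (the addresses and credentials are the ones used throughout this guide):

# Manager and cluster logs on a manager node
tail -f /var/ossec/logs/ossec.log
grep -i error /var/ossec/logs/cluster.log
# Indexer cluster health (expects the admin password generated earlier)
curl -k -u admin:<admin_password> 'https://10.19.2.103:9200/_cluster/health?pretty'
# Verify Filebeat on a manager node can reach the indexer
filebeat test output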