FortiSIEM-6.3.3-ESX Installation Guide
FortiSIEM 6.3.3
FORTINET DOCUMENT LIBRARY
https://docs.fortinet.com
FORTINET BLOG
https://blog.fortinet.com
FORTIGUARD CENTER
https://www.fortiguard.com
FEEDBACK
Email: [email protected]
10/04/2023
TABLE OF CONTENTS
Change Log
Fresh Installation
Pre-Installation Checklist
All-in-one Installation
Set Network Time Protocol for ESX
Import FortiSIEM into ESX
Edit FortiSIEM Hardware Settings
Start FortiSIEM from the VMware Console
Configure FortiSIEM via GUI
Upload the FortiSIEM License
Choose an Event Database
Cluster Installation
Install Supervisor
Install Workers
Register Workers
Install Collectors
Register Collectors
Installing on ESX 6.5
Importing a 6.5 ESX Image
Resolving Disk Save Error
Adding a 5th Disk for /data
Install Log
Change Log
03/29/2019 Revision 1: Updated the instructions for registering the Collector on the Supervisor node.
04/21/2021 Revision 10: Added Installing on ESX 6.5 content to 6.2.0. Minor update to Pre-Installation Checklist to 6.1.1 and 6.2.0.
04/22/2021 Revision 11: Added Installing on ESX 6.5 content to 6.1.0. Minor update to Pre-Installation Checklist to 6.1.0.
04/28/2021 Revision 12: Updated Pre-Installation Checklist for 6.1.0, 6.1.1 and 6.2.0.
09/13/2021 Updated Importing a 6.5 ESX Image section for 6.3.x guides.
Fresh Installation
- Pre-Installation Checklist
- All-in-one Installation
- Cluster Installation
- Installing on ESX 6.5
Pre-Installation Checklist
- Storage type
  - Online – Local or NFS or Elasticsearch
Note: Compared to FortiSIEM 5.x, you need one more disk (OPT), which provides a cache for FortiSIEM.
For OPT – 100GB: the 100GB disk for /opt is a single disk that will be split into two partitions, /opt and swap. The partitions will be created and managed by FortiSIEM when configFSM.sh runs.
Before proceeding to FortiSIEM deployment, you must configure the external storage.
- For NFS deployment, see the FortiSIEM - NFS Storage Guide here.
- For Elasticsearch deployment, see the FortiSIEM - Elasticsearch Storage Guide here.
All-in-one Installation
This is the simplest installation with a single Virtual Appliance. If storage is external, then you must configure external
storage before proceeding with installation.
- Set Network Time Protocol for ESX
- Import FortiSIEM into ESX
- Edit FortiSIEM Hardware Settings
- Start FortiSIEM from the VMware Console
- Configure FortiSIEM via GUI
- Upload the FortiSIEM License
- Choose an Event Database
Set Network Time Protocol for ESX
FortiSIEM needs accurate time. To achieve this, you must enable NTP on the ESX host on which the FortiSIEM Virtual Appliance is going to be installed.
1. Log in to your vCenter and select your ESX host.
2. Click the Configure tab.
3. Under System, select Time Configuration.
4. Click Edit.
5. Enter the time zone properties.
Import FortiSIEM into ESX
1. Go to the Fortinet Support website https://support.fortinet.com to download the ESX package FSM_FULL_ALL_ESX_6.3.3_Build0348.zip. See Downloading FortiSIEM Products for more information on downloading products from the support website.
2. Uncompress the packages for Super/Worker and Collector (using the 7-Zip tool) to the location where you want to install the image. Identify the .ova file.
3. Right-click on your own host and choose Deploy OVF Template.
The Deploy OVF Template dialog box appears. (A command-line alternative using ovftool is sketched at the end of this procedure.)
4. In 1 Select an OVF template, select Local file and navigate to the .ova file. Click Next. If you are installing from a URL, select URL and paste the OVA URL into the field beneath URL.
5. In 2 Select a Name and Folder, make any needed edits to the Virtual machine name field. Click Next.
6. In 3 Select a compute resource, select any needed resource from the list. Click Next.
10. In 7 Select networks, select the source and destination networks from the drop down lists. Click Next.
13. Right-click your installed OVA (example: FortiSIEM-611.0348.ova) and select Edit Settings > VM Options > General Options. Set Guest OS and Guest OS Version (Linux and 64-bit).
14. Open the Virtual Hardware tab. Set CPU to 16 and Memory to 64GB.
15. Click Add New Device and create the additional hard disks described below.
Add additional disks to the virtual machine definition. These will be used for the additional partitions in the virtual appliance. An All-in-one deployment requires the following additional partitions.
- For higher EPS deployments, see the FortiSIEM Sizing Guide for additional information.
- NFS or Elasticsearch event DB storage is mandatory for multi-node cluster deployments.
After you click OK, a Datastore Recommendations dialog box opens. Click Apply.
16. Do not turn off or reboot the system during deployment, which may take 7 to 10 minutes to complete. When the
deployment completes, click Close.
The recommended CPU, memory, and hard disk settings for an All-in-one node are:
- CPU = 16
- Memory = 64 GB
- Hard disks:
  - OS – 25GB
  - OPT – 100GB
  - CMDB – 60GB
  - SVN – 60GB
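As an alternative to the Deploy OVF Template wizard used in the steps above, the OVA can also be deployed from a command line with VMware's ovftool. The following is only a sketch: the VM name, datastore, network, vCenter address, credentials, and inventory path are placeholders that must be replaced with values from your environment.
# ovftool --acceptAllEulas --name=FortiSIEM-633 --datastore=datastore1 \
    --network="VM Network" --diskMode=thin FortiSIEM-VA-6.3.3.0348.ova \
    'vi://administrator%40vsphere.local@vcenter.example.com/Datacenter/host/Cluster1'
After the import completes, continue with the hardware settings edits described above (CPU, memory, and the additional disks).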
Start FortiSIEM from the VMware Console
1. In the VMware vSphere client, select the Supervisor, Worker, or Collector virtual appliance.
2. Right-click to open the options menu and select Power > Power On.
3. Open the Summary tab for the virtual appliance and select Launch Web Console.
Network Failure Message: When the console starts up for the first time you may see a Network eth0 Failed
message, but this is expected behavior.
4. Select Web Console in the Launch Console dialog box.
5. When the command prompt window opens, log in with the default login credentials – user: root and Password:
ProspectHills.
6. You will be required to change the password. Remember this password for future use.
At this point, you can continue configuring FortiSIEM by using the GUI.
Configure FortiSIEM via GUI
6. Select the Country and City for your timezone, and press Next.
Regardless of whether you select Supervisor, Worker, or Collector, you will see the
same series of screens.
8. If you want to enable FIPS, then choose 2. Otherwise, choose 1. You have the option of enabling FIPS (option 3) or
disabling FIPS (option 4) later.
Note: After Installation, a 5th option to change your network configuration (5 change_network_config) is
available. This allows you to change your network settings and/or host name.
9. Determine whether your network supports IPv4-only, IPv6-only, or both IPv4 and IPv6 (Dual Stack). Choose 1 for
IPv4-only, choose 2 for IPv6-only, or choose 3 for both IPv4 and IPv6.
10. If you choose 1 (IPv4) or choose 3 (Both IPv4 and IPv6), and press Next, then you will move to step 11. If you
choose 2 (IPv6), and press Next, then skip to step 12.
11. Configure the IPv4 network by entering the required fields, then press Next.
12. If you chose 1 in step 9, then skip to step 13. If you chose 2 or 3 in step 9, then configure the IPv6 network by entering the required fields, then press Next.
Note: If you chose option 3 in step 9 for both IPv4 and IPv6, then even if you configure two DNS servers each for IPv4 and IPv6, the system will only use the first DNS server from the IPv4 configuration and the first DNS server from the IPv6 configuration.
Note: In many dual stack networks, IPv4 DNS server(s) can resolve names to both IPv4 and IPv6 addresses. In such environments, if you do not have an IPv6 DNS server, then you can use public IPv6 DNS servers or IPv4-mapped IPv6 addresses.
13. Configure the Hostname for the Supervisor, and press Next.
15. The final configuration confirmation is displayed. Verify that the parameters are correct. If they are not, then press
Back to return to previous dialog boxes to correct any errors. If everything is OK, then press Run.
16. It will take some time for this process to finish. When it is done, proceed to Upload the FortiSIEM License. If the
VM fails, you can inspect the ansible.log file located at /usr/local/fresh-install/logs to try and
identify the problem.
Upload the FortiSIEM License
Before proceeding, make sure that you have obtained a valid FortiSIEM license from FortiCare.
For more information, see the Licensing Guide.
Choose an Event Database
For a fresh installation, you will be taken to the Event Database Storage page. You will be asked to choose between the Local Disk, NFS, or Elasticsearch options. For more details, see Configuring Storage.
After the License has been uploaded, and the Event Database Storage setup is configured, FortiSIEM installation is
complete. If the installation is successful, the VM will reboot automatically. Otherwise, the VM will stop at the failed task.
You can inspect the ansible.log file located at /usr/local/fresh-install/logs if you encounter any issues
during FortiSIEM installation.
After installation completes, ensure that the phMonitor is up and running, for example:
# phstatus
Cluster Installation
For larger installations, you can choose Worker nodes, Collector nodes, and external storage (NFS or Elasticsearch).
- Install Supervisor
- Install Workers
- Register Workers
- Install Collectors
- Register Collectors
Install Supervisor
Follow the same steps as in the All-in-one Installation, except that a multi-node cluster requires external event storage (NFS or Elasticsearch).
Install Workers
Once the Supervisor is installed, follow the same steps as in the All-in-one Installation to install a Worker, except that you only need to add the OS and OPT disks. The recommended CPU and memory settings for a Worker node, and the required hard disk settings, are:
- CPU = 8
- Memory = 24 GB
- Two hard disks:
  - OS – 25GB
  - OPT – 100GB
For OPT – 100GB: the 100GB disk for /opt is a single disk that will be split into two partitions, /opt and swap. The partitions will be created and managed by FortiSIEM when configFSM.sh runs.
Register Workers
Once the Worker is up and running, add the Worker to the Supervisor node.
1. Go to ADMIN > License > Nodes.
2. Select Worker from the drop-down list and enter the Worker's IP address and host name. Click Add.
3. See ADMIN > Health > Cloud Health to ensure that the Workers are up, healthy, and properly added to the
system.
Install Collectors
Once the Supervisor and Workers are installed, follow the same steps as in the All-in-one Installation to install a Collector, except that in Edit FortiSIEM Hardware Settings you only need to add the OS and OPT disks. The recommended CPU and memory settings for a Collector node, and the required hard disk settings, are:
- CPU = 4
- Memory = 8GB
- Two hard disks:
  - OS – 25GB
  - OPT – 100GB
For OPT – 100GB: the 100GB disk for /opt is a single disk that will be split into two partitions, /opt and swap. The partitions will be created and managed by FortiSIEM when configFSM.sh runs.
Register Collectors
Enterprise Deployments
Using a DNS name for the Event Worker is recommended; in the event of an addressing change, it becomes a matter of updating the DNS rather than modifying the Event Worker IP addresses in FortiSIEM.
b. Click OK.
3. Go to ADMIN > Setup > Collectors and add a Collector by entering:
a. Name – Collector Name
b. Guaranteed EPS – this is the EPS that the Collector will always be able to send. It could send more if there is excess EPS available.
c. Start Time and End Time – set to Unlimited.
4. SSH to the Collector and run the following script to register the Collector:
# /opt/phoenix/bin/phProvisionCollector --add <user> '<password>' <Super IP or
Host> <Organization> <CollectorName>
The password should be enclosed in single quotes to ensure that any non-alphanumeric characters are escaped.
a. Set user and password using the admin user name and password for the Supervisor.
b. Set Super IP or Host as the Supervisor's IP address.
c. Set Organization. For Enterprise deployments, the default name is Super.
d. Set CollectorName from Step 2a.
The Collector will reboot during the Registration.
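For example, using placeholder values (an admin user admin with its password, a Supervisor at 10.10.1.5, the default Super organization, and a Collector named collector-01 created earlier; substitute your own values):
# /opt/phoenix/bin/phProvisionCollector --add admin 'Admin*Pass123' 10.10.1.5 Super collector-01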
5. Go to ADMIN > Health > Collector Health for the status.
Service Provider Deployments
b. Click OK.
3. Go to ADMIN > Setup > Organizations and click New to add an Organization.
4. Enter the Organization Name, Admin User, Admin Password, and Admin Email.
5. Under Collectors, click New.
6. Enter the Collector Name, Guaranteed EPS, Start Time, and End Time.
The last two values can be set to Unlimited. Guaranteed EPS is the EPS that the Collector will always be able to send. It could send more if there is excess EPS available.
The password should be enclosed in single quotes to ensure that any non-alphanumeric characters are escaped.
a. Set user and password using the admin user name and password for the Organization that the Collector is
going to be registered to.
b. Set Super IP or Host as the Supervisor's IP address.
c. Set Organization as the name of an organization created on the Supervisor.
d. Set CollectorName from Step 6.
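For example, using placeholder values for a Service Provider deployment (an organization admin user org1-admin with its password, a Supervisor at 10.10.1.5, an organization named Org1 created in step 3, and a Collector named org1-collector-01 from step 6; substitute your own values):
# /opt/phoenix/bin/phProvisionCollector --add org1-admin 'Org1*Pass123' 10.10.1.5 Org1 org1-collector-01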
Installing on ESX 6.5
Importing a 6.5 ESX Image
When installing with ESX 6.5, or an earlier version, you will get an error message when you attempt to import the image.
To resolve this import issue, you will need to take the following steps:
1. Install 7-Zip.
2. Extract the OVA file into a directory.
3. In the directory where you extracted the OVA file, edit the file FortiSIEM-VA-6.3.3.0348.ovf, and replace all
references to vmx-15 with your compatible ESX hardware version shown in the following table.
Note: For example, for ESX 6.5, replace vmx-15 with vmx-13. (A sample replacement command is shown after the following table.)
Compatibility             Description
ESXi 6.5 and later        This virtual machine (hardware version 13) is compatible with ESXi 6.5.
ESXi 6.0 and later        This virtual machine (hardware version 11) is compatible with ESXi 6.0 and ESXi 6.5.
ESXi 5.5 and later        This virtual machine (hardware version 10) is compatible with ESXi 5.5, ESXi 6.0, and ESXi 6.5.
ESXi 5.1 and later        This virtual machine (hardware version 9) is compatible with ESXi 5.1, ESXi 5.5, ESXi 6.0, and ESXi 6.5.
ESXi 5.0 and later        This virtual machine (hardware version 8) is compatible with ESXi 5.0, ESXi 5.1, ESXi 5.5, ESXi 6.0, and ESXi 6.5.
ESX/ESXi 4.0 and later    This virtual machine (hardware version 7) is compatible with ESX/ESXi 4.0, ESX/ESXi 4.1, ESXi 5.0, ESXi 5.1, ESXi 5.5, ESXi 6.0, and ESXi 6.5.
ESX/ESXi 3.5 and later    This virtual machine (hardware version 4) is compatible with ESX/ESXi 3.5, ESX/ESXi 4.0, ESX/ESXi 4.1, ESXi 5.1, ESXi 5.5, ESXi 6.0, and ESXi 6.5. It is also compatible with VMware Server 1.0 and later. ESXi 5.0 does not allow creation of virtual machines with ESX/ESXi 3.5 and later compatibility, but you can run such virtual machines if they were created on a host with different compatibility.
ESX Server 2.x and later  This virtual machine (hardware version 3) is compatible with ESX Server 2.x, ESX/ESXi 3.5, ESX/ESXi 4.0, ESX/ESXi 4.1, and ESXi 5.0. You cannot create, edit, turn on, clone, or migrate virtual machines with ESX Server 2.x compatibility. You can only register or upgrade them.
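For the replacement described in step 3, a quick way to update every occurrence of the hardware version on a Linux machine is sed. This is only a sketch, assuming the extracted FortiSIEM-VA-6.3.3.0348.ovf is in the current directory and the target is ESX 6.5 (vmx-13):
# sed -i 's/vmx-15/vmx-13/g' FortiSIEM-VA-6.3.3.0348.ovf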
Resolving Disk Save Error
You may encounter an error message asking you to select a valid controller for the disk if you attempt to add an additional 4th disk (/opt, /cmdb, /svn, and /data). This is likely due to an old IDE controller issue in VMware, where you are normally limited to 2 IDE controllers (0 and 1), and 2 disks per controller (Master/Slave).
If you are attempting to add 5 disks in total, as in the following example, you will need to take the following steps:
Disk   Usage
1      OS
2      OPT
3      CMDB
4      SVN
5      /data
1. Go to Edit settings, and add each disk individually, clicking Save after adding each disk.
When you reach the 4th disk, you will receive the "Please select a valid controller for the disk" message. This is because the software has failed to identify the virtual device node (IDE controller and Master/Slave position) for the disk.
2. Expand the disk setting for each disk and review which IDE Controller Master/Slave slots are in use. For example, in one installation, there may be an attempt to add the 4th disk to IDE Controller 0 when its Master/Slave slots are already in use. In this situation, you would need to put the 4th disk on IDE Controller 1 in the Slave position. In your situation, make the appropriate configuration setting change.
Adding a 5th Disk for /data
When you need to add a 5th disk, such as for /data, and there is no available slot, you will need to add a SATA controller to the VM by taking the following steps:
1. Go to Edit settings.
2. Select Add Other Device, and select SCSI Controller (or SATA).
You will now be able to add a 5th disk for /data, and it should default to using the additional controller. You should be
able to save and power on your VM. At this point, follow the normal instructions for installation.
Note: When adding the local disk in the GUI, the path should be /dev/sda or /dev/sdd. You can use one of the following commands to locate it:
# fdisk -l
or
# lsblk
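If several disks are attached, limiting the output to whole disks with their names, sizes, and types can make it easier to spot the newly added, unpartitioned /data disk (device letters depend on your VM):
# lsblk -d -o NAME,SIZE,TYPE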
Install Log
The installation log (ansible.log) is located at /usr/local/fresh-install/logs.
Copyright© 2023 Fortinet, Inc. All rights reserved. Fortinet®, FortiGate®, FortiCare® and FortiGuard®, and certain other marks are registered trademarks of Fortinet, Inc., and other Fortinet names herein
may also be registered and/or common law trademarks of Fortinet. All other product or company names may be trademarks of their respective owners. Performance and other metrics contained herein were
attained in internal lab tests under ideal conditions, and actual performance and other results may vary. Network variables, different network environments and other conditions may affect performance
results. Nothing herein represents any binding commitment by Fortinet, and Fortinet disclaims all warranties, whether express or implied, except to the extent Fortinet enters a binding written contract,
signed by Fortinet’s General Counsel, with a purchaser that expressly warrants that the identified product will perform according to certain expressly-identified performance metrics and, in such event, only
the specific performance metrics expressly identified in such binding written contract shall be binding on Fortinet. For absolute clarity, any such warranty will be limited to performance in the same ideal
conditions as in Fortinet’s internal lab tests. Fortinet disclaims in full any covenants, representations, and guarantees pursuant hereto, whether express or implied. Fortinet reserves the right to change,
modify, transfer, or otherwise revise this publication without notice, and the most current version of the publication shall be applicable.