Prepare an Open Server
Overview
This procedure explains how to prepare bare-metal servers or type 1 (bare-metal) hypervisors running RHEL or Rocky Linux for TOS. If you want to deploy on VMware, we recommend installing TOS on TufinOS, a hardened operating system developed and supported by Tufin. See Prepare a VMware ESXi Machine.
High Availability (HA)
High availability is supported for open server deployments, provided all requirements are met. See High availability.
Remote Collectors (RCs)
Remote collectors can be deployed on Open Server deployments, provided all requirements are met. See Remote Collectors.
Prerequisites
General Requirements
- This procedure must be performed by an experienced Linux administrator with knowledge of network and storage configuration.
- Verify that you have sufficient resources (CPUs, disk storage, and main memory) to run TOS. The required resources are determined by the size of your system. See Sizing Calculation for a Clean Install.
- iptables version 1.8.5 or later. iptables must be reserved exclusively for TOS Aurora and cannot be used for any other purpose. During installation, any existing iptables configurations will be flushed and replaced.
- We do not recommend installing third-party software not specified in this procedure on your server. It may impact TOS functionality and features, and it is your responsibility to verify that it is safe to use.
- All nodes in the cluster must be running the same operating system.
- The node's network IP must be on the same subnet as the cluster primary VIP.
- The CPU architecture must be x86_64 with the AVX instruction set.
- See the Important Installation Information in the R25-2 Release Notes.
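A quick way to confirm the architecture requirement on the target machine, using standard Linux commands (not part of the Tufin tooling):

```shell
# Check the CPU architecture and AVX support.
arch=$(uname -m)                                   # must be x86_64
grep -qw avx /proc/cpuinfo && avx=yes || avx=no    # must be yes
echo "arch=$arch avx=$avx"
```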
Operating System Requirements
- OS distribution:
  - Red Hat Enterprise Linux 8.10
  - Rocky Linux 8.10
- Disks:
  - Use SSD storage. Take into consideration that TOS requires 7,500 IOPS, and the expected throughput will average 250 MB/s with bursts of up to 700 MB/s.
  - The disk for the operating system and TOS data requires three directories: /opt, /var, and /tmp. Total storage size is determined by the sizing information sent by Tufin.
  - Data nodes require an additional 50 GB disk for etcd.
  - We recommend allocating all remaining disk space to the /opt partition after you have partitioned the OS disk and moved etcd to a separate disk.
- Secure boot must be disabled.
- The kernel must be up to date.
- SELinux must be disabled (recommended), or configured to run in permissive mode, as described in Enabling SELinux in Permissive Mode.
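As an illustration, SELinux can be switched to permissive mode with standard RHEL commands; the Enabling SELinux in Permissive Mode procedure remains the authoritative reference:

```shell
# Switch SELinux to permissive mode for the running system
setenforce 0
# Make the change persistent across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
getenforce    # should report Permissive
```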
Network Requirements
- Tufin Orchestration Suite must only be installed in an appropriately secured network and physical location. Only authorized users should be granted access to TOS products and the operating system on the server.
- You must allow access to the required Ports and Services.
- All TOS nodes must be on the same subnet.
- All TOS nodes should have network latency of under 1 ms.
- The network configuration for your interface must be set to manual IPv4, with the gateway and DNS servers set to the IPs used by your organization.
- Allocate a 24-bit CIDR subnet for the Kubernetes service network and a 16-bit CIDR subnet for the Kubernetes pods network (10.244.0.0/16 is used by default). The pods and services networks must be inside the following private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. In addition, ensure that the dedicated CIDRs for the service network and the pods network do not overlap with each other, and:
  - The physical addresses of your TOS servers (see below)
  - Your primary VIP, syslog VIP, or external load balancer IP (see below)
  - Any other subnets communicating with TOS or with TOS nodes
- If a proxy is configured on your system, make sure this network is excluded.
- You must have the following dedicated IP addresses available:
  - For on-premises deployments, a primary VIP that will serve as the external IP address used to access TOS from your browser. The primary VIP is not needed during the installation of the operating system, except in the final step - the installation command.
  - The physical network IP address of the first network interface, used by the administrator for CLI commands. This is the IP address you will use in most steps of the procedure.
  - If additional nodes are subsequently added to the cluster, each node will require an additional dedicated physical network IP address.
  - Additional syslog VIPs can be allocated as needed.
- The VIP, all node physical network IP addresses, and all syslog VIPs must be on the first network interface.
- Make sure your first physical interface is correctly configured and that no other interface is on the same network. Otherwise, network errors such as connectivity failures and incorrect traffic routing might occur. To find the first network interface, run the following command:
[<ADMIN> ~]$ sudo /opt/tufinos/scripts/network_interface_by_pci_order.sh | awk -F'=' '/NET_IFS\[0\]/ { print $NF }'
- All unused interfaces must be disabled.
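As an illustration, unused interfaces can be taken down with NetworkManager, which is the default on RHEL/Rocky 8; the interface name eth1 here is hypothetical:

```shell
# List interfaces and their states
nmcli device status
# Prevent the unused connection from coming up on boot, then take it down
nmcli connection modify eth1 connection.autoconnect no
nmcli device disconnect eth1
```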
Configure Partitions
If not done already, set up partitions according to the Prerequisites.
Configure Operating System
- If you are not currently logged in as user root, do so now.
- If you want to change the host name or IP of the machine, do so now. Once TOS has been installed, changing the host name or IP address will require reinstalling - see Changing IP Address/Host Names. To change the host name, use the command below, replacing <mynode> with your preferred name.
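On RHEL/Rocky 8, the host name is typically changed with hostnamectl; a sketch:

```shell
# Replace <mynode> with your preferred host name
hostnamectl set-hostname <mynode>
```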
- Modify the environment path to run TOS CLI commands without specifying the full path (/usr/local/bin/tos).
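One common way to do this, assuming bash and the default /usr/local/bin/tos install path:

```shell
# Make /usr/local/bin part of PATH so `tos` resolves without the full path
echo 'export PATH=$PATH:/usr/local/bin' >> ~/.bashrc
export PATH=$PATH:/usr/local/bin   # apply to the current session as well
```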
- Verify your machine time is synchronized with a trusted NTP server. If necessary, see Configuring NTP Using Chrony.
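If chrony is in use (the default NTP client on RHEL/Rocky 8), synchronization status can be checked with:

```shell
# Show chrony synchronization status; "Leap status : Normal" indicates a healthy sync
chronyc tracking
```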
- Configure the server timezone, where <timezone> is in the format Area/Location. Examples: America/Jamaica, Hongkong, GMT, Europe/Prague. You can list the time-zone formats that can be used in the command.
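A sketch using timedatectl, the standard systemd tooling; Europe/Prague is just an example:

```shell
timedatectl list-timezones               # list valid <timezone> values (Area/Location)
timedatectl set-timezone Europe/Prague   # set the zone; replace with your own
```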
- Upgrade the kernel.
- Reboot the machine and log in.
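On RHEL/Rocky 8 the kernel upgrade is typically done as follows (a sketch, not Tufin-specific tooling):

```shell
dnf -y update kernel   # install the latest available kernel
reboot                 # boot into the new kernel
```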
- Install tmux and rsync.
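Using the standard package manager:

```shell
dnf -y install tmux rsync
```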
- Check if the firewall is enabled. If enabled, disable it.
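With firewalld, the default RHEL/Rocky firewall, a sketch:

```shell
systemctl is-active firewalld      # "active" means the firewall is enabled
systemctl stop firewalld           # stop it now
systemctl disable firewalld        # keep it off after reboot
```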
- Create the TOS load module configuration file /etc/modules-load.d/tufin.conf. Example using vi.
- Specify the modules to be loaded by adding the following lines to the configuration file created in the previous step. The modules will then be loaded automatically on boot.
- Load the above modules now:
[<ADMIN> ~]# cat /etc/modules-load.d/tufin.conf | xargs modprobe -a
Look carefully at the output to confirm all modules loaded correctly; an error message will be issued for any modules that failed to load.
- Create the TOS kernel configuration file /etc/sysctl.d/tufin.conf. Example using vi.
- Specify the kernel settings to be applied by adding the following lines to the configuration file created in the previous step. The settings will then be applied on boot.
- Apply the above kernel settings now:
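The standard way to apply sysctl drop-in files without rebooting:

```shell
sysctl --system    # reloads all files under /etc/sysctl.d, including tufin.conf
```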
Move The etcd Database to A Separate Disk
The etcd database should be on a separate disk to improve the stability of TOS and reduce latency. Moving the etcd database to a separate disk ensures that the Kubernetes database has access to all the resources required for optimal TOS performance.
Preliminary Preparations
- Switch to the root user.
- Find the name of the last disk added to the open server. The output returns the list of disks on the open server. The last letter of the disk name indicates the order in which it was added, for example: sda, sdb, sdc.
- Save the name of the last disk in a separate location. You will need it later for verification purposes.
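A common way to list the disks, using standard util-linux tooling (a sketch):

```shell
lsblk -d -o NAME,SIZE,TYPE    # -d lists whole disks only (sda, sdb, ...)
```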
Move The etcd Database
- Run the tmux command.
- Add a new disk to the open server that meets the following requirements:
  - Storage: 50 GB
  - Performance: high-performance storage (SSD with at least 7,500 IOPS and 250 MB/s throughput or higher)
  - Disk Provisioning: Thin Provisioning
  - Sharing: No Sharing
  - Limit - IOPs: Unlimited
  - Disk Mode: Independent - Persistent
- Mount the new disk:
  - Log into the data node as the root user.
  - Run the tmux command.
  - Verify that the new disk is recognized by the operating system. Compare the output with the name of the disk you saved in the preliminary preparations, and verify that the returned disk name ends with the next letter in the alphabet. For example, if the name you saved was sdb, the output should return sdc. This indicates that the operating system recognizes the new disk.
  - Create a variable with the block device path of the new disk, where <> represents the letter of the new disk.
  - Generate a UUID for the block device of the new disk.
  - Create a primary partition on the new disk.
  - Verify that the partition was created.
  - Format the partition as ext4.
  - Verify that the partition has been formatted with the UUID and the ETCD label (the output should return the partition with the UUID and an ETCD label).
  - Create the mount point for the etcd database.
  - Set the partition to mount on operating system startup.
  - Load the changes to the filesystem.
  - Mount the partition that was added to /etc/fstab. If the output is not empty, stop the procedure. The etcd disk cannot be mounted. Review what was missed in the previous steps.
  - Verify the partition has been mounted (the output should return the block device and mount point):
[<ADMIN> ~]# mount | grep "/var/lib/rancher/k3s/server/db"
If the output is empty, stop the procedure. The etcd disk is not mounted. Review what was missed in the previous steps.
- You can now safely exit the tmux session.
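The steps above can be sketched as the following command sequence. The disk letter (sdc), the use of parted with a GPT label, and the ext4 mount defaults are assumptions; adjust them for your environment. The mount point /var/lib/rancher/k3s/server/db is the k3s etcd location referenced in the verification step.

```shell
# Sketch of the full sequence, assuming the new disk appeared as /dev/sdc.
DISK="/dev/sdc"
MOUNT_POINT="/var/lib/rancher/k3s/server/db"   # k3s etcd data directory
ETCD_UUID=$(uuidgen)                           # UUID for the new filesystem

# Create a single primary partition spanning the whole disk
parted --script "$DISK" mklabel gpt mkpart primary ext4 0% 100%
lsblk "$DISK"                                  # verify the partition (sdc1) exists

# Format the partition with the generated UUID and the ETCD label
mkfs.ext4 -U "$ETCD_UUID" -L ETCD "${DISK}1"
blkid "${DISK}1"                               # verify the UUID and ETCD label

# Create the mount point and mount the partition on startup via /etc/fstab
mkdir -p "$MOUNT_POINT"
FSTAB_LINE="UUID=$ETCD_UUID $MOUNT_POINT ext4 defaults 0 0"
echo "$FSTAB_LINE" >> /etc/fstab

systemctl daemon-reload        # load the fstab changes
mount -a                       # empty output means the mount succeeded
mount | grep "$MOUNT_POINT"    # verify the block device and mount point
```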
What Next?
- For a first-time install, see Install TOS.
- If you need to add a worker node to your existing deployment, see add a worker node.
- For high availability, see Deploy High Availability.
- (Recommended) See the RHEL SSH Hardening Guide or Rocky Linux Hardening Guide, as appropriate.