Prepare an Open Server

Overview

This procedure explains how to prepare bare-metal servers or hypervisors running RHEL or Rocky Linux for TOS. If you want to deploy on VMware, we recommend installing TOS on TufinOS, which is a hardened operating system developed and supported by Tufin. See Prepare a VMware ESXi Machine.

For all other deployment options, see Prepare the Server.

Worker Nodes

If your TOS deployment requires additional resources, after installing and setting up TOS you can add worker nodes to the cluster. See multi-node cluster.

High Availability (HA)

TOS can be set up to run as a high availability environment using three servers (data nodes).

Distributed Deployment Using Remote Collectors

TOS can be set up to run as a distributed architecture using remote collectors (RCs).

This procedure applies to installing both central clusters and remote collector clusters. See remote collectors.

Prerequisites

General Requirements

  • This procedure must be performed by an experienced Linux administrator with knowledge of network configuration.

  • To ensure optimal performance and reliability, the required resources must always be available to TOS; if they become unavailable, TOS performance will degrade. Do not oversubscribe resources.

  • Verify that you have sufficient resources (CPUs, disk storage and main memory) to run TOS. The required resources are determined by the size of your system. See Sizing Calculation for a Clean Install.

  • iptables version 1.8.5 or above. iptables must be reserved exclusively for TOS Aurora and cannot be used for any other purpose. During installation, any existing iptables configurations will be flushed and replaced.
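
To check the iptables version currently installed, you can run the following (an optional check, not part of the official procedure):

    [<ADMIN> ~]$ iptables --version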

Operating System Requirements

  • OS distribution:

    • Red Hat Enterprise Linux 8.10

    • Rocky Linux 8.10

  • The image file must include Logical Volume Management (LVM), which is required to enlarge the volumes.

  • Disks:

    • Select a storage type of SSD. Take into consideration that TOS requires 7,500 IOPS, and that expected throughput will average 250 MB/s with bursts of up to 700 MB/s.

    • The disk for the operating system and TOS data requires three partitions: /opt, /var and /tmp.

    • Partition sizes:

      • /opt: Use the Sizing Calculator to determine the partition size

      • /var: 200 GB

      • /tmp: 25 GB

    • Data nodes require an additional disk for etcd. Size: 50 GB

    • We recommend allocating all remaining disk space to the /opt partition after you have partitioned the OS disk and moved etcd to a separate disk.

  • Secure boot must be disabled.

  • The kernel must be up to date.

  • SELinux must be disabled.

  • Language: en-US

  • You must have permissions to execute TOS CLI commands, located at /usr/local/bin/tos, and to use sudo if necessary.

  • To run TOS CLI commands without specifying the full path (/usr/local/bin/tos), your environment path must be modified accordingly.

  • The server timezone must be set.
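
A quick way to spot-check several of these requirements (an optional sketch, not part of the official procedure; getenforce, mokutil, and timedatectl are standard RHEL 8 tools):

    [<ADMIN> ~]$ df -h /opt /var /tmp    # partition sizes
    [<ADMIN> ~]$ getenforce              # should report Disabled
    [<ADMIN> ~]$ mokutil --sb-state      # should report SecureBoot disabled
    [<ADMIN> ~]$ timedatectl             # shows the configured timezone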

Network Requirements

  • Tufin Orchestration Suite must only be installed in an appropriately secured network and physical location. Only authorized users should be granted access to TOS products and the operating system on the server.

  • You must allow access to required Ports and Services.

  • All TOS nodes need to be on the same subnet and layer 2 network, which must support ARP (Address Resolution Protocol).

  • All TOS nodes should have network latency of under 1ms.

  • Network configurations for your interface must be set to manual IPv4 with gateway and DNS Servers set to the IPs used by your organization.

    The system will use a reverse DNS lookup (PTR record) to resolve the DNS server IP addresses to their domain names during the TOS installation. You must therefore add these PTR records to the DNS server; if you do not, the TOS installation will fail.

  • Allocate a 24-bit CIDR subnet for the Kubernetes service network and a 16-bit CIDR subnet for the Kubernetes pods network (10.244.0.0/16 is used by default).

    The pods and services networks must be inside the following private networks: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. In addition, ensure that the dedicated CIDRs for the service network and pods network do not overlap with each other, or with any of the following:

    • The physical network IP addresses of your TOS servers (see below)

    • Your primary VIP, Syslog VIP or external load balancer IP (see below)

    • Any other subnets communicating with TOS or with TOS nodes

  • If a proxy is configured on your system, make sure these networks are excluded from it.

  • You must have available the following dedicated IP addresses:

    • For on-premise deployments, a primary VIP that will serve as the external IP address used to access TOS from your browser. The primary VIP is not needed while installing the operating system; it is used only in the final step, the TOS installation command.

    • The physical network IP address of the first network interface, used by the administrator for CLI commands. This is the IP address you will use in most steps of the procedure.

    • If additional nodes are subsequently added to the cluster, each node will require an additional dedicated physical network IP address.

    • Additional syslog VIPs can be allocated as needed.

    • The primary VIP, all node physical network IP addresses, and all syslog VIPs must be on the first network interface.

    • Make sure your first physical interface is correctly configured and that no other interface is on the same network; otherwise, network errors such as connectivity failures and incorrect traffic routing might occur.

      To find the first network interface, run the following command:

      [<ADMIN> ~]$ sudo /opt/tufinos/scripts/network_interface_by_pci_order.sh | awk -F'=' '/NET_IFS\[0\]/ { print $NF }'
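
For reference, the manual IPv4 configuration described above might be applied with nmcli, and the PTR records verified with dig. This is a sketch only: the connection name "eth0" and all addresses are placeholders for your organization's values, and dig requires the bind-utils package.

    [<ADMIN> ~]$ sudo nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns "192.168.1.53 192.168.1.54"
    [<ADMIN> ~]$ sudo nmcli connection up eth0
    [<ADMIN> ~]$ dig -x 192.168.1.53 +short    # should return the DNS server's host name (PTR record)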

Configure Partitions

If not done already, set up partitions according to the Prerequisites.
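
Because the image includes LVM (see the Prerequisites), an undersized volume can be enlarged rather than recreated. The following is a sketch only, assuming a hypothetical volume group named vg_tos with free extents and a logical volume lv_var mounted at /var; the -r flag resizes the filesystem together with the logical volume:

    [<ADMIN> ~]# lvextend -r -L 200G /dev/vg_tos/lv_var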

Configure The Operating System

  1. If you are not currently logged in as the root user, switch to root now.

    [<ADMIN> ~]$ su -
  2. If you want to change the host name or IP of the machine, do so now. Once TOS has been installed, changing the host name or IP address will require reinstalling - see Changing IP Address/Host Names. To change the host name, use the command below, replacing <mynode> with your preferred name.

    [<ADMIN> ~]# hostnamectl set-hostname <mynode>
  3. Modify the environment path to run TOS CLI commands without specifying the full path (/usr/local/bin/tos).

    [<ADMIN> ~]# echo 'export PATH="${PATH}:/usr/local/bin"' | sudo tee -a /root/.bashrc > /dev/null
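
    To apply the change in the current shell without logging out and back in, you can optionally run:

    [<ADMIN> ~]# source /root/.bashrc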
  4. Synchronize your machine time with a trusted NTP server. Follow the steps in Configuring NTP Using Chrony.

  5. Configure the server timezone.

    [<ADMIN> ~]# timedatectl set-timezone <timezone>

    where <timezone> is in the format Area/Location, for example America/Jamaica, Hongkong, GMT, or Europe/Prague. To list the time zones that can be used in the command, run:

    [<ADMIN> ~]# timedatectl list-timezones
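
    You can filter the list to find your zone more quickly, for example:

    [<ADMIN> ~]# timedatectl list-timezones | grep -i prague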
  6. Upgrade the kernel:

    [<ADMIN> ~]# dnf upgrade
  7. Disable SELinux:

    • If the file /etc/selinux/config exists, edit it and change the value of SELINUX to disabled:

      SELINUX=disabled
    • If the file doesn't exist or SELINUX is already set to disabled, do nothing.
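
    Alternatively, the edit can be made non-interactively with sed (a sketch; review the file afterwards):

    [<ADMIN> ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config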
  8. Reboot the machine and log in.
  9. Install WireGuard. This is needed to encrypt communication between nodes (machines) within the cluster. The WireGuard version must match the operating system version you are installing.

    [<ADMIN> ~]# sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
    [<ADMIN> ~]# sudo yum install kmod-wireguard wireguard-tools
  10. Reboot the machine and log in.
  11. Install tmux and rsync:

    [<ADMIN> ~]# dnf install -y rsync tmux
  12. Disable the firewall:

    [<ADMIN> ~]# systemctl stop firewalld
    [<ADMIN> ~]# systemctl disable firewalld
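
    To confirm the service is stopped and will not start on boot, you can optionally run:

    [<ADMIN> ~]# systemctl is-enabled firewalld
    [<ADMIN> ~]# systemctl status firewalld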
  13. Create the TOS load module configuration file /etc/modules-load.d/tufin.conf. Example using vi:

    [<ADMIN> ~]# vi /etc/modules-load.d/tufin.conf
  14. Specify the modules to be loaded by adding the following lines to the configuration file created in the previous step. The modules will then be loaded automatically on boot.

    br_netfilter
    wireguard
    overlay
    ebtables
    ebtable_filter
  15. Load the above modules now:

    [<ADMIN> ~]# cat /etc/modules-load.d/tufin.conf | xargs modprobe -a

    Look carefully at the output to confirm all modules loaded correctly; an error message will be issued for any modules that failed to load.

  16. Check that WireGuard has loaded correctly.

    [<ADMIN> ~]# lsmod | grep wireguard

    The output will appear something like this:

    wireguard              201106  0
    ip6_udp_tunnel         12755  1 wireguard
    udp_tunnel             14423  1 wireguard
    

    If WireGuard is not listed in the output, contact support.

  17. Create the TOS kernel configuration file /etc/sysctl.d/tufin.conf. Example using vi:

    [<ADMIN> ~]# vi /etc/sysctl.d/tufin.conf
  18. Specify the kernel settings to be made by adding the following lines to the configuration file created in the previous step. The settings will then be applied on boot.

    net.bridge.bridge-nf-call-iptables = 1
    fs.inotify.max_user_watches = 1048576
    fs.inotify.max_user_instances = 10000
    net.ipv4.ip_forward = 1
  19. Apply the above kernel settings now:

    [<ADMIN> ~]# sysctl --system
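
    To spot-check that a setting took effect, you can query it directly, for example:

    [<ADMIN> ~]# sysctl net.ipv4.ip_forward
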
For maximum security, we recommend only installing official security updates and security patches for your Linux distribution, as well as the RPMs specifically mentioned in this section.

Move The etcd Database to A Separate Disk

The etcd database should be on a separate disk to improve the stability of TOS and reduce latency. Moving the etcd database to a separate disk ensures that the Kubernetes database has access to all the resources it needs for optimal TOS performance. This requires some downtime, because you will have to shut down TOS before separating the disks.

Preliminary Preparations

  1. Switch to the root user.

    [<ADMIN> ~]$ sudo su -
  2. Find the name of the last disk added to the open server.

    [<ADMIN> ~]# lsblk -ndl -o NAME

    The output returns the list of disks on the open server. The last letter of the disk name indicates the order in which it was added, for example: sda, sdb, sdc.

  3. Save the name of the last disk in a separate location. You will need it later for verification purposes.
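
    For example, you might capture it to a file (the path shown is just an illustration):

    [<ADMIN> ~]# lsblk -ndl -o NAME | tail -1 > /root/disk-before-etcd.txt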

Move The etcd Database