Prepare an Azure VM

Overview

This procedure explains how to prepare an Azure virtual machine for TOS.

High Availability (HA)

High availability is not supported in this release.

Remote Collectors (RCs)

Remote collectors can be deployed on Azure.

Prerequisites

General Requirements

  • This procedure must be performed by an experienced Linux administrator with knowledge of network and storage configuration.

  • To ensure optimal performance and reliability, the required resources must always be available to TOS. If resources become unavailable, TOS performance will be affected.

  • Verify that you have sufficient resources (CPUs, disk storage and main memory) to run TOS. The required resources are determined by the size of your system. See Sizing Calculation for a Clean Install.

  • iptables version 1.8.5 or later is required; you can check the installed version as shown after this list. iptables must be reserved exclusively for TOS Aurora and cannot be used for any other purpose. During installation, any existing iptables configurations will be flushed and replaced.

  • We do not recommend installing third-party software not specified in this procedure on your server. It may impact TOS functionality and features, and it is your responsibility to verify that any such software is safe to use.

  • All nodes in the cluster must be running the same operating system.
  • The node's network IP must be on the same subnet as the cluster primary VIP.
  • Large deployments are not supported on Azure.
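
  To check the installed iptables version (a quick sanity check, not part of the formal procedure):

    [<ADMIN> ~]# iptables --version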

Operating System Requirements

  • OS distribution:

    • Red Hat Enterprise Linux 8.10

    • Rocky Linux 8.10

  • Disks:

    • Select an SSD storage type. Take into consideration that TOS requires 7,500 IOPS, with an expected average throughput of 250 MB/s and bursts of up to 700 MB/s.

    • The disk for the operating system and TOS data requires three partitions: /opt, /var and /tmp. Total storage size is determined by the sizing information sent by Tufin.

    • Partition sizes:

      Partition   Data              Worker

      /opt        Minimum: 400 GB   50 GB
      /var        200 GB            50 GB
      /tmp        25 GB             15 GB
    • Data nodes require an additional disk for etcd. Size: 50 GB

    • Do not add additional disks before installing TOS. If additional storage is later required, you can extend the partition size by adding an additional disk after TOS has been installed.

    • We recommend allocating all remaining disk space to the /opt partition after you have partitioned the OS disk and moved etcd to a separate disk.

  • Secure boot must be disabled.

  • The kernel must be up to date.

  • SELinux must be disabled (recommended), or configured to run in permissive mode, as described in Enabling SELinux in Permissive Mode.

  • Language: en-US

  • You must have permissions to execute TOS CLI commands located in directory /usr/local/bin/tos and to use sudo if necessary.

  • To run TOS CLI commands without specifying the full path (/usr/local/bin/tos), your environment path must be modified accordingly.

  • The server timezone must be set.

  • The rpcbind service is disabled by default when updating to this version, preventing NFS 3 from working. However, it can be enabled by running the following commands in sequence:

    systemctl unmask rpcbind.socket rpcbind.service
    systemctl start rpcbind.socket rpcbind.service
    systemctl enable rpcbind.socket rpcbind.service
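
    To confirm that the services are active afterwards (a suggested check, assuming systemd):

    [<ADMIN> ~]# systemctl is-active rpcbind.socket rpcbind.service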

Network Requirements

  • Tufin Orchestration Suite must only be installed in an appropriately secured network and physical location. Only authorized users should be granted access to TOS products and the operating system on the server.

  • You must allow access to required Ports and Services on the firewall.

  • Allocate a 24-bit CIDR subnet for the Kubernetes service network and a 16-bit CIDR subnet for the Kubernetes pods network (10.244.0.0/16 is used by default); an example set of values appears after this list.

    The pods and services networks must be inside the following private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. In addition, ensure that the dedicated CIDRs for the service network and pods network do not overlap with:

    • Each other

    • The physical addresses of your TOS servers (see below)

    • Your external load balancer IP(s)

    • Any other subnets communicating with TOS or with TOS nodes

  • All TOS nodes must be on the same subnet.

  • DNS hostnames must be enabled on your VNet.
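
  For example, a set of hypothetical values that satisfies the rules above (substitute subnets that fit your own environment):

    # Kubernetes service network (24-bit prefix):   10.107.0.0/24
    # Kubernetes pods network (16-bit prefix):      10.244.0.0/16 (the default)
    # TOS node subnet (physical addresses):         192.168.10.0/24
    # External load balancer IP:                    192.168.20.5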

Create a Virtual Machine

Read and understand the Prerequisites before you start.

For additional help, refer to the Microsoft Azure official documentation - Create a Linux virtual machine in the Azure portal.

  1. Log in to your Azure portal.

  2. Navigate to Marketplace and create a VM.

  3. Under the Basics tab, enter/select the following information:

    • Subscription - your Azure subscription name

    • Resource Group - your Azure resource group name

    • Virtual Machine Name - a name of your choice to identify the VM, e.g., myVirtualMachine-0

    • Region - select from the list

    • Image - select one of the following from the list:

      • Red Hat Enterprise Linux 8.10

      • Rocky Linux 8.10

      The image must include Logical Volume Management (LVM), which is required to enlarge the volumes.

    • Size - select CPUs and memory as advised by your account team

    • Authentication type - select SSH public key

    • User name - enter azureuser

    • SSH public key source - select generate new key pair

    • Key pair name - use the default name provided

  4. Under the Disks tab, enter/select the following information:

    • OS disk type - select Premium SSD
    • Encryption - default
  5. Select Create and attach a new disk.

  6. Enter the disk details:

    • Enter a Name of your choice.

    • Set Host caching to Read/write.

    • Select a storage type. Take into consideration that TOS requires 7,500 IOPS, with an expected average throughput of 250 MB/s and bursts of up to 700 MB/s.

    • Set Storage Size GiB to the disk space sizing requirement given to you by your account team.

  7. Save the disk definitions.

  8. If you want a public IP:

    • Select the Networking tab

    • Under Public IP, click Create new.

    • Enter a name.

    • Select SKU - Standard.

  9. Under the Tags tab, create two tags, replacing <your name> and <your environment name> with names of your choice:

    • owner: <your name>

    • env: <your environment name>

  10. Click Review and Create.

  11. Click Create. The generate new key pair prompt appears.

  12. Generate a new key pair. Click Download private key and create resource. The private key will be downloaded to your PC as a file with the suffix .pem. You will need it to log in to the VM console.

  13. Navigate to the directory on your PC that contains the .pem file just downloaded (<pem_key_name>) and change its permissions to prevent other users from accessing it.

    If your PC is running on a Linux-like operating system:

    [<ADMIN> ~]# chmod 400 <pem_key_name>
  14. You can now log in to the VM console whenever required.

    Log in to the Azure VM console where <pem_key_name> is the name of the .pem file downloaded previously from the Azure portal, <azureuser> is the name of your Azure user on the VM and <IP> is its private or public IP. The private and optional public IPs can be seen on the Networking tab.

    [<ADMIN> ~]# ssh -i <pem_key_name> <azureuser>@<IP>
  15. In the portal, select the newly created VM and then select the Networking tab.

  16. Select Add inbound port rule and create a rule with the properties below:

    • Source: Any

    • Source port ranges: *

    • Destination: Any

    • Destination port ranges: 31443, 31617, 31514, 31099, 31843, 30161, 30514, 31161 - see Ports and Services for details

    • Protocol: Any

    • Action: Allow

    • Priority: 310

    • Name: TOS_Aurora or other name of your choice

    • Description: TOS_Aurora or other description of your choice
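
  If you prefer scripting, a minimal sketch of the same inbound rule using the Azure CLI follows; the resource group and NSG name are placeholders to be replaced with your own values:

    az network nsg rule create \
      --resource-group myResourceGroup \
      --nsg-name myVirtualMachine-0-nsg \
      --name TOS_Aurora \
      --direction Inbound \
      --priority 310 \
      --access Allow \
      --protocol '*' \
      --source-address-prefixes '*' \
      --destination-port-ranges 31443 31617 31514 31099 31843 30161 30514 31161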

Set Up The Load Balancer

For additional help, refer to the Microsoft Azure official documentation - Create a public load balancer to load balance VMs using the Azure portal and Configure port forwarding in Azure Load Balancer using the portal.

  1. Log in to your Azure portal.

  2. Navigate to Load Balancers.

  3. Select Create. The Create load balancer window appears.
  4. Enter the following details:

    • Resource Group - the Azure resource group specified for your VM
    • Name - a suitable name for the load balancer
    • Type: Public if you want a public IP address, otherwise Internal.
    • SKU: Standard
    • Public IP address: (When type is Public). This is different from the public IP address of your VM. If you already have an IP address for the load balancer, select Use existing and enter the IP, otherwise leave the default Create new.
    • Public IP address name: (When type is Public). Enter a suitable name to refer to the public IP address.
    • IP Address Assignment: (When type is Internal). Select Dynamic.
    • Availability zone: Select Zone redundant.
    • Tags. Specify two tags, replacing <your name> and <your environment name> with names of your choice. It is recommended to use the same values used for your VM:

      • owner: <your name>
      • env: <your environment name>

    Leave the default values for all other fields.

    Click Create.

  5. Select the load balancer you just created.
  6. Select Overview and note the public IP address; you will need this shortly.
  7. Configure the load balancer backend pool:

    1. Select Backend pools.
    2. Select Add. The Add backend pool window appears.
    3. Enter details:

      • Name - a suitable name of your choice for the backend pool
      • Virtual Network - your virtual network
      • Virtual Machines - add the VM you created previously
    4. Click Add at the bottom of the page. The pool will be added.

  8. Configure the load balancer health probe:

    1. Select Health Probes.
    2. Select Add. The Add health probe window appears.
    3. Enter details:

      • Name - a suitable name of your choice for the health probe
      • Protocol: TCP
      • Port: 31443
      • Interval: 5
      • Unhealthy threshold: 2
    4. Click Add at the bottom of the page. The health probe will be added.

  9. Configure load balancing rules:

    1. Select Load Balancing Rules.
    2. Select Add. The Add load balancing rule window appears.
    3. Add a load balancing rule with the following values:

      • Name: Any name of your choice
      • Frontend IP address: Enter the public IP address of the load balancer, noted earlier from the Overview page
      • Protocol: TCP
      • Port: 443
      • Backend port: 31443
      • Backend pool: Select the backend pool added above
      • Health probe: Select the health probe added above
      • Leave other fields with their default values.
    4. Click Add at the bottom of the page. The rule will be added.

    5. Repeat steps b. to d. for additional load balancing rules, entering a suitable name for each, with protocol, port and backend port. All rules are listed below - see Ports and Services for more information.

      Protocol   Source port   Backend port   Purpose

      TCP        443           31443          UI access
      TCP        61617         31617          Remote collector connectivity
      TCP        9099          31099          OPM devices
      TCP        8443          31843          Remote collector connectivity
      TCP        9090          31090          Remote collector connectivity
      TCP        601           32514          Unencrypted TCP syslogs
      TCP        6514          31514          TLS encrypted TCP syslogs
      UDP        514           30514          UDP syslogs
      UDP        161           30161          SNMP monitoring
      UDP        10161         31161          SNMP monitoring
    6. Click OK.
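
    As an alternative to the portal, the health probe and the first load balancing rule could be created with the Azure CLI along these lines (a sketch; the resource names are placeholders, and the remaining rules from the table are created the same way):

      az network lb probe create \
        --resource-group myResourceGroup \
        --lb-name myLoadBalancer \
        --name tos-probe \
        --protocol Tcp \
        --port 31443 \
        --interval 5 \
        --threshold 2

      az network lb rule create \
        --resource-group myResourceGroup \
        --lb-name myLoadBalancer \
        --name tos-ui \
        --protocol Tcp \
        --frontend-port 443 \
        --backend-port 31443 \
        --backend-pool-name myBackendPool \
        --probe-name tos-probe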

Configure Partitions

If not done already, set up partitions according to the Prerequisites.
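
A minimal sketch of creating the TOS partitions with LVM follows, assuming a dedicated data disk at a hypothetical /dev/sdX and the minimum Data node sizes from the Prerequisites table; use the sizes from your Tufin sizing information. This sketches the disk layout only; migrating any existing data into the new partitions is outside its scope.

    # Hypothetical device name - replace /dev/sdX with your actual disk.
    [<ADMIN> ~]# pvcreate /dev/sdX
    [<ADMIN> ~]# vgcreate vg_tos /dev/sdX
    [<ADMIN> ~]# lvcreate -n opt -L 400G vg_tos
    [<ADMIN> ~]# lvcreate -n var -L 200G vg_tos
    [<ADMIN> ~]# lvcreate -n tmp -L 25G vg_tos
    [<ADMIN> ~]# for lv in opt var tmp; do mkfs.ext4 /dev/vg_tos/$lv; done
    # Add corresponding /opt, /var and /tmp entries to /etc/fstab and mount them.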

Configure the Operating System

  1. If you are not currently logged in as user root, do so now.

    [<ADMIN> ~]$ su -
  2. If you want to change the host name or IP of the machine, do so now. Once TOS has been installed, changing the host name or IP address will require reinstalling - see Changing IP Address/Host Names. To change the host name, use the command below, replacing <mynode> with your preferred name.

    [<ADMIN> ~]# hostnamectl set-hostname <mynode>
  3. Modify the environment path to run TOS CLI commands without specifying the full path (/usr/local/bin/tos).

    [<ADMIN> ~]# echo 'export PATH="${PATH}:/usr/local/bin"' | sudo tee -a /root/.bashrc > /dev/null
  4. Synchronize your machine time with a trusted NTP server. Follow the steps in Configuring NTP Using Chrony.

  5. Configure the server timezone.

    [<ADMIN> ~]# timedatectl set-timezone <timezone>

    where <timezone> is in the format Area/Location. Examples: America/Jamaica, Hongkong, GMT, Europe/Prague. To list the time zones that can be used in the command:

    [<ADMIN> ~]# timedatectl list-timezones
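
    For example, to narrow the list to one region (grep is just a convenience here):

    [<ADMIN> ~]# timedatectl list-timezones | grep Europe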
  6. Upgrade the kernel:

    [<ADMIN> ~]# dnf upgrade
  7. Reboot the machine and log in.
  8. Install WireGuard. This is needed to encrypt communication between nodes (machines) within the cluster. The WireGuard version must match the operating system version you are installing.

  9. Install the EPEL and ELRepo repositories, and then install the WireGuard packages:

    [<ADMIN> ~]# sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
    [<ADMIN> ~]# sudo yum install kmod-wireguard wireguard-tools
  10. Reboot the machine and log in.
  11. Install tmux and rsync:

    [<ADMIN> ~]# dnf install -y rsync tmux
  12. Disable the firewall:

    [<ADMIN> ~]# systemctl stop firewalld
    [<ADMIN> ~]# systemctl disable firewalld
  13. Create the TOS load module configuration file /etc/modules-load.d/tufin.conf. Example using vi:

    [<ADMIN> ~]# vi /etc/modules-load.d/tufin.conf
  14. Specify the modules to be loaded by adding the following lines to the configuration file created in the previous step. The modules will then be loaded automatically on boot.

    br_netfilter
    wireguard
    overlay
    ebtables
    ebtable_filter
  15. Load the above modules now:

    [<ADMIN> ~]# cat /etc/modules-load.d/tufin.conf | xargs modprobe -a

    Look carefully at the output to confirm all modules loaded correctly; an error message will be issued for any modules that failed to load.

  16. Check that Wireguard has loaded correctly.

    [<ADMIN> ~]# lsmod | grep wireguard

    The output will appear something like this:

    wireguard              201106  0
    ip6_udp_tunnel         12755  1 wireguard
    udp_tunnel             14423  1 wireguard
    

    If Wireguard is not listed in the output, contact support.

  17. Create the TOS kernel configuration file /etc/sysctl.d/tufin.conf. Example using vi:

    [<ADMIN> ~]# vi /etc/sysctl.d/tufin.conf
  18. Specify the kernel settings to be made by adding the following lines to the configuration file created in the previous step. The settings will then be applied on boot.

    net.bridge.bridge-nf-call-iptables = 1
    fs.inotify.max_user_watches = 1048576
    fs.inotify.max_user_instances = 10000
    net.ipv4.ip_forward = 1
  19. Apply the above kernel settings now:

    [<ADMIN> ~]# sysctl --system
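
    To spot-check that a setting took effect, you can query it directly; for example:

    [<ADMIN> ~]# sysctl net.ipv4.ip_forward
    net.ipv4.ip_forward = 1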
For maximum security, we recommend only installing official security updates and security patches for your Linux distribution, as well as the RPMs specifically mentioned in this section.

Data Nodes Only. Mount The etcd Database to A Separate Disk

The etcd database should be on a separate disk to improve the stability of TOS and reduce latency. Moving the etcd database to a separate disk ensures that the Kubernetes database has access to all the resources required for optimal TOS performance.

Preliminary Preparations

  1. Switch to the root user.

    [<ADMIN> ~]$ sudo su -
  2. Install the rsync RPM.

    [<ADMIN> ~]$ dnf install rsync
  3. Find the name of the last disk added to the VM.

    [<ADMIN> ~]# lsblk -ndl -o NAME

    The output returns the list of disks on the VM. The last letter of the disk name indicates the order in which it was added, for example: sda, sdb, sdc.

  4. Save the name of the last disk in a separate location. You will need it later for verification purposes.

Add a disk to the VM.

  1. In the Azure VM, go to the Disks pane, and under Data Disks, click Create and Attach a new disk.

  2. Configure the following settings:

    • Disk name

    • LUN

    • Storage type: Premium SSD LRS

    • Size: Allocate a disk size of at least 50 GB

    • Max IOPS: 7500

    • Max throughput: 250 MBps

    • Encryption: Use the default setting

    • Host caching: read/write

  3. Click the Edit button, and in the Size field, click the Change Size link.

  4. Verify that the performance tier is P40.

  5. Click OK.
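
  The same disk could also be created and attached with the Azure CLI, along these lines (a sketch; the resource group, VM name, and disk name are placeholders):

    az vm disk attach \
      --resource-group myResourceGroup \
      --vm-name myVirtualMachine-0 \
      --name etcd-disk \
      --new \
      --size-gb 50 \
      --sku Premium_LRS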

Mount the new disk.

  1. Log into the data node as the root user.

  2. Run the tmux command.

    [<ADMIN> ~]$ tmux new-session -s etcd
  3. Verify that the new disk is recognized by the operating system.

    [<ADMIN> ~]# lsblk
    [<ADMIN> ~]# ls -l /dev/sd*

    Compare the output with the name of the disk you saved in the preliminary preparations, and verify that the new disk name ends with the next letter in the alphabet. For example, if the name you saved was sdb, the output should include sdc. This indicates that the operating system recognizes the new disk.

  4. Create a variable with the block device path of the new disk.

    [<ADMIN> ~]# BLOCK_DEV="/dev/sd<>"

    where <> represents the letter of the new disk.

  5. Generate a UUID for the block device of the new disk.

    [<ADMIN> ~]# BLOCK_UUID="$(uuidgen)"
  6. Create a primary partition on the new disk.

    [<ADMIN> ~]# parted -s -a optimal ${BLOCK_DEV} mklabel msdos -- mkpart primary ext4 1MiB 100%
  7. Verify that the partition was created.

    [<ADMIN> ~]# parted -s ${BLOCK_DEV} print
  8. Format the partition as ext4.

    [<ADMIN> ~]# mkfs.ext4 -L ETCD -U ${BLOCK_UUID} ${BLOCK_DEV}1
  9. Verify that the partition has been formatted with the UUID and the etcd label (output should return the partition with UUID and an ETCD label).

    [<ADMIN> ~]# blkid | grep "$BLOCK_UUID"
  10. Create the mount point of the etcd database.

    [<ADMIN> ~]# mkdir -p /var/lib/rancher/k3s/server/db
  11. Set the partition to mount upon operating system startup.

    [<ADMIN> ~]# echo "UUID=${BLOCK_UUID} /var/lib/rancher/k3s/server/db ext4 defaults 0 0" >> /etc/fstab
    echo "UUID=${BLOCK_UUID} /var/lib/rancher/k3s/server/db ext4 defaults 0 0" >> /etc/fstab
  12. Load the changes to the filesystem.

    [<ADMIN> ~]# systemctl daemon-reload
  13. Mount the partition that was added to /etc/fstab.

    [<ADMIN> ~]# mount /var/lib/rancher/k3s/server/db

    If the output is not empty, stop the procedure. The etcd disk cannot be mounted. Review what was missed in the previous steps.

  14. Verify the partition has been mounted (the output should return the block device and mount point).

    [<ADMIN> ~]# mount | grep "/var/lib/rancher/k3s/server/db"

    If the output is empty, stop the procedure. The etcd disk is not mounted. Review what was missed in the previous steps.

  15. You can now safely exit the tmux session:

    [<ADMIN> ~]# exit
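
    As a final check, you can confirm the size and mount point of the new filesystem:

    [<ADMIN> ~]# df -h /var/lib/rancher/k3s/server/db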

What Next?