Prepare an Azure VM

Overview

This procedure explains how to prepare an Azure virtual machine for TOS on a private network.

Some of the sections in this procedure only need to be done once. Others need to be repeated each time you add a worker node to the deployment.

High Availability (HA)

High availability is not supported in this release.

Remote Collectors (RCs)

Remote collectors can be deployed on Azure.

Prerequisites

General Requirements

  • This procedure must be performed by an experienced Linux administrator with knowledge of network and storage configuration.

  • You must have an existing Azure subscription and resource group.

  • Verify that you have sufficient resources (CPUs, disk storage and main memory) to run TOS. The required resources are determined by the size of your system. See Sizing Calculation for a Clean Install.

  • iptables version 1.8.5 or later. iptables must be reserved exclusively for TOS Aurora and cannot be used for any other purpose. During installation, any existing iptables configuration will be flushed and replaced.

  • We do not recommend installing third-party software that is not specified in this procedure on your server. It may impact TOS functionality and features, and it is your responsibility to verify that it is safe to use.

  • All nodes in the cluster must be running the same operating system.
  • The node's network IP must be on the same subnet as the cluster primary VIP.
  • The CPU architecture must be X86_64 with the AVX instruction set.

  • Large deployments are not supported on Azure.

Operating System Requirements

  • OS distribution:

    • Red Hat Enterprise Linux 8.10

    • Rocky Linux 8.10

  • Secure boot must be disabled.

Network Requirements

  • Tufin Orchestration Suite must only be installed in an appropriately secured network and physical location. Only authorized users should be granted access to TOS products and the operating system on the server.

  • You must allow access to required Ports and Services on the firewall.

  • Allocate a 24-bit CIDR subnet for the Kubernetes service network and a 16-bit CIDR subnet for the Kubernetes pods network (10.244.0.0/16 is used by default). See the example below.

    The pods and services networks must be inside the following private networks: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. In addition, ensure that the dedicated CIDRs for the service network and the pods network do not overlap with:

    • Each other

    • The physical addresses of your TOS servers (see below)

    • Your external load balancer IP(s)

    • Any other subnets communicating with TOS or with TOS nodes

  • All TOS nodes need to be on the same subnet.

  • DNS hostnames must be enabled on the VNet.
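
  For example, the following allocation satisfies these constraints (the values are illustrative only; substitute ranges that fit your own network plan):

    Pods network:               10.244.0.0/16 (default)
    Services network:           10.245.0.0/24
    TOS node subnet:            10.10.1.0/24
    Load balancer frontend IP:  10.10.1.100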

Create a Virtual Machine

Repeat this section for each node in the cluster (data and worker) before proceeding to the next section.

For additional help, refer to the official Microsoft Azure documentation: Create a Linux virtual machine in the Azure portal.

  1. Log in to the Azure portal.

  2. Go to the Marketplace.

  3. Search for the operating system. For example: Rocky Linux for x86_64 (AMD_64) - Official.

  4. Create the machine with LVM.

    The Create a Virtual Machine page appears.

  5. In the Basics tab, enter/select the following information:

    • Subscription: Azure subscription

    • Resource Group: Azure resource group

    • Virtual Machine Name: VM name

    • Region: VM region

    • Availability options: Select No infrastructure redundancy required

    • Security Type: Standard

    • Image: This is taken from the image you selected in the marketplace

    • Size: Size is determined by the number of CPUs required by TOS.

      • 16 CPUs or fewer: D16s_v3

      • More than 16 CPUs: D32s_v3

    • Authentication type: Select SSH public key

    • User name: Enter tufin-admin

    • SSH public key source:

      • Primary data node - Select Generate new key pair

      • Worker nodes - Select the existing key pair

    • SSH key type: Select RSA SSH format

    • Key pair name: Use the default name provided

    • Public inbound ports: Select None

  6. Click Next: Disks to set up the OS (operating system) disk.

  7. In the Disks tab, enter/select the following information:

    • OS disk size: Select a disk size that supports on-demand bursting (for example, 1 TB).
    • OS disk type: Select Premium SSD.
    • Encryption: Default.
  8. (Data node only) Click the Create and attach a new disk link to create a separate disk for the etcd database.

    1. Enter the disk details:

      • Name: Disk name

      • Source type: None

      • Size: Click Change size

        • Storage type: Select Premium SSD.

        • Size: Select a disk size that supports on-demand bursting (for example, 1 TB).

        • Performance tier: Select P40.

    2. Click OK, and then click OK again.

  9. Set Host caching to Read/write.

  10. Click Next: Networking to define the Network Interface Card (NIC) settings.

    The Virtual Network and Subnet are taken automatically from the Azure resource group.

    • Public IP: None.

    • NIC network security group: None.

    • Load balancing options: None.

  11. Click Review + Create.

    Azure will validate the VM you just created.

  12. After the validation is finished, click Create to create the virtual machine.
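
If you prefer to script VM creation instead of using the portal, the following Azure CLI sketch creates a broadly comparable machine. It is a minimal example only: the resource group, VM name, and image URN are placeholders, and you should still apply the size, SSH key, and networking choices described in the steps above.

    az vm create \
      --resource-group <resource-group> \
      --name <vm-name> \
      --image <marketplace-image-urn> \
      --size Standard_D16s_v3 \
      --admin-username tufin-admin \
      --generate-ssh-keys \
      --public-ip-address "" \
      --os-disk-size-gb 1024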

Configure the Disks

Repeat this section for each virtual machine in the cluster (data and worker nodes) before proceeding to the next section.

  1. In the Azure Portal, go to Virtual Machines, and click the virtual machine you just created.

  2. Click Stop.

  3. Enable on-demand bursting for the OS disk and select the performance tier.

    1. Go to Settings > Disks, and click on the OS disk.

    2. Go to Settings > Configuration, and select Enable on-demand bursting.

    3. Click Save.

    4. Go to Settings > Size + Performance, and in the Performance Tier field, select:

      • Data node: P70

      • Worker node: P40

    5. Click Save.

  4. (Data node only) Enable on-demand bursting for the etcd disk and select the performance tier.

    1. Go back to the virtual machine, and click on the ETCD disk.

    2. Go to Settings > Configuration, and select Enable on-demand bursting.

    3. Go to Settings > Size + Performance, and in the Performance Tier field, select:

      • Data node: P70

      • Worker node: P40

    4. Click Save.

  5. Click Start to restart the virtual machine.
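
The same disk settings can also be applied from the Azure CLI while the VM is stopped. This is a minimal sketch; the resource group, VM, and disk names are placeholders, and the tier value should follow the data node/worker node choices above.

    # Stop (deallocate) the VM before changing disk settings
    az vm deallocate --resource-group <resource-group> --name <vm-name>

    # Enable on-demand bursting and set the performance tier on a disk
    az disk update --resource-group <resource-group> --name <disk-name> --enable-bursting true --tier <performance-tier>

    # Start the VM again
    az vm start --resource-group <resource-group> --name <vm-name>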

Create a Load Balancer

Create a load balancer for distributing traffic to the VMs (data node and worker nodes). The load balancer is going to have inbound rules that enable communication with all the ports required by TOS. The load balancer is only created once and not each time you prepare a new VM.

  1. In the Azure Portal, go to Load Balancers.

  2. Click Create and select Standard Load Balancer.

  3. In the Basics tab, select/enter the following information:

    • Resource Group: Your Azure resource group.

    • Name: Load balancer name

    • Region: The same region as the VM

    • SKU: Standard

    • Type: Internal

  4. Click Next: Frontend IP Configuration.

  5. Click +Add a frontend IP configuration, and enter/select the following information:

    • Name: IP configuration name

    • IP version: IPv4

    • Subnet: The subnet from your Azure Resource Group

    • Assignment: Dynamic

    • Availability Zone: No zone

  6. Click Save.

  7. Click Next: Backend pools to add the virtual machine you previously created to the load balancer's backend configuration.

  8. Click +Add a backend pool, and enter/select the following information:

    • Name: Backend pool name

    • Backend pool configuration: NIC

  9. In the IP configurations section, click +Add.

  10. Select the virtual machines you created in the previous section (data and worker) and click Add.

  11. Click Save and then Next: Inbound Rules.

  12. Click + Add a load balancing rule and repeat this step to create rules for the following ports.

    While creating the first rule, you will need to also create a health probe. After the first time, you can select it for all subsequent rules.

    For each rule enter the following information:

    • Name: Rule name

    • IP version: IPv4

    • Frontend IP address: The frontend IP configuration previously created for the load balancer

    • Backend pool: The virtual machine created in the previous section

    • Protocol, Port and Backend port: Use the following tables

    • Central cluster

      The backend ports below correspond to the destination port ranges used in the inbound network security rules later in this procedure.

      Protocol | Port  | Backend Port | Source                                   | Description
      ---------|-------|--------------|------------------------------------------|----------------------------------------
      TCP      | 443   | 31443        | Client source IP address                 | UI Access
      TCP      | 443   | 31443        | Remote collector cluster node addresses  | Remote collector cluster connectivity
      TCP      | 61617 | 31617        | Remote collector cluster node addresses  | Remote collector cluster connectivity
      TCP      | 9099  | 31099        | Client source IP address                 | OPM devices
      TCP      | 8443  | 31843        | Remote collector cluster node addresses  | Remote collector cluster connectivity
      TCP      | 9090  | 31090        | Remote collector cluster node addresses  | Remote collector cluster connectivity
      TCP      | 601   | 32514        | Client source IP address                 | Unencrypted TCP syslogs
      TCP      | 6514  | 31514        | Client source IP address                 | TLS encrypted TCP syslogs
      UDP      | 514   | 30514        | Client source IP address                 | UDP syslogs
      UDP      | 161   | 30161        | Client source IP address                 | SNMP monitoring

    • Remote collector cluster

      Protocol | Port  | Backend Port | Source | Description
      ---------|-------|--------------|--------|------------------------------
      TCP      | 9099  | 31099        | Custom | OPM devices
      TCP      | 8443  | 31843        | Custom | Central cluster connectivity
      TCP      | 601   | 32514        | Custom | Unencrypted TCP syslogs
      TCP      | 6514  | 31514        | Custom | TLS encrypted TCP syslogs
      UDP      | 514   | 30514        | Custom | UDP syslogs
    • Health probe:

      First rule -

      1. Click Create New.

      2. Enter/select the following information:

        • Name: health probe name

        • Protocol: TCP

        • Port: 31443

        • Interval (seconds): 5

      3. Click Save.

      All subsequent rules - Select the health probe you created.

    When done creating the load balancing rule, click Save.

  13. Click the Next buttons until you get to the Review + create tab.

    Azure will validate the load balancer you just created.

  14. Click Create.
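
Load balancing rules can also be created from the Azure CLI. The sketch below adds a single rule; the resource names are placeholders, the port values should be taken from the tables above, and exact parameter names may vary slightly between Azure CLI versions.

    az network lb rule create \
      --resource-group <resource-group> \
      --lb-name <load-balancer-name> \
      --name <rule-name> \
      --protocol Tcp \
      --frontend-port 443 \
      --backend-port 31443 \
      --frontend-ip-name <frontend-ip-configuration-name> \
      --backend-pool-name <backend-pool-name> \
      --probe-name <health-probe-name>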

Create Network Inbound Security Rules

Repeat this section for each virtual machine in the cluster (data and worker nodes) before proceeding to the next section.

  1. In the virtual machine, go to Networking > Network Settings.

  2. In the Network Security Group, click Add Inbound Port Rule, and repeat this step for the following rules:

    The Priority field is automatically populated by Azure. There is no need to change the default value.

    Central cluster

    Source IP                                | Port  | Destination | Destination Port Ranges | Protocol | Action | Name
    -----------------------------------------|-------|-------------|-------------------------|----------|--------|----------------------------------------
    Client source IP address                 | 443   | Any         | 31443                   | TCP      | Allow  | UI Access
    Remote collector cluster node addresses  | 443   | Any         | 31443                   | TCP      | Allow  | Remote collector cluster connectivity
    Remote collector cluster node addresses  | 61617 | Any         | 31617                   | TCP      | Allow  | Remote collector cluster connectivity
    Client source IP address                 | 9099  | Any         | 31099                   | TCP      | Allow  | OPM devices
    Remote collector cluster node addresses  | 8443  | Any         | 31843                   | TCP      | Allow  | Remote collector cluster connectivity
    Remote collector cluster node addresses  | 9090  | Any         | 31090                   | TCP      | Allow  | Remote collector cluster connectivity
    Client source IP address                 | 601   | Any         | 32514                   | TCP      | Allow  | Unencrypted TCP syslogs
    Client source IP address                 | 6514  | Any         | 31514                   | TCP      | Allow  | TLS encrypted TCP syslogs
    Client source IP address                 | 514   | Any         | 30514                   | UDP      | Allow  | UDP syslogs
    Client source IP address                 | 161   | Any         | 30161                   | UDP      | Allow  | SNMP monitoring

    Remote Collector Cluster

    Source IP                      | Port | Destination | Destination Port Ranges | Protocol | Action | Name
    -------------------------------|------|-------------|-------------------------|----------|--------|------------------------------
    Client source IP address       | 9099 | Any         | 31099                   | TCP      | Allow  | OPM devices
    Central cluster node addresses | 8443 | Any         | 31843                   | TCP      | Allow  | Central cluster connectivity
    Client source IP address       | 601  | Any         | 32514                   | TCP      | Allow  | Unencrypted TCP syslogs
    Client source IP address       | 6514 | Any         | 31514                   | TCP      | Allow  | TLS encrypted TCP syslogs
    Client source IP address       | 514  | Any         | 30514                   | UDP      | Allow  | UDP syslogs
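
    These inbound rules can also be created from the Azure CLI. The sketch below covers the first central cluster rule; the resource group and NSG names are placeholders, the priority is required on the CLI (any free value in the allowed range works), and the remaining rules follow the same pattern with the values from the tables above.

      az network nsg rule create \
        --resource-group <resource-group> \
        --nsg-name <nsg-name> \
        --name ui-access \
        --priority 1010 \
        --direction Inbound \
        --access Allow \
        --protocol Tcp \
        --source-address-prefixes <client-source-ip> \
        --destination-port-ranges 31443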

Configure the Operating System

  1. Log in to the CLI using SSH with the PEM key you created.
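
    For example, assuming the key pair generated earlier was downloaded as a .pem file and the node's private IP address is reachable from your workstation:

    ssh -i <key-pair-name>.pem tufin-admin@<node-private-ip>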

  2. If you are not currently logged in as user root, do so now.

    [<ADMIN> ~]$ sudo su -
  3. If you want to change the host name or IP of the machine, do so now. Once TOS has been installed, changing the host name or IP address will require reinstalling - see Changing IP Address/Host Names. To change the host name, use the command below, replacing <mynode> with your preferred name.

    [<ADMIN> ~]# hostnamectl set-hostname <mynode>
  4. Modify the environment path to run TOS CLI commands without specifying the full path (/usr/local/bin/tos).

    [<ADMIN> ~]# echo 'export PATH="${PATH}:/usr/local/bin"' | sudo tee -a /root/.bashrc > /dev/null
  5. Verify your machine time is synchronized with a trusted NTP server. If necessary, see Configuring NTP Using Chrony.
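
    One quick way to confirm synchronization is with the standard systemd and chrony tools (this assumes chronyd is the NTP client in use):

    [<ADMIN> ~]# timedatectl
    [<ADMIN> ~]# chronyc tracking

    The timedatectl output should report "System clock synchronized: yes".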

  6. Configure the server timezone.

    [<ADMIN> ~]# timedatectl set-timezone <timezone>

    where <timezone> is in the format Area/Location, for example: America/Jamaica, Hongkong, GMT, Europe/Prague. To list the time zone names that can be used in the command:

    [<ADMIN> ~]# timedatectl list-timezones
  7. Upgrade the kernel:

    [<ADMIN> ~]# dnf upgrade
  8. Reboot the machine and log in.
  9. Switch back to the root user:

    [<ADMIN> ~]$ sudo su -
  10. Install tmux and rsync:

    [<ADMIN> ~]# dnf install -y rsync tmux
  11. Check if the firewall is enabled. If enabled, disable it:

    [<ADMIN> ~]# systemctl stop firewalld
    [<ADMIN> ~]# systemctl disable firewalld
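
    If you want to confirm the firewall state first, these standard systemd queries report whether firewalld is enabled or running (disabled/inactive means no further action is needed):

    [<ADMIN> ~]# systemctl is-enabled firewalld
    [<ADMIN> ~]# systemctl is-active firewalld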
  12. Create the TOS load module configuration file /etc/modules-load.d/tufin.conf. Example using vi:

    [<ADMIN> ~]# vi /etc/modules-load.d/tufin.conf
  13. Specify the modules to be loaded by adding the following lines to the configuration file created in the previous step. The modules will then be loaded automatically on boot.

    br_netfilter
    wireguard
    overlay
    ebtables
    ebtable_filter
  14. Load the above modules now:

    [<ADMIN> ~]# cat /etc/modules-load.d/tufin.conf |xargs modprobe -a 

    Look carefully at the output to confirm all modules loaded correctly; an error message will be issued for any modules that failed to load.

  15. Check that Wireguard has loaded correctly.

    [<ADMIN> ~]# lsmod |grep wireguard

    The output will appear something like this:

    wireguard              201106  0
    ip6_udp_tunnel         12755  1 wireguard
    udp_tunnel             14423  1 wireguard
    

    If Wireguard is not listed in the output, contact support.

  16. Create the TOS kernel configuration file /etc/sysctl.d/tufin.conf. Example using vi:

    [<ADMIN> ~]# vi /etc/sysctl.d/tufin.conf
  17. Specify the kernel settings to be made by adding the following lines to the configuration file created in the previous step. The settings will then be applied on boot.

    net.bridge.bridge-nf-call-iptables = 1
    fs.inotify.max_user_watches = 1048576
    fs.inotify.max_user_instances = 10000
    net.ipv4.ip_forward = 1
  18. Apply the above kernel settings now:

    [<ADMIN> ~]# sysctl --system
  19. For maximum security, we recommend only installing official security updates and security patches for your Linux distribution, as well as the RPMs specifically mentioned in this section.
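
    For example, on distributions that publish security errata metadata in their repositories, an update run can be limited to security fixes:

    [<ADMIN> ~]# dnf upgrade --security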

Expand The OS Disk Partition

  1. Identify the OS disk.

    [<ADMIN> ~]$ lsblk
    NAME           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda              8:0    0    1T  0 disk
    sdb              8:16   0    1T  0 disk
    ├─sdb1           8:17   0   99M  0 part /boot/efi
    ├─sdb2           8:18   0 1000M  0 part /boot
    ├─sdb3           8:19   0    4M  0 part
    ├─sdb4           8:20   0    1M  0 part
    └─sdb5           8:21   0  8.9G  0 part
       └─rocky-root 253:0   0  8.9G  0 lvm  /
  2. Install the growpart cloud utility.

    [<ADMIN> ~]$ dnf -y install cloud-utils-growpart
  3. Expand partition 5 to use all remaining disk storage.

    [<ADMIN> ~]$ growpart /dev/sdb 5
  4. Update the kernel with the changes to the partition table.

    [<ADMIN> ~]$ partprobe /dev/sdb
  5. Resize the LVM physical volume on the expanded partition.

    [<ADMIN> ~]$ pvresize /dev/sdb5
  6. Extend the root logical volume and increase the filesystem.

    [<ADMIN> ~]$ lvextend -r -l +100%FREE /dev/rocky/root
  7. Verify that the partition has been expanded correctly.

    [<ADMIN> ~]$ lsblk
    NAME           MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
    sda              8:0    0      1T  0 disk
    sdb              8:16   0      1T  0 disk
    ├─sdb1           8:17   0     99M  0 part /boot/efi
    ├─sdb2           8:18   0   1000M  0 part /boot
    ├─sdb3           8:19   0      4M  0 part
    ├─sdb4           8:20   0      1M  0 part
    └─sdb5           8:21   0 1022.9G  0 part
       └─rocky-root 253:0   0 1022.9G  0 lvm  /
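
    As an additional check, you can confirm that the root filesystem now reflects the expanded size:

    [<ADMIN> ~]$ df -h /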

Data Nodes Only. Mount The etcd Database to A Separate Disk

The etcd database should be on a separate disk to improve the stability of TOS and reduce latency. Placing it on its own disk ensures that the Kubernetes database has access to the resources it needs for optimal TOS performance.

  1. Switch to the root user.

    [<ADMIN> ~]$ sudo su -
  2. Find the name of the last disk added to the VM.

    [<ADMIN> ~]# lsblk -ndl -o NAME

    The output returns the list of disks on the VM. The last letter of the disk name indicates the order in which it was added, for example: sda, sdb, sdc.

  3. Save the name of the last disk in a separate location. You will need it later for verification purposes.

  4. Start a tmux session:

    [<ADMIN> ~]$ tmux new-session -s etcd
  5. Verify that the new disk is recognized by the operating system.

    [<ADMIN> ~]# lsblk
    [<ADMIN> ~]# ls -l /dev/sd*

    Compare the output with the disk name you saved earlier, and verify that a new disk appears whose name ends with the next letter of the alphabet. For example, if the name you saved was sdb, the output should include sdc. This indicates that the operating system recognizes the new disk.

  6. Create a variable with the block device path of the new disk.

    [<ADMIN> ~]# BLOCK_DEV="/dev/sd<>"

    where <> represents the letter of the new disk.

  7. Generate a UUID for the block device of the new disk.

    [<ADMIN> ~]# BLOCK_UUID="$(uuidgen)"
  8. Create a primary partition on the new disk.

    [<ADMIN> ~]# parted -s -a optimal ${BLOCK_DEV} mklabel msdos -- mkpart primary ext4 1MiB 100%
  9. Verify that the partition was created.

    [<ADMIN> ~]# parted -s ${BLOCK_DEV} print
  10. Format the partition as ext4.

    [<ADMIN> ~]# mkfs.ext4 -L ETCD -U ${BLOCK_UUID} ${BLOCK_DEV}1
  11. Verify that the partition has been formatted with the UUID and the etcd label (output should return the partition with UUID and an ETCD label).

    [<ADMIN> ~]# blkid | grep "$BLOCK_UUID"
  12. Create the mount point of the etcd database.

    [<ADMIN> ~]# mkdir -p /var/lib/rancher/k3s/server/db
  13. Set the partition to mount upon operating system startup.

    [<ADMIN> ~]# echo "UUID=${BLOCK_UUID} /var/lib/rancher/k3s/server/db ext4 defaults 0 0" >> /etc/fstab
  14. Load the changes to the filesystem.

    [<ADMIN> ~]# systemctl daemon-reload
  15. Mount the partition that was added to /etc/fstab.

    [<ADMIN> ~]# mount /var/lib/rancher/k3s/server/db

    If the output is not empty, stop the procedure. The etcd disk cannot be mounted. Review what was missed in the previous steps.

  16. Verify the partition has been mounted (the output should return the block device and mount point).

    [<ADMIN> ~]# mount | grep "/var/lib/rancher/k3s/server/db"

    If the output is empty, stop the procedure. The etcd disk is not mounted. Review what was missed in the previous steps.

  17. You can now safely exit the tmux session:

    [<ADMIN> ~]# exit

What Next?