Prepare a GCP VM Instance

Overview

This procedure explains how to prepare a Google Cloud Platform (GCP) virtual machine instance for TOS.

Syslog Destination

Due to a GCP limitation, UDP syslogs cannot be sent to the load balancer and must instead be sent directly to the node - see Sending Additional Information Using Syslog.

High Availability (HA)

High availability is supported for GCP over three availability zones, giving you a higher level of resilience and availability when deploying on this cloud platform. Note that all availability zones must be in the same region. See High availability.

Remote Collectors (RCs)

Remote collectors can be deployed on GCP. They are supported in and between different GCP regions.

Prerequisites

General Requirements

  • This procedure must be performed by an experienced Linux administrator with knowledge of network and storage configuration.

  • To ensure optimal performance and reliability, the required resources need to always be available for TOS. If resources become unavailable, this will affect TOS performance.

  • Verify that you have sufficient resources (CPUs, disk storage and main memory) to run TOS. The required resources are determined by the size of your system. See Sizing Calculation for a Clean Install.

  • iptables version 1.8.5 or above is required. iptables must be reserved exclusively for TOS Aurora and cannot be used for any other purpose. During installation, any existing iptables configurations will be flushed and replaced.

  • We do not recommend installing third-party software that is not specified in this procedure on your server. Such software may impact TOS functionality and features, and it is your responsibility to verify that it is safe to use.

  • All nodes in the cluster must be running the same operating system.
  • The node's network IP must be on the same subnet as the cluster primary VIP.

Operating System Requirements

  • OS distribution:

    • Red Hat Enterprise Linux 8.10

    • Rocky Linux 8.10

  • Disks:

    • Select SSD storage. Take into consideration that TOS requires 7,500 IOPS, with an average throughput of 250 MB/s and bursts of up to 700 MB/s.

    • The disk for the operating system and TOS data requires three partitions: /opt, /var and /tmp. Total storage size is determined by the sizing information sent by Tufin.

    • Partition sizes:

      Partition    Data               Worker
      /opt         Minimum: 400 GB    50 GB
      /var         200 GB             50 GB
      /tmp         25 GB              15 GB
    • Data nodes require an additional disk for etcd. Size: 50 GB

    • Do not add additional disks before installing TOS. If additional storage is later required, you can extend the partition size by adding an additional disk after TOS has been installed. In HA deployments, the additional disk needs to be added to all data nodes.

    • We recommend allocating all remaining disk space to the /opt partition after you have partitioned the OS disk and moved etcd to a separate disk.

  • Secure boot must be disabled (a verification sketch for several of the requirements in this section appears at the end of the list).

  • The kernel must be up to date.

  • SELinux must be disabled (recommended), or configured to run in permissive mode, as described in Enabling SELinux in Permissive Mode.

  • Language: en-US

  • You must have permissions to execute TOS CLI commands located in directory /usr/local/bin/tos and to use sudo if necessary.

  • To run TOS CLI commands without specifying the full path (/usr/local/bin/tos), your environment path must be modified accordingly.

  • The server timezone must be set.

  • The rpcbind service is disabled by default when updating to this version, preventing NFS 3 from working. However, it can be enabled by running the following commands in sequence:

    systemctl unmask rpcbind.socket rpcbind.service
    systemctl start rpcbind.socket rpcbind.service
    systemctl enable rpcbind.socket rpcbind.service
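
You can spot-check several of these requirements from a shell on the node. This is a verification sketch only - the mokutil package may need to be installed first, and the exact output wording varies between distributions.

    [<ADMIN> ~]# mokutil --sb-state          # expect: SecureBoot disabled
    [<ADMIN> ~]# getenforce                  # expect: Disabled or Permissive
    [<ADMIN> ~]# dnf check-update kernel     # no listed update means no newer kernel is available
    [<ADMIN> ~]# timedatectl | grep "Time zone"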

Network Requirements

    Tufin Orchestration Suite must only be installed in an appropriately secured network and physical location. Only authorized users should be granted access to TOS products and the operating system on the server.

  • You must allow access to required Ports and Services on the firewall.

  • Allocate a 24-bit CIDR subnet for the Kubernetes service network and a 16-bit CIDR subnet for the Kubernetes pods network (10.244.0.0/16 is used by default). An example allocation appears after this list.

    The pods and services networks must be inside the following private networks: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. In addition, ensure that the dedicated CIDRs for the service network and pods network don't overlap with:

    • Each other

    • The physical addresses of your TOS servers (see below)

    • Your external load balancer IP(s)

    • Any other subnets communicating with TOS or with TOS nodes

  • All TOS nodes need to be on the same subnet.
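
The following is an illustrative, non-overlapping allocation. The pods range is the default mentioned above; the service network, node subnet, and load balancer IP are hypothetical placeholders to be replaced with values from your own environment.

    # Example allocation (illustrative values only)
    # Pods network:      10.244.0.0/16    (default)
    # Service network:   10.96.0.0/24
    # TOS node subnet:   192.168.10.0/24  (node IPs and primary VIP)
    # Load balancer IP:  192.168.10.50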

Launch The Instance

Read and understand the Prerequisites before you start.

For additional help, refer to the official documentation Create and start a VM instance.

  1. In the Google Cloud console, navigate to Compute Engine > VM instances

  2. Click +CREATE INSTANCE.

  3. Configure:

    • Name:

    • Region/Zone: As appropriate

    • Machine family: GENERAL-PURPOSE

    • Series: E2

    • Machine type: Custom

    • Cores / Memory: According to the load-model parameter value received from your account team.

  4. Under Boot Disk, click CHANGE.

  5. Configure the boot disk.

    • Make sure tab Public Images is selected.

    • Operating system: Red Hat Enterprise Linux or Rocky Linux

    • Version:

      • Red Hat Enterprise Linux 8.10

      • Rocky Linux 8.10

    • Boot disk type: SSD persistent disk

    • Size: The value received from your account team.

  6. Click Select. The boot disk settings will be saved.

  7. Select Advanced options.

  8. Select Networking.

  9. Configure Networking:

    • Network tags: default-allow-ssh, allow-nginx-ingress

  10. (Optional) Hostname: Type a host name or use the default value.

  11. Click CREATE.
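
If you prefer to create the instance from the command line, a roughly equivalent gcloud command is sketched below. The machine type (cores/memory) and boot disk size are placeholders to be replaced with the values received from your account team, the image flags assume Rocky Linux 8, and secure boot is explicitly disabled per the prerequisites. Verify the flags against your gcloud version before use.

    gcloud compute instances create "<VM_INSTANCE_NAME>" \
        --zone="<ZONE>" \
        --machine-type=e2-custom-8-32768 \
        --image-family=rocky-linux-8 \
        --image-project=rocky-linux-cloud \
        --boot-disk-type=pd-ssd \
        --boot-disk-size=500GB \
        --no-shielded-secure-boot \
        --tags=default-allow-ssh,allow-nginx-ingress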

Create a Firewall Rule for Traffic

Create a firewall rule to allow traffic to the nginx NodePort and to allow health checks.

Leave default values for all parameters not mentioned.

  1. Navigate to VPC Network > Firewall

  2. Click +CREATE FIREWALL RULE

  3. Configure:

    • Name: allow-nginx-ingress

    • Network: default

    • Direction: Ingress

    • Action: Allow

    • Targets: Specified target tags

    • Target tags: allow-nginx-ingress

    • Source filter: IPv4 ranges

    • Source IPv4 ranges: 0.0.0.0/0

    • Second source filter: None

    • Protocols and ports: Specified ports and protocols - TCP 31443

  4. Click CREATE.
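
The same rule can also be created from the command line; the sketch below mirrors the console values above.

    gcloud compute firewall-rules create allow-nginx-ingress \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:31443 \
        --source-ranges=0.0.0.0/0 \
        --target-tags=allow-nginx-ingress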

Create an Instance Group

For more information about managing instance groups, see the vendor documentation Group unmanaged VMs together.

Leave default values for all parameters not mentioned.

  1. Navigate to Compute Engine > Instance groups

  2. Click +CREATE INSTANCE GROUP.

  3. Select New unmanaged instance group (not the stateless or stateful options).

  4. Configure:

    • Name:

    • Location: As defined for your VM

    • Network / Subnetwork: Same network used by the instance or Default

    • VM Instances: Select the new VM created previously

  5. Under Port mapping, click ADD PORT

  6. Enter port details

    • Name: https

    • Number: 31443

  7. Add additional TCP ports, by repeating the previous two steps, depending on the features you intend to use from the list below.

    Port     Purpose                                             Cluster Type (Central cluster/Remote collector)
    31099    OPM devices                                         Central cluster
    31514    TCP syslogs                                         Central cluster
    31422    Event Streaming                                     Central cluster/remote collector
    8443     RC sync - Revisions and control                     Remote collector
    31617    JMS - configuration metrics and device statuses     Remote collector
    31843    SecureTrack client                                  Central cluster (only if deploying remote collector)
  8. Click CREATE. The instance group will be created.
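
A command-line equivalent of the unmanaged instance group setup is sketched below. <INSTANCE_GROUP_NAME>, <ZONE>, and <VM_INSTANCE_NAME> are placeholders, and the named-ports list should include only the ports you need from the table above.

    gcloud compute instance-groups unmanaged create "<INSTANCE_GROUP_NAME>" --zone="<ZONE>"
    gcloud compute instance-groups unmanaged add-instances "<INSTANCE_GROUP_NAME>" \
        --zone="<ZONE>" --instances="<VM_INSTANCE_NAME>"
    gcloud compute instance-groups set-named-ports "<INSTANCE_GROUP_NAME>" \
        --zone="<ZONE>" --named-ports=https:31443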

Create a Health Check

Leave default values for all parameters not mentioned.

  1. Navigate to Health checks.

  2. Click +CREATE HEALTH CHECK.

  3. Select/Enter values:

    • Name

    • Protocol: TCP

    • Port: 31443

  4. Leave default values for health criteria, or change according to your requirements.

  5. Click CREATE.
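
The health check can also be created with gcloud. The sketch below uses the default health criteria and a placeholder name; add --region if you need a regional health check.

    gcloud compute health-checks create tcp "<HEALTH_CHECK_NAME>" --port=31443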

Create The GCP Load Balancer

After launching the instance, you need to create a separate load balancer for each of the ports you are going to need. These ports are listed in the Target column in the table below. Some ports are only needed if you are deploying a remote collector.

Protocol   Source   Target   Purpose                                                                 Cluster Type (Central Cluster/Remote Collectors)
TCP        443      31443    Mandatory. UI connectivity.                                             Central cluster
TCP        9099     31099    OPM devices                                                             Central cluster
TCP        601      32514    Unencrypted TCP syslogs                                                 Central cluster/Remote collector
TCP        6514     31514    TLS encrypted TCP syslogs                                               Central cluster/Remote collector
TCP        8422     31422    Event streaming                                                         Remote collector
TCP        9090     31090    Allows central cluster to receive data from remote collector cluster   Remote collector
TCP        31843    8443     RC sync - Revisions and control                                         Remote collector
TCP        61617    31617    JMS - configuration metrics and device statuses                         Remote collector
TCP        8443     31843    SecureTrack client                                                      Central cluster (only if deploying remote collector)
Repeat this step for each port in the table that you are going to need.

For more information, see the vendor documentation Set up the load balancer.

Leave default values for all parameters not mentioned.

  1. Navigate to Network Services > Load balancing.

  2. Click +CREATE LOAD BALANCER.

  3. Under Network Load Balancer (TCP/SSL), click START CONFIGURATION.
  4. Select the following options:

    • Internet facing or internal only: Only between my VMs.

    • Multiple regions or single region: Single region only

    • Load Balancer type: Proxy

  5. Click Continue.

  6. Enter/Select the following information:

    • Load Balancer name

    • Region

    • Network
  7. Click Backend configuration.

  8. In the Backend Configuration section, enter/select the following:

    • Backend type: Instance group

    • Protocol: TCP

    • Named port: https (the name of the port you created for the instance)

    • Timeout: 30

  9. In the Instance Groups section, enter/select the following:

    • Instance group: The instance group created previously

    • Port numbers: The target port

    • Health check: The health check created previously

  10. In the New Frontend IP and port section, enter/select the following:

    • Name:

    • Subnetwork: The network you are using

    • IP address: The IP address of the machine you want to send the traffic to

    • Port number: The source port

  11. Click DONE.

  12. Click CREATE.

Create Firewall Rules

Repeat this step for each of the ports you open.

For more information, see the vendor documentation Create a new firewall rule.

  1. Navigate to VPC Network > Firewall

  2. Click CREATE FIREWALL RULE

  3. Configure:

    • Name

    • Logs: Off

    • Network: Default

    • Priority: 1000

    • Direction of traffic: Ingress

    • Action on match: Allow

    • Targets: Specified target tags

    • Target tags: (example) syslog-ingress

    • Source filter: IPv4 ranges

    • Source IPv4 ranges: 0.0.0.0/0

    • Second source filter: None

    • Protocols and ports: Specified protocols and ports - UDP 30514

  4. Click CREATE.

  5. Navigate to VM instances and select the VM created previously

  6. Click EDIT

  7. Go to Network Tags and add the target tag specified above
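
As with the earlier rule, this can also be scripted with gcloud. The sketch below uses the example tag syslog-ingress and UDP port 30514 from the steps above, with allow-syslog-udp as an arbitrary rule name; adjust it for each port you open.

    gcloud compute firewall-rules create allow-syslog-udp \
        --network=default \
        --priority=1000 \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=udp:30514 \
        --source-ranges=0.0.0.0/0 \
        --target-tags=syslog-ingress
    gcloud compute instances add-tags "<VM_INSTANCE_NAME>" --zone="<ZONE>" --tags=syslog-ingress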

Connect to the VMs Using SSH

To connect via SSH, you can add an existing SSH key to GCP via the UI, or use the gcloud utility to generate a key and add it.

To generate a key using gcloud:

  1. Download gcloud to your workstation.

  2. Run gcloud auth login to log in to your account, set the project, and connect to the instance. The <PROJECT_ID> value can be set once globally to avoid entering it each time.

    gcloud auth login
    gcloud config set project "<PROJECT_ID>"
    gcloud compute ssh --zone "<ZONE>" "<VM_INSTANCE_NAME>"
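
If you already have an SSH key pair and prefer to add it yourself, one option (a sketch - <USERNAME> is the login name you want on the VM and ssh-keys.txt is an arbitrary local file name) is to push the public key to the instance metadata. Note that this replaces any existing ssh-keys metadata value on the instance.

    echo "<USERNAME>:$(cat ~/.ssh/id_rsa.pub)" > ssh-keys.txt
    gcloud compute instances add-metadata "<VM_INSTANCE_NAME>" --zone "<ZONE>" --metadata-from-file ssh-keys=ssh-keys.txt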

Configure Partitions

If not done already, set up partitions according to the Prerequisites.
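
To confirm the layout before continuing, you can list the block devices and check the mount points TOS requires. This is a quick verification sketch; device names and sizes will differ per machine.

    [<ADMIN> ~]# lsblk
    [<ADMIN> ~]# df -h /opt /var /tmp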

Configure the Operating System

  1. If you are not currently logged in as user root, do so now.

    [<ADMIN> ~]$ su -
  2. If you want to change the host name or IP of the machine, do so now. Once TOS has been installed, changing the host name or IP address will require reinstalling - see Changing IP Address/Host Names. To change the host name, use the command below, replacing <mynode> with your preferred name.

    [<ADMIN> ~]# hostnamectl set-hostname <mynode>
  3. Modify the environment path to run TOS CLI commands without specifying the full path (/usr/local/bin/tos).

    [<ADMIN> ~]# echo 'export PATH="${PATH}:/usr/local/bin"' | sudo tee -a /root/.bashrc > /dev/null
  4. Synchronize your machine time with a trusted NTP server. Follow the steps in Configuring NTP Using Chrony.
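
    A minimal chrony setup is sketched below for reference. It assumes the chrony package is available from your repositories, and <NTP_SERVER_ADDRESS> is a placeholder for your organization's NTP server.

    [<ADMIN> ~]# dnf install -y chrony
    [<ADMIN> ~]# echo "server <NTP_SERVER_ADDRESS> iburst" >> /etc/chrony.conf
    [<ADMIN> ~]# systemctl enable --now chronyd
    [<ADMIN> ~]# chronyc sources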

  5. Configure the server timezone.

    [<ADMIN> ~]# timedatectl set-timezone <timezone>

    where <timezone> is in the format Area/Location. Examples: America/Jamaica, Hongkong, GMT, Europe/Prague. To list the time zones that can be used in the command, run:

    [<ADMIN> ~]# timedatectl list-timezones
  6. Upgrade the kernel:

    [<ADMIN> ~]# dnf upgrade
  7. Reboot the machine and log in.
  8. Install tmux and rsync:

    [<ADMIN> ~]# dnf install -y rsync tmux
  9. Disable the firewall:

    [<ADMIN> ~]# systemctl stop firewalld
    [<ADMIN> ~]# systemctl disable firewalld
  10. Create the TOS load module configuration file /etc/modules-load.d/tufin.conf. Example using vi:

    [<ADMIN> ~]# vi /etc/modules-load.d/tufin.conf
  11. Specify the modules to be loaded by adding the following lines to the configuration file created in the previous step. The modules will then be loaded automatically on boot.

    br_netfilter
    overlay
    ebtables
    ebtable_filter
  12. Load the above modules now:

    [<ADMIN> ~]# cat /etc/modules-load.d/tufin.conf |xargs modprobe -a 

    Look carefully at the output to confirm all modules loaded correctly; an error message will be issued for any modules that failed to load.

  13. Create the TOS kernel configuration file /etc/sysctl.d/tufin.conf. Example using vi:

    [<ADMIN> ~]# vi /etc/sysctl.d/tufin.conf
  14. Specify the kernel settings to be made by adding the following lines to the configuration file created in the previous step. The settings will then be applied on boot.

    net.bridge.bridge-nf-call-iptables = 1
    fs.inotify.max_user_watches = 1048576
    fs.inotify.max_user_instances = 10000
    net.ipv4.ip_forward = 1
  15. Apply the above kernel settings now:

    [<ADMIN> ~]# sysctl --system
For maximum security, we recommend only installing official security updates and security patches for your Linux distribution, as well as the RPMs specifically mentioned in this section.

Data Nodes Only. Mount The etcd Database to A Separate Disk

The etcd database should be on a separate disk to improve TOS stability and reduce latency. Moving the etcd database to a separate disk ensures that the Kubernetes database has access to all the resources required for optimal TOS performance. This will require some downtime, as you will have to shut down TOS before separating the disks.

Preliminary Preparations

  1. Switch to the root user.

    [<ADMIN> ~]$ sudo su -
  2. Install the rsync RPM.

    [<ADMIN> ~]$ dnf install rsync
  3. Find the name of the last disk added to the VM instance.

    [<ADMIN> ~]# lsblk -ndl -o NAME

    The output returns the list of disks on the VM instance. The last letter of the disk name indicates the order in which it was added, for example: sda, sdb, sdc.

  4. Save the name of the last disk in a separate location. You will need it later for verification purposes.

Add a Disk to the VM Instance

  1. In GCP, go to the VM instance, and on the Details page, click Edit.

  2. Under Additional Disks, click Add new disk.

  3. Configure the following settings:

    • Name:

    • Source Type: Blank

    • Region: Same as VM instance

    • Location: Same as VM instance

    • Disk Type: SSD persistent disk

    • Storage: At least 50 GB

  4. Click Done and then Save.

Mount The New Disk

  1. Log into the data node as the root user.

  2. Run the tmux command.

    [<ADMIN> ~]$ tmux new-session -s etcd
  3. Verify that the new disk is recognized by the operating system.

    [<ADMIN> ~]# lsblk
    [<ADMIN> ~]# ls -l /dev/sd*

    Compare the output with the name of the disk you saved in the preliminary preparations, and verify that the disk name it returned ends with the next letter in the alphabet. For example, if the name you saved was sdb the output should return sdc. This indicates that the operating system recognizes the new disk.

  4. Create a variable with the block device path of the new disk.

    [<ADMIN> ~]# BLOCK_DEV="/dev/sd<>"

    where <> represents the letter of the new disk.

  5. Generate a UUID for the block device of the new disk.

    [<ADMIN> ~]# BLOCK_UUID="$(uuidgen)"
  6. Create a primary partition on the new disk.

    [<ADMIN> ~]# parted -s -a optimal ${BLOCK_DEV} mklabel msdos -- mkpart primary ext4 1MiB 100%
  7. Verify that the partition was created.

    [<ADMIN> ~]# parted -s ${BLOCK_DEV} print
  8. Format the partition as ext4.

    [<ADMIN> ~]# mkfs.ext4 -L ETCD -U ${BLOCK_UUID} ${BLOCK_DEV}1
  9. Verify that the partition has been formatted with the UUID and the etcd label (output should return the partition with UUID and an ETCD label).

    [<ADMIN> ~]# blkid | grep "$BLOCK_UUID"
  10. Create the mount point of the etcd database.

    [<ADMIN> ~]# mkdir -p /var/lib/rancher/k3s/server/db
  11. Set the partition to mount upon operating system startup.

    [<ADMIN> ~]# echo "UUID=${BLOCK_UUID} /var/lib/rancher/k3s/server/db ext4 defaults 0 0" >> /etc/fstab
    echo "UUID=${BLOCK_UUID} /var/lib/rancher/k3s/server/db ext4 defaults 0 0" >> /etc/fstab
  12. Load the changes to the filesystem.

    [<ADMIN> ~]# systemctl daemon-reload
  13. Mount the partition that was added to /etc/fstab.

    [<ADMIN> ~]# mount /var/lib/rancher/k3s/server/db

    If the output is not empty, stop the procedure. The etcd disk cannot be mounted. Review what was missed in the previous steps.

  14. Verify the partition has been mounted (the output should return the block device and mount point).

    [<ADMIN> ~]# mount | grep "/var/lib/rancher/k3s/server/db"

    If the output is empty, stop the procedure. The etcd disk is not mounted. Review what was missed in the previous steps.

  15. You can now safely exit the tmux session:

    [<ADMIN> ~]# exit

What Next?