Prepare a GCP VM Instance
Overview
This procedure explains how to prepare a Google Cloud Platform (GCP) virtual machine instance for TOS.
Syslog Destination
Due to a GCP limitation, UDP syslogs cannot be sent to the load balancer and must instead be sent directly to the node - see Sending Additional Information Using Syslog.
High Availability (HA)
High availability is supported for GCP over three availability zones, giving you a higher level of resilience and availability when deploying on this cloud platform. Note that all availability zones must be in the same region. See High availability.
Remote Collectors (RCs)
Remote collectors can be deployed on GCP. They are supported in and between different GCP regions.
Prerequisites
General Requirements
- This procedure must be performed by an experienced Linux administrator with knowledge of network and storage configuration.
- To ensure optimal performance and reliability, the required resources must always be available to TOS. If resources become unavailable, TOS performance will be affected.
- Verify that you have sufficient resources (CPUs, disk storage, and main memory) to run TOS. The required resources are determined by the size of your system. See Sizing Calculation for a Clean Install.
- iptables version 1.8.5 or later. iptables must be reserved exclusively for TOS Aurora and cannot be used for any other purpose. During installation, any existing iptables configurations will be flushed and replaced.
- We do not recommend installing third-party software not specified in this procedure on your server. It may impact TOS functionality and features, and it is your responsibility to verify that it is safe to use.
- All nodes in the cluster must be running the same operating system.
- The node's network IP must be on the same subnet as the cluster primary VIP.
- See the Important Installation Information in the R25-1 Release Notes.
Operating System Requirements
- OS distribution:
  - Red Hat Enterprise Linux 8.10
  - Rocky Linux 8.10
- Disks:
  - Select an SSD storage type. Take into consideration that TOS requires 7,500 IOPS, and the expected throughput will average 250 MB/s with bursts of up to 700 MB/s.
  - The disk for the operating system and TOS data requires three partitions: /opt, /var, and /tmp. Total storage size is determined by the sizing information sent by Tufin.
  - Partition sizes:

    Partition | Data | Worker
    ---|---|---
    /opt | Minimum: 400 GB | 50 GB
    /var | 200 GB | 50 GB
    /tmp | 25 GB | 15 GB

  - Data nodes require an additional disk for etcd. Size: 50 GB
  - We recommend allocating to /opt all remaining disk space after you have partitioned the OS disk and moved etcd to a separate disk.
- Secure boot must be disabled.
- The kernel must be up to date.
- SELinux must be disabled (recommended) or configured to run in permissive mode, as described in Enabling SELinux in Permissive Mode.
- Language: en-US
- You must have permissions to execute TOS CLI commands located in directory /usr/local/bin/tos, and to use sudo if necessary.
- To run TOS CLI commands without specifying the full path (/usr/local/bin/tos), your environment path must be modified accordingly.
- The server timezone must be set.
- The rpcbind service is disabled by default when updating to this version, preventing NFS 3 from working. However, it can be enabled by running the following commands in sequence:
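A typical sequence, assuming rpcbind is managed by systemd (the exact commands for your TOS version may differ):

```shell
# Re-enable rpcbind so NFS 3 can work (assumption: systemd-managed service; run as root)
systemctl enable rpcbind
systemctl start rpcbind
systemctl status rpcbind --no-pager   # confirm the service is active
```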
Network Requirements
- You must allow access to the required Ports and Services on the firewall.
- Allocate a 24-bit CIDR subnet for the Kubernetes service network and a 16-bit CIDR subnet for the Kubernetes pods network (10.244.0.0/16 is used by default).
  The pods and services networks must be inside the following private networks: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. In addition, ensure that the dedicated CIDRs for the service network and pods network don't overlap with:
  - Each other
  - The physical addresses of your TOS servers (see below)
  - Your external load balancer IP(s)
  - Any other subnets communicating with TOS or with TOS nodes
- All TOS nodes need to be on the same subnet.

Tufin Orchestration Suite must only be installed in an appropriately secured network and physical location. Only authorized users should be granted access to TOS products and the operating system on the server.
Launch The Instance
Read and understand the Prerequisites before you start.
For additional help, refer to the official documentation Create and start a VM instance.
- In the Google Cloud console, navigate to Compute Engine > VM instances.
- Click +CREATE INSTANCE.
- Configure:
  - Name
  - Region/Zone: As appropriate
  - Machine family: GENERAL-PURPOSE
  - Series: E2
  - Machine type: Custom
  - Cores / Memory: According to the load-model parameter value received from your account team.
- Under Boot Disk, click CHANGE.
- Configure the boot disk:
  - Make sure the Public Images tab is selected.
  - Operating system: Red Hat Enterprise Linux or Rocky Linux
  - Version:
    - Red Hat Enterprise Linux 8.10
    - Rocky Linux 8.10
  - Boot disk type: SSD persistent disk
  - Size: The value received from your account team.
- Click Select. The boot disk settings will be saved.
- Select Advanced options.
- Select Networking.
- Configure Networking:
  - Network tags: default-allow-ssh, allow-nginx-ingress
  - Hostname (optional): Type a host name or use the default device value.
- Click CREATE.
Create a Firewall Rule for Traffic
Create a firewall rule to allow traffic to the nginx nodeport and health checking.
Leave default values for all parameters not mentioned.
- Navigate to VPC Network > Firewall.
- Click +CREATE FIREWALL RULE.
- Configure:
  - Name: allow-nginx-ingress
  - Network: default
  - Direction: Ingress
  - Action: Allow
  - Targets: Specified target tags
  - Target tags: allow-nginx-ingress
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 0.0.0.0/0
  - Second source filter: None
  - Protocols and ports: Specified ports and protocols - TCP 31443
- Click CREATE.
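As a sketch, the same rule can also be created from the gcloud CLI; the flag values mirror the console settings above (adjust the project and network for your environment):

```shell
# Create the ingress rule for the nginx nodeport (TCP 31443)
gcloud compute firewall-rules create allow-nginx-ingress \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:31443 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=allow-nginx-ingress
```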
Create an Instance Group
For more information about managing instance groups, see the vendor documentation Group unmanaged VMs together.
Leave default values for all parameters not mentioned.
- Navigate to Compute Engine > Instance groups.
- Click +CREATE INSTANCE GROUP.
- Select New unmanaged instance group (not the stateless or stateful options).
- Configure:
  - Name
  - Location: As defined for your VM
  - Network / Subnetwork: Same network used by the instance, or Default
  - VM Instances: Select the new VM created previously
- Under Port mapping, click ADD PORT.
- Enter port details:
  - Name: https
  - Number: 31443
- Add additional TCP ports by repeating the previous two steps, depending on the features you intend to use from the list below.
Port | Purpose | Cluster Type (Central cluster/Remote collector)
---|---|---
31099 | OPM devices | Central cluster
31514 | TCP syslogs | Central cluster
31422 | Event Streaming | Central cluster/remote collector
8443 | RC sync - Revisions and control | Remote collector
31617 | JMS - configuration metrics and device statuses | Remote collector
31843 | SecureTrack client | Central cluster (only if deploying remote collector)
- Click CREATE. The instance group will be created.
Create a Health Check
Leave default values for all parameters not mentioned.
- Navigate to Health checks.
- Click +CREATE HEALTH CHECK.
- Select/Enter values:
  - Name
  - Protocol: TCP
  - Port: 31443
- Leave default values for health criteria, or change according to your requirements.
- Click CREATE.
Create The GCP Load Balancer
After launching the instance, you need to create a separate load balancer for each of the ports you are going to need. These ports are listed in the Target column in the table below. Some ports are only needed if you are deploying a remote collector.
Protocol | Source | Target | Purpose | Cluster Type (Central cluster/Remote collector)
---|---|---|---|---
TCP | 443 | 31443 | Mandatory. UI connectivity. | Central cluster
TCP | 9099 | 31099 | OPM devices | Central cluster
TCP | 601 | 32514 | Unencrypted TCP syslogs | Central cluster/Remote collector
TCP | 6514 | 31514 | TLS-encrypted TCP syslogs | Central cluster/Remote collector
TCP | 8422 | 31422 | Event streaming | Remote collector
TCP | 9090 | 31090 | Allows central cluster to receive data from remote collector cluster | Remote collector
TCP | 31843 | 8443 | RC sync - Revisions and control | Remote collector
TCP | 8443 | 31843 | SecureTrack client | Central cluster (only if deploying remote collector)
Repeat this step for each port in the table that you are going to need.
For more information, see the vendor documentation Set up the load balancer.
Leave default values for all parameters not mentioned.
- Navigate to Network Services > Load balancing.
- Click +CREATE LOAD BALANCER.
- Under Network Load Balancer (TCP/SSL), click START CONFIGURATION.
- Select the following options:
  - Internet facing or internal only: Only between my VMs
  - Multiple regions or single region: Single region only
  - Load Balancer type: Proxy
- Click Continue.
- Enter/Select the following information:
  - Load Balancer name
  - Region
  - Network
- Click Backend configuration.
- In the Backend Configuration section, enter/select the following:
  - Backend type: Instance group
  - Protocol: TCP
  - Named port: https (the name of the port you created for the instance)
  - Timeout: 30
- In the Instance Groups section, enter/select the following:
  - Instance group: The instance group created previously
  - Port numbers: The target port
  - Health check: The health check created previously
- In the New Frontend IP and port section, enter/select the following:
  - Name
  - Subnetwork: The network you are using
  - IP address: The IP address of the machine you want to send the traffic to
  - Port number: The source port
- Click DONE.
- Click CREATE.
Create Firewall Rules
Repeat this step for each of the ports you open.
For more information, see the vendor documentation Create a new firewall rule.
- Navigate to VPC Network > Firewall.
- Click CREATE FIREWALL RULE.
- Configure:
  - Name
  - Logs: Off
  - Network: Default
  - Priority: 1000
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: (example) syslog-ingress
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 0.0.0.0/0
  - Second source filter: None
  - Protocols and ports: Specified protocols and ports - UDP 30514
- Click CREATE.
- Navigate to VM instances and select the VM created previously.
- Click EDIT.
- Go to Network Tags and add the target tag specified above.
Connect to the VMs Using SSH
To connect via SSH, you can add an existing SSH key to GCP via the UI, or use the gcloud utility to generate a key and add it.
To generate a key using gcloud:
- Download gcloud to your workstation.
- Run the gcloud auth login command to log in to your account. The variable <PROJECT_ID> can be set once globally to avoid entering it each time.
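A minimal sketch of the gcloud flow; <PROJECT_ID>, and the instance and zone names, are placeholders for your own values:

```shell
# Authenticate and set the project once
gcloud auth login
gcloud config set project <PROJECT_ID>

# Connecting with gcloud generates and registers an SSH key on first use
gcloud compute ssh <instance-name> --zone <zone>
```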
Configure Partitions
If not done already, set up partitions according to the Prerequisites.
Configure the Operating System
- If you are not currently logged in as user root, do so now.
- If you want to change the host name or IP of the machine, do so now. Once TOS has been installed, changing the host name or IP address will require reinstalling - see Changing IP Address/Host Names. To change the host name, use the command below, replacing <mynode> with your preferred name.
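On RHEL/Rocky 8 the standard command is hostnamectl, shown here as a sketch:

```shell
# Set the system host name (run as root); <mynode> is your preferred name
hostnamectl set-hostname <mynode>
hostnamectl status   # verify the new host name
```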
- Modify the environment path to run TOS CLI commands without specifying the full path (/usr/local/bin/tos).
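For example, you can append /usr/local/bin to the PATH (a minimal sketch; the profile file used for persistence may differ in your environment):

```shell
# Add /usr/local/bin to the current session's PATH
export PATH="$PATH:/usr/local/bin"

# Confirm the directory is now on the PATH
echo "$PATH" | grep -q "/usr/local/bin" && echo "PATH updated"
```

To make the change persistent, add the export line to ~/.bash_profile.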
- Synchronize your machine time with a trusted NTP server. Follow the steps in Configuring NTP Using Chrony.
- Configure the server timezone, where <timezone> is in the format Area/Location. Examples: America/Jamaica, Hongkong, GMT, Europe/Prague. A command is available to list the time-zone formats that can be used.
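On RHEL/Rocky 8, timedatectl can list and set time zones (a sketch, assuming systemd's timedatectl is available on the node):

```shell
# List the available Area/Location time-zone names
timedatectl list-timezones | head

# Set the server timezone (run as root); Europe/Prague is an example
timedatectl set-timezone Europe/Prague
timedatectl   # verify the current timezone setting
```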
- Upgrade the kernel:
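A typical kernel update on RHEL/Rocky 8 (a sketch; your environment may pin specific kernel versions):

```shell
# Update the kernel packages to the latest available version (run as root)
yum -y update kernel
```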
- Reboot the machine and log in.
- Install tmux and rsync:
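For example, on RHEL/Rocky 8:

```shell
# Install tmux and rsync from the distribution repositories (run as root)
yum -y install tmux rsync
```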
- Disable the firewall:
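A typical sequence, assuming firewalld is the active firewall service on the node:

```shell
# Stop firewalld and prevent it from starting on boot (run as root)
systemctl disable --now firewalld
```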
- Create the TOS load module configuration file /etc/modules-load.d/tufin.conf. Example using vi:
- Specify the modules to be loaded by adding the required module lines to the configuration file created in the previous step. The modules will then be loaded automatically on boot.
- Load the above modules now:
  [<ADMIN> ~]# cat /etc/modules-load.d/tufin.conf | xargs modprobe -a
  Look carefully at the output to confirm all modules loaded correctly; an error message will be issued for any modules that failed to load.
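A sketch of the steps above combined. The module names shown (overlay, br_netfilter) are illustrative assumptions only; use the exact module list specified for your TOS version:

```shell
# Write the module list (module names here are placeholder assumptions; run as root)
cat > /etc/modules-load.d/tufin.conf <<'EOF'
overlay
br_netfilter
EOF

# Load the listed modules immediately
cat /etc/modules-load.d/tufin.conf | xargs modprobe -a
```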
- Create the TOS kernel configuration file /etc/sysctl.d/tufin.conf. Example using vi:
- Specify the kernel settings by adding the required lines to the configuration file created in the previous step. The settings will then be applied on boot.
- Apply the above kernel settings now:
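A sketch of the sysctl workflow. The two settings shown are common Kubernetes prerequisites and are assumptions only; use the exact settings specified for your TOS version:

```shell
# Write the kernel settings (values here are illustrative assumptions; run as root)
cat > /etc/sysctl.d/tufin.conf <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply all sysctl configuration files now
sysctl --system
```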
Data Nodes Only. Mount The etcd Database to A Separate Disk
The etcd database should be on a separate disk to improve the stability of TOS and reduce latency. Moving the etcd database to a separate disk ensures that the Kubernetes database has access to all the resources required for optimal TOS performance.
Preliminary Preparations
- Switch to the root user.
- Install the rsync RPM.
- Find the name of the last disk added to the VM instance.
  The output returns the list of disks on the VM instance. The last letter of the disk name indicates the order in which it was added, for example: sda, sdb, sdc.
- Save the name of the last disk in a separate location. You will need it later for verification purposes.
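The disk-name check above can be sketched as follows; the sample disk names simulate the output of `lsblk -dno NAME` on a VM with three disks (an assumption for illustration):

```shell
# On the node you would run:  lsblk -dno NAME
# Here we simulate its output for a VM with disks sda..sdc
disks="sda
sdb
sdc"

# The alphabetically last name is the most recently added disk
last=$(printf '%s\n' "$disks" | sort | tail -n 1)
echo "$last"   # → sdc
```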
Add a Disk to the VM Instance
- In GCP, go to the VM instance, and on the Details page, click Edit.
- Under Additional Disks, click Add new disk.
- Configure the following settings:
  - Name
  - Source Type: Blank
  - Region: Same as VM instance
  - Location: Same as VM instance
  - Disk Type: SSD persistent disk
  - Storage: At least 50 GB
- Click Done and then Save.
Mount The New Disk
- Log into the data node as the root user.
- Run the tmux command.
- Verify that the new disk is recognized by the operating system.
  Compare the output with the name of the disk you saved in the preliminary preparations, and verify that the disk name returned ends with the next letter in the alphabet. For example, if the name you saved was sdb, the output should return sdc. This indicates that the operating system recognizes the new disk.
- Create a variable with the block device path of the new disk, where <x> represents the letter of the new disk.
- Generate a UUID for the block device of the new disk.
- Create a primary partition on the new disk.
- Verify that the partition was created.
- Format the partition as ext4.
- Verify that the partition has been formatted with the UUID and the etcd label (the output should return the partition with the UUID and an ETCD label).
- Create the mount point of the etcd database.
- Set the partition to mount upon operating system startup.
- Load the changes to the filesystem.
- Mount the partition that was added to /etc/fstab.
  If the output is not empty, stop the procedure. The etcd disk cannot be mounted. Review what was missed in the previous steps.
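The /etc/fstab entry added when setting the partition to mount on startup typically looks like the line built below; the UUID shown is a placeholder for the one you generated earlier:

```shell
# Placeholder UUID - substitute the value generated with uuidgen
UUID="f2a9c1e4-0000-0000-0000-000000000000"
MOUNT_POINT="/var/lib/rancher/k3s/server/db"

# Construct the fstab line for the etcd partition
FSTAB_LINE="UUID=${UUID} ${MOUNT_POINT} ext4 defaults 0 0"
echo "$FSTAB_LINE"
```

Appending this line to /etc/fstab (as root) makes the mount persistent across reboots.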
- Verify the partition has been mounted (the output should return the block device and mount point).
  [<ADMIN> ~]# mount | grep "/var/lib/rancher/k3s/server/db"
  If the output is empty, stop the procedure. The etcd disk is not mounted. Review what was missed in the previous steps.
- You can now safely exit the tmux session:
What Next?
- For a first-time install, see Install TOS.
- If you need to add a worker node to your existing deployment, see add a worker node.