Update TufinOS 3.x to 3.110
Overview
This procedure is for updating TufinOS 3 to the latest version.
TufinOS updates are additions to the current version of the operating system. Unlike an upgrade, which replaces the operating system completely, an update enhances TufinOS with the latest security and performance features and addresses known issues, providing a better working experience.
This procedure does not require reinstalling TOS.
The type of procedure you need to perform will depend on your deployment:
-
High Availability:
-
Without downtime - Update the worker nodes, and then update TufinOS on each data node separately. For more information on HA, see High Availability.
After updating a data node, run tos status and check if the System Status is ok and all the items listed under Components appear as ok. If this is not the case, wait for the database to sync before proceeding to update the next node.
-
With downtime - Power down TOS on all nodes in the cluster and then proceed to update TufinOS on all your nodes.
-
Single data node cluster: First update the worker nodes and then update the data node. During the update itself, there will be some downtime as all TOS processes will need to be stopped and then restarted.
-
High Availability + Remote Collector clusters:
-
Central Cluster - See High Availability bullet above
-
Remote Collector Clusters - First update the worker nodes, and then update TufinOS on the data node. Repeat for each remote cluster. For more information on Remote Collector clusters, see Remote Collectors.
High availability is not supported for Remote Collector clusters.
-
Single data node cluster + Remote Collector clusters: First update the central cluster (worker nodes first, then the data node). Afterwards, repeat for each Remote Collector cluster.
Preliminary Preparations
-
Verify that you can update directly from your current TufinOS version
-
Check your current TufinOS version:
Your TufinOS release appears in the output. If the command is not recognized, you have a non-TufinOS operating system and cannot update using this procedure.
- If there is a supported update path for your TufinOS release, continue with the update. Otherwise, update first to a supported release.
-
Check the cluster health.
-
On the primary data node, check the status of the k3s service.
-
On the same node or nodes, check the TOS status.
In the output, under the line k3s.service - Aurora Kubernetes, two lines should appear - Loaded... and Active... - similar to the example below. If they appear, continue with the next step; otherwise, contact Tufin Support for assistance.
Example output:
[<ADMIN> ~]$ sudo systemctl status k3s
Redirecting to /bin/systemctl status k3s.service
● k3s.service - Aurora Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-08-24 17:14:38 IDT; 1 day 18h ago
     Docs: https://k3s.io
  Process: 1241 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 1226 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
 Main PID: 1250 (k3s-server)
    Tasks: 1042
   Memory: 2.3G
In the output, if the System Status is Ok and all the items listed under Components appear as ok, continue with the next steps. Otherwise contact Tufin Support for assistance.
Example output:
[<ADMIN> ~]$ sudo tos status
Tufin Orchestration Suite 2.0
System Status: Ok
System Mode: Multi Node
Nodes: 1 Master, 1 Worker. Total 2 nodes. Nodes are healthy.
Components:
  Node: Ok
  Cassandra: Ok
  Mongodb: Ok
  Mongodb_sc: Ok
  Nats: Ok
  Neo4j: Ok
  Postgres: Ok
  Postgres_sc: Ok
Downloads
-
Download the TufinOS 3.110 update package from the Download Center to your local machine.
-
Log in to the node you are updating as the tufin-admin user.
-
Transfer the TufinOS update package to /opt/tufin/data.
-
Extract the TufinOS update run file from the archive (see the command sketch below).
The run file name includes the release, version, and build number.
TufinOS update file example:
TufinOS-4.20-639387-x86_64-8.8-Final-Update.run.tgz
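The transfer and extraction commands are not reproduced here. The following is a minimal sketch, assuming a standard Linux toolchain (scp and tar) and the example file name shown above - substitute your own file name and node address:
# Hypothetical example - adjust the file name and node IP address to your environment.
# Copy the update archive from your local machine to the node:
scp TufinOS-4.20-639387-x86_64-8.8-Final-Update.run.tgz tufin-admin@<node-ip>:/opt/tufin/data/
# On the node, extract the .run file (and, if included, its .sha256 checksum file) from the archive:
cd /opt/tufin/data
tar -xzvf TufinOS-4.20-639387-x86_64-8.8-Final-Update.run.tgz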
-
Verify the integrity of the TufinOS installation package.
[<ADMIN> ~]# sha256sum -c TufinOS-4.20-639387-x86_64-8.8-Final-Update.run.sha256
The output should return OK.
Update TufinOS
Update TufinOS in a High Availability Cluster
-
Update without downtime
-
Update worker nodes
-
Transfer the TufinOS update file <update_file> to the /opt/tufin/data directory on the node you are updating.
-
Switch to the root user.
-
Go to /opt/tufin/data.
-
Run the screen command.
-
Execute the TufinOS update file (see the command sketch after this list):
-
When prompted to continue the update, enter yes.
Do not interrupt the update process. Wait until the successful completion message appears.
-
After the update is complete, reboot the node:
-
Log in to the primary data node.
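For reference, the update steps above can be summarized in the following command sketch. It assumes the example file name from the Downloads section and that the .run file is executed with sh; treat it as an illustration and adjust the file name to the one you actually extracted:
sudo su -                                             # switch to the root user
cd /opt/tufin/data                                    # go to the directory containing the update file
screen                                                # run the update inside a screen session
sh TufinOS-4.20-639387-x86_64-8.8-Final-Update.run    # execute the update file and enter yes when prompted
reboot                                                # after the successful completion message, reboot the node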
-
Update data nodes
Repeat this procedure for all three data nodes.
After updating a data node, run tos status and check if the System Status is ok and all the items listed under Components appear as ok. If this is not the case, wait for the database to sync before proceeding to update the next node.
-
Backup your TOS data.
If you are going to perform this procedure over multiple maintenance periods, create a new backup each time.
-
Create the backup using tos backup create:
-
You can check the backup creation status using tos backup status, which shows the status of backups in progress. Wait until completion before continuing.
-
Run the following command to display the list of backups saved on the node:
-
Check that your backup file appears in the list, and that the status is "Completed".
-
Run the following command to export the backup to a file:
-
If your backup files are saved locally:
-
Run sudo tos backup export to save your backup file from a TOS backup directory as a single .gzip file. If there are other backups present, they will be included as well.
-
Transfer the exported .gzip file to a safe, remote location. Make sure you have the location of your backups safely documented and accessible, including credentials needed to access them, for recovery when needed.
After the backup is exported, we recommend verifying that the file contents can be viewed by running the following command:
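The transfer and verification commands themselves are not reproduced here. As an illustration only, assuming standard scp and tar tools, a hypothetical remote host, and the example export path shown in the tos backup export output below:
scp /opt/tufin/backups/backup-21-2-pga.0.0-final-20210823163342.tar.gzip user@backup-host:/backups/   # copy the exported backup to a remote location (hypothetical host and path)
tar -tzf /opt/tufin/backups/backup-21-2-pga.0.0-final-20210823163342.tar.gzip                         # list the archive contents to confirm the file is readable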
-
Example output:
[<ADMIN> ~]$ sudo tos backup create
[Aug 23 16:18:42] INFO Running backup
Backup status can be monitored with "tos backup status"
Example output:
[<ADMIN> ~]$ sudo tos backup status
Found active backup "23-august-2021-16-18"
Example output:
[<ADMIN> ~]$ sudo tos backup list
["23-august-2021-16-18"]
Started: "2021-08-23 13:18:43 +0000 UTC"
Completed: "N/A"
Modules: "ST, SC"
HA mode: "false"
TOS release: "21.2 (PGA.0.0) Final"
TOS build: "21.2.2100-210722164631509"
Expiration Date: "2021-09-22 13:18:43 +0000 UTC"
Status: "Completed"
The command creates a single backup file.
[<ADMIN> ~]$ sudo tos backup export
[Aug 23 16:33:42] INFO Preparing target dir /opt/tufin/backups
[Aug 23 16:33:42] INFO Compressing...
[Aug 23 16:33:48] INFO Backup exported file: /opt/tufin/backups/backup-21-2-pga.0.0-final-20210823163342.tar.gzip
[Aug 23 16:33:48] INFO Backup export has completed
-
Transfer the TufinOS update file <update_file> to the /opt/tufin/data directory on the node you are updating.
-
Get the name of the node you want to update by running the following command on any of the data nodes.
A list of nodes which currently exist in the cluster is displayed.
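The documented command is not reproduced here. As an assumption-based illustration, on a k3s-based TOS cluster where kubectl is available on the data nodes, the node names can be listed with:
sudo kubectl get nodes    # the NAME column shows the node names; STATUS and ROLES show health and role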
-
In the output, note the name of the node you want to update.
-
Run the following command:
-
Switch to the root user.
-
Go to /opt/tufin/data.
-
Run the screen command.
-
Execute the TufinOS update file:
-
When prompted to continue the update, enter yes.
Do not interrupt the update process. Wait until the successful completion message appears.
-
After the update is complete, run the following command:
-
After the update is complete, reboot the node:
-
Log in to the primary data node.
-
Check the TOS status.
-
Run the following command on the node you updated:
-
In the output, check that the System Status is Ok and all the items listed under Components appear as Ok. If this is not the case, contact Tufin Support for assistance.
Example output:
[<ADMIN> ~]$ sudo tos status
[Aug 23 16:34:12] INFO Checking cluster health status
TOS Aurora
System Status: "Ok"
Cluster Status:
  Status: "Ok"
  Mode: "High Availability"
  Nodes:
  - ["User1"]
    Type: "Primary"
    Status: "Ok"
    Disk usage:
    - ["/opt"]
      Status: "Ok"
      Usage: 25%
  - ["User2"]
    Type: "Ha Data Node"
    Status: "Ok"
    Disk usage:
    - ["/opt"]
      Status: "Ok"
      Usage: 7.6%
  - ["User3"]
    Type: "Ha Data Node"
    Status: "Ok"
    Disk usage:
    - ["/opt"]
      Status: "Ok"
      Usage: 7.4%
Databases:
  - ["cassandra"] Status: "Ok"
  - ["kafka"] Status: "Ok"
  - ["mongodb"] Status: "Ok"
  - ["mongodb_sc"] Status: "Ok"
  - ["ongDb"] Status: "Ok"
  - ["postgres"] Status: "Ok"
  - ["postgres_sc"] Status: "Ok"
Backup Storage:
  Location: "Local s3:http://minio.default.svc:9000/velerok8s/restic/default"
  Status: "Ok"
  Last compatible healthy backup Timestamp: 2023-08-21 12:01:07 +0000 UTC
Registry:
  Expiration ETA: 817 days
  Status: "Ok"
-
where <NODENAME> is the name of the node you want to update.
All TOS processes will be restarted on all the data nodes in the cluster, and you will be able to resume using TOS.
-
Update with downtime
-
Backup your TOS data.
If you are going to perform this procedure over multiple maintenance periods, create a new backup each time.
-
Create the backup using tos backup create:
-
You can check the backup creation status using tos backup status, which shows the status of backups in progress. Wait until completion before continuing.
-
Run the following command to display the list of backups saved on the node:
-
Check that your backup file appears in the list, and that the status is "Completed".
-
Run the following command to export the backup to a file:
-
If your backup files are saved locally:
-
Run sudo tos backup export to save your backup file from a TOS backup directory as a single .gzip file. If there are other backups present, they will be included as well.
-
Transfer the exported .gzip file to a safe, remote location. Make sure you have the location of your backups safely documented and accessible, including credentials needed to access them, for recovery when needed.
After the backup is exported, we recommend verifying that the file contents can be viewed by running the following command:
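As in the corresponding step earlier on this page, the commands are not reproduced here; a minimal sketch, assuming standard scp and tar tools and the example export path from the output below:
scp /opt/tufin/backups/backup-21-2-pga.0.0-final-20210823163342.tar.gzip user@backup-host:/backups/   # hypothetical remote host and path
tar -tzf /opt/tufin/backups/backup-21-2-pga.0.0-final-20210823163342.tar.gzip                         # list contents to verify readability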
-
Example output:
[<ADMIN> ~]$ sudo tos backup create
[Aug 23 16:18:42] INFO Running backup
Backup status can be monitored with "tos backup status"
Example output:
[<ADMIN> ~]$ sudo tos backup status
Found active backup "23-august-2021-16-18"
Example output:
[<ADMIN> ~]$ sudo tos backup list
["23-august-2021-16-18"]
Started: "2021-08-23 13:18:43 +0000 UTC"
Completed: "N/A"
Modules: "ST, SC"
HA mode: "false"
TOS release: "21.2 (PGA.0.0) Final"
TOS build: "21.2.2100-210722164631509"
Expiration Date: "2021-09-22 13:18:43 +0000 UTC"
Status: "Completed"
The command creates a single backup file.
[<ADMIN> ~]$ sudo tos backup export
[Aug 23 16:33:42] INFO Preparing target dir /opt/tufin/backups
[Aug 23 16:33:42] INFO Compressing...
[Aug 23 16:33:48] INFO Backup exported file: /opt/tufin/backups/backup-21-2-pga.0.0-final-20210823163342.tar.gzip
[Aug 23 16:33:48] INFO Backup export has completed
-
Shut down TOS.
-
Run the command:
This process may take time.
- Check that all processes have been stopped successfully. Run the command:
-
Wait until all the pods have disappeared from the list or reached a status of Completed; the service controller, ps-proxy, and reportpack pods are an exception and can continue running (see the illustrative command below).
All TOS processes are now stopped on all the data nodes in the cluster.
A list of all pods is displayed.
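The shutdown and verification commands are not reproduced here. As an illustration only, assuming kubectl is available on the data node (TOS runs on k3s), you can watch the pod list with a command such as:
sudo kubectl get pods    # repeat until only the service controller, ps-proxy, and reportpack pods remain or show Completed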
-
Repeat steps 4-8 for each node in the cluster.
-
Transfer the TufinOS update file <update_file> to the /opt/tufin/data directory on the node you are updating.
-
Switch to the root user.
-
Go to /opt/tufin/data.
-
Run the screen command.
-
Execute the TufinOS update file:
-
When prompted to continue the update, enter yes.
Do not interrupt the update process. Wait until the successful completion message appears.
-
After the update is complete, reboot the node:
-
After updating all data nodes, log in to the primary data node.
-
Restart TOS on the primary data node:
All TOS processes will be restarted on all the data nodes in the cluster, and you will be able to resume using TOS.
Update TufinOS in Single Data Node Cluster
-
Update worker nodes
-
Transfer the TufinOS update file <update_file> to the /opt/tufin/data directory on the node you are updating.
-
Switch to the root user.
-
Go to /opt/tufin/data.
-
Run the screen command.
-
Execute the TufinOS update file:
-
When prompted to continue the update, enter yes.
Do not interrupt the update process. Wait until the successful completion message appears.
-
After the update is complete, reboot the node:
-
Log in to the primary data node.
-
Update primary data node
-
Backup your TOS data.
If you are going to perform this procedure over multiple maintenance periods, create a new backup each time.
-
Create the backup using tos backup create:
-
You can check the backup creation status using tos backup status, which shows the status of backups in progress. Wait until completion before continuing.
-
Run the following command to display the list of backups saved on the node:
-
Check that your backup file appears in the list, and that the status is "Completed".
-
Run the following command to export the backup to a file:
-
If your backup files are saved locally:
-
Run sudo tos backup export to save your backup file from a TOS backup directory as a single .gzip file. If there are other backups present, they will be included as well.
-
Transfer the exported .gzip file to a safe, remote location. Make sure you have the location of your backups safely documented and accessible, including credentials needed to access them, for recovery when needed.
After the backup is exported, we recommend verifying that the file contents can be viewed by running the following command:
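As in the corresponding step earlier on this page, the commands are not reproduced here; a minimal sketch, assuming standard scp and tar tools and the example export path from the output below:
scp /opt/tufin/backups/backup-21-2-pga.0.0-final-20210823163342.tar.gzip user@backup-host:/backups/   # hypothetical remote host and path
tar -tzf /opt/tufin/backups/backup-21-2-pga.0.0-final-20210823163342.tar.gzip                         # list contents to verify readability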
-
Example output:
[<ADMIN> ~]$ sudo tos backup create
[Aug 23 16:18:42] INFO Running backup
Backup status can be monitored with "tos backup status"
Example output:
[<ADMIN> ~]$ sudo tos backup status
Found active backup "23-august-2021-16-18"
Example output:
[<ADMIN> ~]$ sudo tos backup list
["23-august-2021-16-18"]
Started: "2021-08-23 13:18:43 +0000 UTC"
Completed: "N/A"
Modules: "ST, SC"
HA mode: "false"
TOS release: "21.2 (PGA.0.0) Final"
TOS build: "21.2.2100-210722164631509"
Expiration Date: "2021-09-22 13:18:43 +0000 UTC"
Status: "Completed"
The command creates a single backup file.
[<ADMIN> ~]$ sudo tos backup export
[Aug 23 16:33:42] INFO Preparing target dir /opt/tufin/backups
[Aug 23 16:33:42] INFO Compressing...
[Aug 23 16:33:48] INFO Backup exported file: /opt/tufin/backups/backup-21-2-pga.0.0-final-20210823163342.tar.gzip
[Aug 23 16:33:48] INFO Backup export has completed
-
Shut down TOS.
-
Run the command:
This process may take time.
- Check that all processes have been stopped successfully. Run the command:
-
Wait until all the pods have disappeared from the list or reached a status of Completed; the service controller, ps-proxy, and reportpack pods are an exception and can continue running (see the illustrative command below).
All TOS processes are now stopped on all the data nodes in the cluster.
A list of all pods is displayed.
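As in the corresponding step in the High Availability procedure above, an illustration only, assuming kubectl is available on the node:
sudo kubectl get pods    # repeat until only the service controller, ps-proxy, and reportpack pods remain or show Completed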
-
Transfer the TufinOS update file <update_file> to the /opt/tufin/data directory on the node you are updating.
-
Switch to the root user.
-
Go to /opt/tufin/data.
-
Run the screen command.
-
Execute the TufinOS update file:
-
When prompted to continue the update, enter yes.
Do not interrupt the update process. Wait until the successful completion message appears.
-
After the update is complete, reboot the node:
-
Log in to the primary data node.
-
Restart TOS on the primary data node:
All TOS processes will be restarted on all the data nodes in the cluster, and you will be able to resume using TOS.
Update TufinOS in a High Availability Cluster with Remote Collector Clusters
Follow the procedures above to update the nodes in the Central cluster and the Remote Collector clusters.
After updating all nodes:
-
Check that the Remote Collector is still connected.
-
Run the following command:
-
If the Remote Collector cluster is not connected, run the following commands to connect it to the Central cluster, where:
- CENTRAL-CLUSTER-VIP is the external IP address (VIP) of your central cluster
- REMOTE-CLUSTER-VIP is the external IP address (VIP) of the server you want to connect (i.e. the current server)
- REMOTE-CLUSTER-NAME is any alphanumeric string you choose; quotes are not used so you cannot embed spaces
- OTP is the one-time password returned from the central cluster in the previous step
[<ADMIN> ~]$ sudo tos cluster connect --central-cluster-vip=CENTRAL-CLUSTER-VIP --remote-cluster-vip=REMOTE-CLUSTER-VIP --remote-cluster-name=REMOTE-CLUSTER-NAME --initial-secret=OTP
The output shows all connected clusters.
Update TufinOS in Single Data Node Cluster with Remote Collector Clusters
Follow the procedures above to update the nodes in the Central cluster and the Remote Collector clusters.
After updating all nodes:
-
Check that the Remote Collector is still connected.
-
Run the following command:
-
If the Remote Collector cluster is not connected, run the following commands to connect it to the Central cluster, where:
- CENTRAL-CLUSTER-VIP is the external IP address (VIP) of your central cluster
- REMOTE-CLUSTER-VIP is the external IP address (VIP) of the server you want to connect (i.e. the current server)
- REMOTE-CLUSTER-NAME is any alphanumeric string you choose; quotes are not used so you cannot embed spaces
- OTP is the one-time password returned from the central cluster in the previous step
[<ADMIN> ~]$ sudo tos cluster connect --central-cluster-vip=CENTRAL-CLUSTER-VIP --remote-cluster-vip=REMOTE-CLUSTER-VIP --remote-cluster-name=REMOTE-CLUSTER-NAME --initial-secret=OTP
The output shows all connected clusters.