Moving TOS
Overview
This procedure explains how to move your TOS data to new servers. This is necessary, for example, when you need to replace your existing hardware, whether because of hardware failures or because you have purchased newer hardware for your TOS deployment.
To move your data, you must:
- Back up your data
- Recreate your TOS environment on the new servers
- Restore your data
The process entails downtime, so it is important to plan the move for your standard change windows. The length of the downtime depends on the amount of data being moved.
Requirements
- Operating system installation files.
- TOS installation files - use the same release that is already installed.
- Multi-node environment - you will need to add the same number of nodes to the new cluster as you had in the original cluster. For more information, see Worker Nodes.
- High availability environment - you will need to add two additional data nodes to the new cluster.
- Network settings such as the primary VIP and syslog VIPs will be restored to the target servers with the same values they had previously. All network requirements specified in the original install procedure still apply, so prepare for any necessary changes before you start.
Move the TOS Database
You can back up your TOS cluster locally or to external backup storage.
Local Backup
- Back up your existing TOS cluster.
  If your system runs on TufinOS, all TOS commands must be run using sudo, because user root is not available on this operating system.
  - Create the backup.
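    For example, to create a backup named pre-move with a 48-hour TTL (the --name and --ttl flag spellings are an assumption; confirm the exact syntax in the TOS CLI reference):
    [<ADMIN> ~]$ sudo tos backup create --name pre-move --ttl 48h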
    Parameters:
    <NAME> (Optional) - Name for the backup that appears in the tos backup list command. If not specified, a default name will be given, containing the date and time.
    <TIME TO LIVE> (Optional) - Time to Live (TTL) in hours, minutes, and seconds, in the format XhYmZs. When the specified time has passed, the backup will automatically be deleted and its disk space released. If no value is set, the default TTL is 720 hours (30 days).
    You can only create backup files if the backup directory has sufficient storage.
  - Monitor the backup status.
    - To view the backup in progress, run:
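      For example (the status subcommand is an assumption; confirm the exact syntax in the TOS CLI reference):
      [<ADMIN> ~]$ sudo tos backup status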
    - To view the completed backup, run:
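      For example, using the tos backup list command referenced in the parameters above:
      [<ADMIN> ~]$ sudo tos backup list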
Wait until completion before continuing.
    For more information, see Backup and Restore.
- Export the backup.
  Export all your backup files from the TOS backup directory to a single .gzip file in a remote location. All the backup files and your backup policy will be saved to a single backup archive file in the specified target location. If the target location is not specified, the archive will be created in /opt/tufin/backups. The target location cannot be the new servers. After installing the operating system (step 3), all data on the servers will be erased.
  The backup archive file will be named in the following format: backup-<TOSVER>-YYYYMMDDHHMMSS.tar.gzip, where <TOSVER> is your SecureTrack version number.
  - Switch the current directory to /opt.
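    For example:
    [<ADMIN> ~]$ cd /opt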
  - Check disk usage of your backup files:
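    For example, assuming the default backup directory /opt/tufin/backups:
    [<ADMIN> /opt]$ sudo du -sh /opt/tufin/backups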
  - Check available space in the target:
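    For example, where <PATH> is the target path:
    [<ADMIN> /opt]$ df -h <PATH>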
  - Run the screen command:
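    Running the export inside screen keeps it alive if your SSH session disconnects:
    [<ADMIN> /opt]$ screen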
  - Export the backup files:
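    A plausible form, mirroring the [-s|--source] style of the tos backup import command shown later on this page (the destination flag spelling is an assumption; confirm the exact syntax in the TOS CLI reference):
    [<ADMIN> /opt]$ sudo tos backup export [-d|--destination <PATH>]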
    where <PATH> is the destination directory (or the path to an external backup storage) in which to place the archived backup (directory only, without a file name). If the path does not exist, it will be created automatically.
    The files are compressed during the export, but the degree of compression cannot be known in advance. Ensure that the target has at least as much free space as the original files occupy.
    After the backup is exported, we recommend verifying that the file contents can be viewed by running the following command:
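    For example, listing the archive contents without extracting them:
    [<ADMIN> /opt]$ sudo tar -tzf <PATH>/backup-<TOSVER>-YYYYMMDDHHMMSS.tar.gzip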
- High Availability and Multi-node processing only. Add additional nodes.
  - For High Availability: add two additional data nodes.
  - For Multi-node processing: add the same number of additional worker nodes that you had in the original cluster.
  To add the additional nodes:
  - On the primary data node, run:
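    A plausible form (the add subcommand and --type flag are assumptions; the tos cluster node command family is confirmed by the node list step below):
    [<ADMIN> ~]$ sudo tos cluster node add --type <TYPE>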
    where <TYPE> is worker or data, depending on the type of node you want to add.
    On completion, a new command string appears, which you will need to run on the new node within one hour. If the allocated time expires, you will need to repeat the current step.
  - Log in to the CLI of the server to be added as a new node in the cluster.
  - On the new node, run the command string displayed previously on the primary data node. If the allocated time has expired, you will need to start from the beginning.
  - Verify that the node was added by running sudo tos cluster node list on the primary data node.
- High Availability only. Set up High Availability on the new servers.
  - On the primary data node, run:
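    A plausible form (the ha enable subcommand is an assumption; confirm the exact syntax in the TOS CLI reference):
    [<ADMIN> ~]$ sudo tos cluster ha enable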
Replication of data will commence. The time to completion will vary depending on the size of your database.
On completion, TOS will be in high availability mode.
  - Verify that HA is active by running sudo tos status.
  - We recommend defining a notification to inform you in the event of a change in the health of your cluster - see TOS Monitoring.
- Import the backup to the new primary data node.
  - Run the screen command:
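    Run inside screen so the import survives a disconnected session:
    [<ADMIN> ~]$ screen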
  - Import the backup files:
    [<ADMIN> ~]$ sudo tos backup import [-s|--source <FULLPATH>]
    where <FULLPATH> is the full path, including the backup archive file name, created previously by the tos backup export command.
    If you saved your backup externally, you must connect to the external backup storage.
- Restore the backup to the new primary data node.
If you are restoring a central cluster with remote collectors attached, you must reconnect all remote collectors after the restore.
  - Run the screen command:
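    Run inside screen so the restore survives a disconnected session:
    [<ADMIN> ~]$ screen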
  - Run the following command to view the list of backups. You will select the backup to restore from this list.
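    For example:
    [<ADMIN> ~]$ sudo tos backup list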
  - Restore the backup:
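    A plausible form (the --name flag spelling is an assumption; confirm the exact syntax in the TOS CLI reference):
    [<ADMIN> ~]$ sudo tos backup restore --name <BACKUP-NAME>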
    where <BACKUP-NAME> is the name of the backup you created.
    Tufin auto-generated certificates will be automatically re-created on the first connection following the restore.
External Backup Storage
- Back up your existing TOS cluster.
  If your system runs on TufinOS, all TOS commands must be run using sudo, because user root is not available on this operating system.
  - Create the backup.
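    For example, to create a backup named pre-move with a 48-hour TTL (the --name and --ttl flag spellings are an assumption; confirm the exact syntax in the TOS CLI reference):
    [<ADMIN> ~]$ sudo tos backup create --name pre-move --ttl 48h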
    Parameters:
    <NAME> (Optional) - Name for the backup that appears in the tos backup list command. If not specified, a default name will be given, containing the date and time.
    <TIME TO LIVE> (Optional) - Time to Live (TTL) in hours, minutes, and seconds, in the format XhYmZs. When the specified time has passed, the backup will automatically be deleted and its disk space released. If no value is set, the default TTL is 720 hours (30 days).
    You can only create backup files if the backup directory has sufficient storage.
  - Monitor the backup status.
    - To view the backup in progress, run:
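      For example (the status subcommand is an assumption; confirm the exact syntax in the TOS CLI reference):
      [<ADMIN> ~]$ sudo tos backup status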
    - To view the completed backup, run:
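      For example, using the tos backup list command referenced in the parameters above:
      [<ADMIN> ~]$ sudo tos backup list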
Wait until completion before continuing.
    For more information, see Backup and Restore.
- High Availability and Multi-node processing only. Add additional nodes.
  - For High Availability: add two additional data nodes.
  - For Multi-node processing: add the same number of additional worker nodes that you had in the original cluster.
  To add the additional nodes:
  - On the primary data node, run:
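    A plausible form (the add subcommand and --type flag are assumptions; the tos cluster node command family is confirmed by the node list step below):
    [<ADMIN> ~]$ sudo tos cluster node add --type <TYPE>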
    where <TYPE> is worker or data, depending on the type of node you want to add.
    On completion, a new command string appears, which you will need to run on the new node within one hour. If the allocated time expires, you will need to repeat the current step.
  - Log in to the CLI of the server to be added as a new node in the cluster.
  - On the new node, run the command string displayed previously on the primary data node. If the allocated time has expired, you will need to start from the beginning.
  - Verify that the node was added by running sudo tos cluster node list on the primary data node.
- High Availability only. Set up High Availability on the new servers.
  - On the primary data node, run:
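    A plausible form (the ha enable subcommand is an assumption; confirm the exact syntax in the TOS CLI reference):
    [<ADMIN> ~]$ sudo tos cluster ha enable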
Replication of data will commence. The time to completion will vary depending on the size of your database.
On completion, TOS will be in high availability mode.
  - Verify that HA is active by running sudo tos status.
  - We recommend defining a notification to inform you in the event of a change in the health of your cluster - see TOS Monitoring.
- Restore the backup to the new primary data node.
If you are restoring a central cluster with remote collectors attached, you must reconnect all remote collectors after the restore.
  - Run the screen command:
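    Run inside screen so the restore survives a disconnected session:
    [<ADMIN> ~]$ screen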
  - Run the following command to view the list of backups. You will select the backup to restore from this list.
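    For example:
    [<ADMIN> ~]$ sudo tos backup list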
  - Restore the backup:
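    A plausible form (the --name flag spelling is an assumption; confirm the exact syntax in the TOS CLI reference):
    [<ADMIN> ~]$ sudo tos backup restore --name <BACKUP-NAME>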
    where <BACKUP-NAME> is the name of the backup you created.
    Tufin auto-generated certificates will be automatically re-created on the first connection following the restore.
Post-Restore Configurations
- If network settings have changed, such as VIP addresses, update them now - see Change VIP Addresses. The network requirements specified in the original install procedure still apply.
- Create a new scheduled backup policy.