Upgrade TOS Aurora to R24-1 PHF3.0.0 and Earlier

Overview

This procedure is for upgrading TOS to R24-1 PHF3.0.0 and earlier. It is identical for all platforms and operating systems.

Before starting, make a backup and export it to an external location in case you need to roll back. After the upgrade completes successfully, make a new backup, as backups made on one product version cannot be restored to another.

For all information on this release, including new features, resolved and known issues, EOL announcements, and additional information, see the R24-1 Release Notes.

For all other installation and upgrade options, see the appropriate procedure in the table of contents.

How Should I Upgrade My Deployment?

Worker Nodes

Only the primary data node needs to be upgraded. It will automatically upgrade TOS on all other worker and data nodes in the same cluster. The TOS CLI will be upgraded on the other nodes when you next run a TOS CLI command on them.

Remote Clusters

All clusters need to be running the same TOS version. Therefore, make sure to upgrade the primary data node in both the central cluster and remote clusters. Upgrade the central cluster first.

High Availability (HA)

If you are upgrading a high availability deployment, you must prepare the other data nodes before upgrading TOS. This requires logging in to them separately in a different session.

Disaster Recovery (DR)

If you have disaster recovery, first upgrade the active deployment and then upgrade the standby deployment.

Prerequisites

If you use external SSO to authenticate SecureChange, you must upgrade to R24-1 PHF1.0.0 or later. Earlier versions of this release do not support external SSO authentication. See "REL-903" in Bugs - Known and Resolved Issues.

TOS Compatibility and Upgrade Paths

  1. Make sure your current version can be upgraded directly to this version of TOS Aurora - see TOS Release History and Upgrade Paths.
  2. If you are using NFS, your backup server needs to be running NFS 4.
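
    If your backup server is a Linux host running the kernel NFS server, one illustrative way to confirm that NFS 4 is enabled is to list the versions the server advertises (run this on the backup server; the check differs for other NFS server types):

    cat /proc/fs/nfsd/versions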

Ports and Services

  1. If your deployment incorporates remote clusters and you are upgrading from a release lower than R23-1, be aware that an additional port, 9090, is now required for TOS to run successfully - see remote collector ports.
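
    As an illustrative connectivity test (assuming the nc utility is available; <TARGET-NODE-IP> is a placeholder for the node you need to reach - see the remote collector ports documentation for the connection direction), you can verify that port 9090 is reachable once it has been opened:

    nc -zv <TARGET-NODE-IP> 9090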

Downloads

  • Download the TOS R24-1 installation package from the Download Center.

  • The downloaded file is in .tgz format: <FILENAME>.tgz.
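
    As an optional, illustrative sanity check before transferring the package, you can list the archive contents to confirm the download is intact:

    tar -tzf <FILENAME>.tgz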

Required Steps Before Starting

  1. Run the command tos status. In the output, make sure the system status is "OK", all nodes are "healthy", and under "Disk usage", /opt is not more than 70%. If any of these conditions are not met, the upgrade will fail.

  2. Make sure you have at least 25 GB free on the primary data node in the /tmp directory.
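
    As an illustrative way to check the /opt usage from step 1 and the free space available for /tmp in one command (mount point layout may differ on your system):

    [<ADMIN> ~]# df -h /opt /tmp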

  3. If you monitor devices managed by a management device/domain that does not have a dedicated license because it inherits its license status from its monitored devices/domains (for example, FMC, FMG, or Panorama), make sure all such monitored devices/domains are licensed or removed. Otherwise, the management device/domain will become unlicensed after the upgrade.

  4. If you are upgrading a remote collector cluster:

    • Do not start the upgrade until the upgrade to the central cluster has completed.

    • It must run the same release as the central cluster.

    • When upgrading from a remote collector cluster running release R20-2, the cluster will need to be reconnected manually to the central cluster. When upgrading from later versions, this is done automatically.

  5. You must have a valid license before starting the upgrade, otherwise the procedure will abort.

    1. Select Admin > Licenses. The License Management page appears.

    2. If your license has expired, or if there is no license uploaded, upload a valid license. For more information, see Uploading License Files to TOS (Solution Tiers).

  6. Create a backup of the installation file that was used for your current TOS Aurora installation - /opt/tos/tos.tar - to a directory outside of /opt/tos. This is necessary in case you need to roll back.
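
    For example (illustrative - the destination directory shown is an assumption; use any location outside /opt/tos):

    [<ADMIN> ~]# sudo cp /opt/tos/tos.tar /home/tufin-admin/tos.tar.bak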

  7. Create a backup of your TOS Aurora data (see One-Time Backup Procedure).

  8. Transfer the run file to the directory /opt/tufin/data on the primary data node.
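
    For example, using scp from the machine where you downloaded the package (illustrative; the user and IP address are placeholders):

    scp <FILENAME>.tgz <USER>@<PRIMARY-DATA-NODE-IP>:/opt/tufin/data/
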
  9. If you use automated provisioning, make sure there are no queued provisioning tasks. You can check this using the waiting_tasks API.
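
    A minimal sketch of how such a check could be scripted with curl, assuming basic authentication; the host, credentials, and endpoint path are placeholders to be taken from your SecureChange API documentation:

    curl -k -u <API-USER>:<PASSWORD> "https://<SECURECHANGE-HOST>/<WAITING-TASKS-ENDPOINT>"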

  10. See the R24-1 Pre-Installation Information in the Release Notes.

Upgrade Procedure

Read and understand Prerequisites before you start.

  1. Log in to the primary data node using SSH as user tufin-admin or another user with sudo or root privileges.

  2. Check your current version by running the following command:

    [<ADMIN> ~]# tos version
  3. Check that your cluster status is healthy.

    1. Run the following command on the primary data node:

      [<ADMIN> ~]# systemctl status k3s

      Example Output

      [root@TufinOS ~]# systemctl status k3s
      Redirecting to /bin/systemctl status k3s.service
      ● k3s.service - Aurora Kubernetes
         Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
         Active: active (running) since Tue 2021-08-24 17:14:38 IDT; 1 day 18h ago
           Docs: https://k3s.io
        Process: 1241 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
        Process: 1226 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
       Main PID: 1250 (k3s-server)
          Tasks: 1042
         Memory: 2.3G
    2. In the output, under the line k3s.service - Aurora Kubernetes, check that the two lines Loaded... and Active... appear, similar to the example above. If they do, continue with the next step; otherwise, contact Tufin Support for assistance.
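
      As a quick, illustrative supplementary check, you can also confirm the service state directly:

      [<ADMIN> ~]# systemctl is-active k3s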

  4. Make sure all users are logged out from the browser.

  5. Make a one-time backup.

  6. After your backup has completed, run the following command:

    [<ADMIN> ~]# tmux new-session -s upgrade
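
    If your SSH session disconnects during the upgrade, you can log in again and reattach to the same tmux session (illustrative):

    [<ADMIN> ~]# tmux attach -t upgrade
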
  7. Extract the TOS run file from its archive.

    [<ADMIN> ~]$ tar -xvzf tos-xxxx-xxxxxxxx-final-xxxx.run.tgz
  8. Run the following commands:

    [<ADMIN> ~]# cd /opt/tufin/data/
    [<ADMIN> ~]# sh <rls>.run

    where

    <rls> is the name of the run file extracted in the previous step.

    This step will replace the TOS CLI. Do not run any other TOS CLI commands until the tos update command, which you will run in a subsequent step, has completed. If you attempt to run TOS CLI commands other than tos update, you will receive a message that there is a mismatch between the TOS CLI and the application, and you will be given the option of reverting to the CLI that was just replaced. If you revert to the previous CLI, you will be able to run other TOS CLI commands, but you will not be able to continue with the upgrade until you run the run file again.
  9. If upgrading from R22-2 PGA.0.0 or R22-1 PHF3.x and your syslog VIP is set up with transport TCP, import the syslog certificate.

    [<ADMIN> ~]# sudo tos certificate import --type syslog --ca=<CA-PATH> --cert=<CERT-PATH> --key=<KEY-PATH> --skip-cli-validation

    Make sure your TCP syslogs are sent over TLS.
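
    Before importing, you may optionally confirm the certificate's subject and validity period with openssl (illustrative):

    [<ADMIN> ~]# openssl x509 -in <CERT-PATH> -noout -subject -dates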

  10. Upgrade TOS:

    [<ADMIN> ~]# tos update /opt/tos/tos.tar

    When the command completes, you will again be able to run any TOS CLI command.

  11. Verify.

    Check the TOS version again, as described in step 2 above. Make sure that the version displayed is the one to which you intended to upgrade.

    [<ADMIN> ~]# tos version

    Check the cluster status again, as described in step 3 above. This time only one check applies - the one for R21-3 and later.

    [<ADMIN> ~]# systemctl status k3s
  12. HA only. Copy the TOS CLI to all data nodes.

    On the primary data node, copy TOS from /usr/local/bin/ to the same location on the other data nodes.

    rsync -avhe ssh /usr/local/bin/tos <user>@<non-primary data node>:/usr/local/bin/tos --rsync-path="sudo rsync"

    where <user> is the user on the data node you are connecting with and <non-primary data node> is the IP address of the non-primary data node.
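
    To confirm the copy succeeded, you can optionally compare checksums of the file on the primary and non-primary data nodes (illustrative):

    sha256sum /usr/local/bin/tos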

  13. Make a new backup.

    Before allowing users to start work, make a new one-time backup. This is necessary because the data schemas have been modified and any backups made before the upgrade can no longer be restored to the new version of the product. See Backup Procedure.

  14. If you monitor FortiManager devices, add a SAN-signed certificate to each device.

  15. To enable automatic license usage tracking, make sure that all TOS users are able to access the domain aus.tufin.com from the browsers on their workstations. For more information, see Sending License Usage Reports Automatically.
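
    As an illustrative reachability check from a workstation (a browser test is equally valid):

    curl -I https://aus.tufin.com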

  16. If your deployment incorporates remote clusters and you have upgraded from a release lower than R23-1, make sure port 9090 is open - see remote collector ports.

  17. Make sure users clear their browser cache.

  18. Reactivate your license if necessary.

    In some cases, particularly when hardware is changed, license activation is lost during the upgrade process. If activation is lost, the functionality of TOS Aurora is not limited, but future upgrades will not be possible until the license is reactivated.

    • Check the status - go to Admin > Licenses. The License window appears.

    • If the status shown in the window is anything other than Activated, follow the instructions in Activate License.
  19. If you are a PS (Tufin Professional Services) customer:

    1. Restart the PS web service

      [<ADMIN> ~]$ sudo systemctl start tufin-ps-web
    2. Enable the PS cron job

      Edit the cron file

      [<ADMIN> ~]# crontab -e

      Delete the # character from the beginning of the PS-Scripts line

      # 1 * * * * cat /opt/tufin/securitysuite/ps/PS-Scripts
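
      After saving, you can optionally confirm that the PS-Scripts line is no longer commented out (illustrative):

      [<ADMIN> ~]# crontab -l | grep PS-Scripts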