Adding a Worker Node - AWS

Overview

This procedure adds a worker node to an existing TOS cluster on the AWS platform. Perform it only after you have completed:

  1. Prepare an AWS Instance

  2. Install TOS

You do not need to install TOS on the worker nodes.

Add the Worker Node to the Cluster

  1. Log in to the primary data node.

  2. On the primary data node:

    [<ADMIN> ~]$ sudo tos cluster node add --role=worker

    On completion, a command string is displayed, which you must run on the new node within 30 minutes. If the allotted time expires, repeat this step to generate a new command string.

  3. Copy the command string to the clipboard.

  4. Log in to the new node.

  5. On the new node, paste the command string you copied and run it. If the allotted time has expired, return to step 2 and generate a new command string.

  6. Verify that the node was added by running sudo tos cluster node list on the primary data node, as shown below.
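
    For example, on the primary data node:

    [<ADMIN> ~]$ sudo tos cluster node list

    The new worker node should appear in the list of nodes. (The exact output format depends on your TOS version.)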

Check TOS Status

  1. On the primary data node, check the TOS status.

    [<ADMIN> ~]$ sudo tos status

  2. In the output, check that the System Status is Ok and that all the items listed under Components appear as Ok. If this is not the case, contact Tufin Support. (A quick way to scan the Status lines is shown after the examples below.)

  3. Example output for a central cluster data node:

    [<ADMIN> ~]$ tos status         
    [Mar 28 13:42:09]  INFO Checking cluster health status           
    TOS Aurora
    Tos Version: 24.2 (PRC1.1.0)
    
    System Status: "Ok"
                
    Cluster Status:
       Status: "Ok"
       Mode: "Multi Node"
    
    Nodes
      Nodes:
      - ["node1"]
        Type: "Primary"
        Status: "Ok"
        Disk usage:
        - ["/opt"]
          Status: "Ok"
          Usage: 19%
      - ["node3"]
        Type: "Worker Node"
        Status: "Ok"
        Disk usage:
        - ["/opt"]
          Status: "Ok"
          Usage: 4%
    
    registry
      Expiration ETA: 819 days
      Status: "Ok"
    
    Infra
    Databases:
    - ["cassandra"]
      Status: "Ok"
    - ["kafka"]
      Status: "Ok"
    - ["mongodb"]
      Status: "Ok"
    - ["mongodb_sc"]
      Status: "Ok"
    - ["ongDb"]
      Status: "Ok"
    - ["postgres"]
      Status: "Ok"
    - ["postgres_sc"]
      Status: "Ok"
    
    Application
    Application Services Status OK
    Running services 50/50
    
    Remote Clusters
    Number Of Remote Clusters: 2
      - ["RC"]
         Connectivity Status:: "OK:"
      - ["RC2"]
         Connectivity Status:: "OK"
    
      Backup Storage:
      Location: "Local s3:http://minio.default.svc:9000/velerok8s/restic/default"
      Status: "Ok"
      Latest Backup: 2024-03-23 05:00:34 +0000 UTC

    Example output for a remote cluster data node:

    [<ADMIN> ~]$ tos status         
    [Mar 28 13:42:09]  INFO Checking cluster health status           
    TOS Aurora
    Tos Version: 24.2 (PRC1.0.0)
    
    System Status: "Ok"
                
    Cluster Status:
       Status: "Ok"
       Mode: "Single Node"
    
    Nodes
      Nodes:
      - ["node2"]
        Type: "Primary"
        Status: "Ok"
        Disk usage:
        - ["/opt"]
          Status: "Ok"
          Usage: 19%
      
    registry
      Expiration ETA: 819 days
      Status: "Ok"
    
    Infra
    Databases:
    - ["mongodb"]
      Status: "Ok"
    - ["postgres"]
      Status: "Ok"
    
    Application
    Application Services Status OK
    Running services 16/16
    
      Backup Storage:
      Location: "Local s3:http://minio.default.svc:9000/velerok8s/restic/default"
      Status: "Ok"
      Latest Backup: 2024-03-23 05:00:34 +0000 UTC
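
To quickly scan for problems, you can pipe the status output through grep (a convenience one-liner, not part of the official procedure; grep is standard on the underlying Linux host):

    [<ADMIN> ~]$ sudo tos status | grep 'Status:'

This prints every Status line in one place, so anything that is not Ok stands out immediately.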

After the node is added, we recommend stopping TOS and then starting it again to improve the node's performance. Note that this will require downtime.
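
For example, on the primary data node (a minimal sketch, assuming the standard tos stop and tos start subcommands are available in your TOS version):

    [<ADMIN> ~]$ sudo tos stop
    [<ADMIN> ~]$ sudo tos start

Once all services are running again, sudo tos status should report System Status: "Ok".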