Worker Nodes
Overview
When TOS is first installed on a server, the deployment becomes a cluster with a single node. Performance and functionality can be improved by adding worker nodes (additional servers) to the cluster. These nodes only need the operating system installed; TOS itself does not need to be installed on them, because the relevant services are deployed when the node is added.
Worker nodes add computing power and may be needed to meet the demands of scaling up, such as an increase in the number of monitored devices or in the frequency of revisions. When new worker nodes are added, some compute functions are performed on them instead of on the primary data node or other data nodes, leaving those nodes freer to handle system management functions. There is practically no limit to the number of worker nodes that can be added, but nodes should only be added under the guidance of Tufin Support.
Before adding a worker node, consult with your account manager to make sure your requirements justify it. If your system doesn't require the additional processing power afforded by adding nodes, you will not experience any performance benefits.
You can remove worker nodes if their processing power is no longer required. After a node is removed, any compute functions it was handling will be handled by the remaining nodes.
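To see which nodes are currently in the cluster, you can run the node list command referenced in the procedures below on the primary data node, for example:
[<ADMIN> ~]$ sudo tos cluster node list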
Add Worker Node
Prerequisites
Procedure
- Log in to the primary data node.
- On the primary data node, run the command that adds a new worker node to the cluster.
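The add command itself is not reproduced above. Assuming it follows the same pattern as the node list and node remove commands shown elsewhere on this page, the invocation would be along these lines (a sketch only; confirm the exact syntax in the TOS CLI reference):
[<ADMIN> ~]$ sudo tos cluster node add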
On completion, a new command string is displayed, which you will need to run on the new node within 30 minutes. If the allocated time expires, you will need to repeat the current step.
- Copy the command string to the clipboard.
- Log in to the new node.
- On the new node, paste the command string copied previously and run it. If the allocated time has expired, you will need to start from the beginning.
- Verify that the node was added by running sudo tos cluster node list on the primary data node.
- On the primary data node, check the TOS status by running tos status.
- In the output, check that the System Status is Ok and that all the items listed under Components appear as Ok. If this is not the case, contact Tufin Support.
Example output for a central cluster data node:
[<ADMIN> ~]$ tos status
[Mar 28 13:42:09] INFO Checking cluster health status
TOS Aurora
  Tos Version: 24.2 (PRC1.1.0)
  System Status: "Ok"
  Cluster Status:
    Status: "Ok"
    Mode: "Multi Node"
  Nodes
    Nodes:
      - ["node1"]
        Type: "Primary"
        Status: "Ok"
        Disk usage:
          - ["/opt"]
            Status: "Ok"
            Usage: 19%
      - ["node3"]
        Type: "Worker Node"
        Status: "Ok"
        Disk usage:
          - ["/opt"]
            Status: "Ok"
            Usage: 4%
  registry
    Expiration ETA: 819 days
    Status: "Ok"
  Infra
    Databases:
      - ["cassandra"]
        Status: "Ok"
      - ["kafka"]
        Status: "Ok"
      - ["mongodb"]
        Status: "Ok"
      - ["mongodb_sc"]
        Status: "Ok"
      - ["ongDb"]
        Status: "Ok"
      - ["postgres"]
        Status: "Ok"
      - ["postgres_sc"]
        Status: "Ok"
  Application
    Application Services Status OK
    Running services 50/50
  Remote Clusters
    Number Of Remote Clusters: 2
      - ["RC"]
        Connectivity Status: "OK"
      - ["RC2"]
        Connectivity Status: "OK"
  Backup
    Storage:
      Location: "Local s3:http://minio.default.svc:9000/velerok8s/restic/default"
      Status: "Ok"
    Latest Backup: 2024-03-23 05:00:34 +0000 UTC
Example output for a remote cluster data node:
[<ADMIN> ~]$ tos status
[Mar 28 13:42:09] INFO Checking cluster health status
TOS Aurora
  Tos Version: 24.2 (PRC1.0.0)
  System Status: "Ok"
  Cluster Status:
    Status: "Ok"
    Mode: "Single Node"
  Nodes
    Nodes:
      - ["node2"]
        Type: "Primary"
        Status: "Ok"
        Disk usage:
          - ["/opt"]
            Status: "Ok"
            Usage: 19%
  registry
    Expiration ETA: 819 days
    Status: "Ok"
  Infra
    Databases:
      - ["mongodb"]
        Status: "Ok"
      - ["postgres"]
        Status: "Ok"
  Application
    Application Services Status OK
    Running services 16/16
  Backup
    Storage:
      Location: "Local s3:http://minio.default.svc:9000/velerok8s/restic/default"
      Status: "Ok"
    Latest Backup: 2024-03-23 05:00:34 +0000 UTC
After the node is added, we recommend stopping TOS and then starting it again to improve the node's performance. Note that this requires downtime.
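A minimal sketch of the restart, assuming the standard tos stop and tos start lifecycle commands (confirm against your TOS CLI reference, and plan for the downtime):
[<ADMIN> ~]$ sudo tos stop
[<ADMIN> ~]$ sudo tos start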
Remove Worker Node
- Identify the worker node you want to remove.
- If the node is in a healthy state:
On the primary data node, run:
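The removal command itself is not reproduced above. Assuming it is the same as the forced variant shown further down, without the --force flag, the invocation would be:
[<ADMIN> ~]$ sudo tos cluster node remove <node>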
Parameters

Parameter: <node>
Description: Hostname of the node to remove.
Required/Optional: Required
On completion, a new command string appears, which you will need to run on the node you want to remove within 30 minutes. If the allocated time expires, you will need to repeat the current step.
- Log in to the CLI of the node to be removed.
- On the node to be removed, run the command string displayed on completion of the command above. On completion, all TOS-related directories and data will be deleted from the node, so make sure you run it on the correct node. Running the command on the wrong node will destroy the cluster.
- If the node you want to remove is not in a healthy state:
On the primary data node, run:
[<ADMIN> ~]$ sudo tos cluster node remove <node> --force
Parameters
Parameter: <node>
Description: Hostname of the node to remove.
Required/Optional: Required
TOS directories will not be deleted from the node.
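For example, assuming you want to force-remove the worker node named node3 from the multi-node status output shown earlier (substitute your own node's hostname):
[<ADMIN> ~]$ sudo tos cluster node remove node3 --force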
- If the machine is still serviceable, you can delete the TOS directories manually.
- Verify that the node has been removed by running sudo tos cluster node list again on the primary data node.