Multi-Node Cluster
Overview
When TOS is first installed on a server, the deployment becomes a cluster with a single node. Performance and functionality improvements can be achieved by adding additional nodes (servers) to the cluster. These additional nodes need to have the operating system installed but do not require installation of TOS, because the relevant services are installed when adding the node.
There are two types of nodes that can be added to your cluster: worker nodes, which perform compute operations, and data nodes, which perform data storage and handling operations. See TOS Aurora Architecture.
List the nodes in your cluster by running the command sudo tos cluster node list.
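For example:
[<ADMIN> ~]$ sudo tos cluster node list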
If you want to set up high availability, see High Availability.
Worker Nodes
Worker nodes add computing power and may be needed to meet the demands of scaling up, such as increasing the number of monitored devices or increasing the frequency of revisions. When new worker nodes are added, some of the compute functions are performed on them rather than on the primary data node or other data nodes, leaving those nodes freer to handle system management functions. There is almost no limit to the number of worker nodes that can be added, but this must only be done under the guidance of Tufin support.
Before adding a worker node, consult with your account manager to make sure your requirements justify it. If your system doesn't require the additional processing power afforded by adding nodes, you will not experience any performance benefits.
You can remove worker nodes if their processing power is no longer required. After a node is removed, any compute functions it was handling will be handled by the remaining nodes.
Data Nodes
Data nodes add load balancing and allow you to run in HA (high availability) mode. If you are not going to implement HA, keep only the primary data node. If you are going to implement HA, add exactly two additional data nodes - no more, no fewer. If you do not have two additional machines available, do not add just one data node. In the current release, high availability is not available for remote collectors, so there is nothing to gain from adding data nodes to this kind of cluster. If you want to set up high availability, see High Availability.
Prerequisites for the New Node
- Prepare a server or VM on the desired platform/operating system, making sure you have the required hardware resources for the type of node you are adding:
  - Worker or Data Node
  - Worker Nodes Only
- A unique hostname in the cluster - use the command below, replacing <mynode> with your preferred name.
  The server's hostname must be changed to this unique value before you add it as a node to the cluster. Once the node has been added, changing the hostname or IP address will require removing the node and adding it again after the change.
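  For example, on a systemd-based operating system the hostname can be set with hostnamectl (a sketch; your platform may use a different command):
  [<ADMIN> ~]$ sudo hostnamectl set-hostname <mynode>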
- The new node must be set to the same time as the primary data node. This can be achieved by synchronizing all servers via ntpd or chrony.
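  To confirm that a server is synchronized before adding it (assuming chrony is the time service in use), you can run:
  [<ADMIN> ~]$ chronyc tracking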
Add a node to the cluster
1. On the primary data node, run the command to add a node, where <TYPE> is worker or data, depending on the type of node you want to add (see the sketch below). On completion, a new command string is displayed, which you will need to run on the new node within one hour. If the allocated time expires, you will need to repeat the current step.
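   A plausible form of the add command, assuming it mirrors the remove syntax shown later on this page (the --type flag is an assumption; verify the exact syntax in the TOS CLI reference):
   [<ADMIN> ~]$ sudo tos cluster node add --type=<TYPE>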
2. Log in to the CLI of the server to be added as a new node in the cluster.
3. On the new node, run the command string displayed previously on the primary data node in step 1. If the allocated time has expired, you will need to start from the beginning.
4. Verify that the node was added by running sudo tos cluster node list on the primary data node.
Remove a node from the cluster
1. Identify the node you want to remove and its status by running sudo tos cluster node list on the primary data node.
2. If the node is in a healthy state:
   - On the primary data node, run:
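     The command shown here is inferred from the forced variant used for unhealthy nodes below, without the --force flag:
     [<ADMIN> ~]$ sudo tos cluster node remove <node>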
     Parameters:
     <node> - Hostname address of the node to remove. (Required)
     On completion, a new command string is displayed, which you will need to run on the node you want to remove within 30 minutes. If the allocated time expires, you will need to repeat the current step.
   - Log in to the CLI of the node to be removed.
   - On the node to be removed, run the command string displayed on completion of the command above. All TOS-related directories and data will be deleted from the node, so make sure you run it on the correct node. Running the command on the wrong node will destroy the cluster.
3. If the node you want to remove is not in a healthy state:
   - On the primary data node, run:
     [<ADMIN> ~]$ sudo tos cluster node remove <node> --force
     Parameters:
     <node> - Hostname address of the node to remove. (Required)
     TOS directories will not be deleted from the node.
   - If the machine is still serviceable, you can delete the TOS directories manually.
4. Verify that the node has been removed by running sudo tos cluster node list again.