Remote Collectors

Overview

A TOS Aurora remote collector (RC) cluster is a limited deployment of TOS Aurora that sits between the central cluster (i.e. the initial or main cluster) and selected devices, receiving and relaying data between the two: it receives data from the central cluster and relays it to devices, and receives data from devices and relays it back to the central cluster. Remote collectors can therefore be considered extensions of a central cluster; they do not have their own UI, and their data is viewed through the UI of the central cluster. Users can monitor remote collectors using TOS Monitoring. Once set up, a remote collector can interact with devices even when disconnected from its central cluster; however, it must be connected to a central cluster to provide its full functionality.

TOS Aurora can be set up to run as a distributed architecture using remote collectors (RCs), which can be deployed both on-prem and in the supported cloud platforms Azure, AWS, and GCP. The requirements and installation procedure are the same as for a regular TOS installation. For more information, see:

The terms remote collector, RC, remote collector server, and remote collector cluster are used interchangeably, except where differentiation is required.

A remote collector cluster is generally deployed for one or more of the following reasons:

  1. Some or all monitored devices are geographically distant from the central cluster, which can cause connectivity issues when the devices are connected directly to the central cluster.
  2. The central cluster's network segment is prevented from accessing some or all monitored devices for security reasons.
  3. To distribute network traffic load, improving performance when many devices are being monitored.
  4. To separate the data collection functionality from the main processing.

Deployment Constraints

A central cluster can have any number of remote collector clusters connected to it, but a remote collector cluster can be connected to only one central cluster. The remote collector clusters can be located on the same network as the central cluster or at a remote site, and are always deployed as separate Kubernetes clusters.

Delays in RC usage data displayed in the rule viewer can occur due to:

  • Different time zones between the RC and its central cluster.

  • A difference between the usage collection time on the RC and the time at which usage data is updated from the RC to the central cluster.

The remote collector clusters must run the same TOS version as the central cluster.
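
To confirm the versions match before connecting, you can compare the output of the version command on each cluster. A minimal sketch, assuming the tos CLI in your release provides the version subcommand:

# Run on the central cluster and again on the remote collector, then compare the output.
[<ADMIN> ~]$ sudo tos version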

Adding Nodes to the Remote Collector Cluster

Worker nodes can be added to remote collector clusters in the same way as to the central cluster.

High Availability

Remote collectors cannot be run under high availability; however, they can be connected to a central cluster that is running under high availability.

Functions Performed by the Remote Collector

  • Revisions:
    • Fetch revisions from devices
    • Relay saved revisions to the central cluster
  • Rule/Object hits:
    • Collect rule/object hit counts from devices
    • Relay hit counts to the central cluster
  • APG (Automatic Policy Generator):
    • Perform APG tasks, based on hits from monitored devices
    • Relay results to the central cluster
  • Topology:
    • Collect topology (as part of revisions, and also dynamic topology)
    • Relay topology to the central cluster
  • Provisioning: Perform provisioning (change of policy) on devices
  • Commit: Commit policies from management devices to firewalls
  • Status:
    • Report the status of monitored devices to the central cluster
    • Report its own status to the central cluster
  • Handling of events from monitored devices, such as syslog and LEA

Connectivity

The remote collector communicates with the central cluster through three channels:

  • A TCP channel for message exchange over JMS, to the central cluster's TCP port 61617

  • An HTTPS channel for invoking CGI calls from the central cluster, to the remote collector's TCP port 8443

  • An HTTPS channel for invoking SecureTrack APIs (api-v1) from the remote collector, to the central cluster's TCP port 8443

For more information, see Remote Collector Ports.
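
When troubleshooting connectivity, these channels can be probed directly before investigating further. A minimal sketch using standard tooling (nc and curl); the VIP values are placeholders for your actual addresses:

# From the remote collector: check the central cluster's JMS and API ports.
[<ADMIN> ~]$ nc -zv CENTRAL-CLUSTER-VIP 61617
[<ADMIN> ~]$ curl -k -s -o /dev/null -w '%{http_code}\n' https://CENTRAL-CLUSTER-VIP:8443

# From the central cluster: check the remote collector's HTTPS port.
[<ADMIN> ~]$ nc -zv REMOTE-CLUSTER-VIP 8443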

Backup / Restore

Remote collectors can be backed up and restored in exactly the same way as central clusters. However, backups are not interchangeable between the two cluster types: a backup taken from a remote collector can only be restored to a remote collector, and a remote collector can only be restored from a backup taken from a remote collector.

Tufin auto-generated certificates are automatically re-created on the first connection following a restore, and the central and remote collector clusters will need to be reconnected manually, as explained in Connect a Remote Collector.

TOS Aurora Server Status

The remote collectors connected to a central cluster are displayed on the status screen.

Prerequisites

  • TOS Aurora must be installed on the remote collector cluster using one of the standard installation procedures, noting that the parameter --modules="RC" must be specified with the tos install command (see the sketch after this list).
  • Hardware and network requirements can be found in the installation procedure used for your remote collector.
  • All the prerequisites specified in the installation procedure you use apply to remote collectors as well, unless mentioned otherwise.
  • Using remote collectors mandates additional port requirements; see Device-Related Ports.
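
As an illustration, the installation command from your chosen procedure would carry the extra parameter as shown below. This is a sketch of the relevant flag only, not a complete command line; any other options remain as documented in your installation procedure.

# Illustrative only: run the tos install command from your procedure with --modules="RC" added.
[<ADMIN> ~]$ sudo tos install --modules="RC"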

Connect a Remote Collector

Deploying a remote collector involves first installing TOS Aurora on the remote collector and then connecting it to an existing main (or central) cluster.

In rare cases, connecting or disconnecting one RC can cause additional RCs to disconnect. Check that no other RCs have been disconnected by going to the SecureTrack status page. If you see that an RC has become disconnected, wait a few minutes. If it remains disconnected, connect it manually.

  1. Make sure all prerequisites have been met.
  2. On the SecureTrack status page, make a note of all remote collectors that are currently running.
  3. Generate a one-time password.

    On the central cluster:

    [<ADMIN> ~]$ sudo tos cluster generate-otp

    A password will be displayed, which remains valid for five minutes. For example:

    [tufin-admin@TufinOS ~]$ sudo tos cluster generate-otp
    ba35e90b-cea2-4701-ae6d-4c485dc7a3e30
    [tufin-admin@TufinOS ~]$
  4. Connect the remote collector to the central cluster.

    On the remote collector, run the tos cluster connect command:

    [<ADMIN> ~]$ sudo tos cluster connect --central-cluster-vip=CENTRAL-CLUSTER-VIP --remote-cluster-vip=REMOTE-CLUSTER-VIP --remote-cluster-name=REMOTE-CLUSTER-NAME --initial-secret=OTP

    where

    Parameter               Required/Optional   Description
    --central-cluster-vip   Required            External IP address (VIP) of your central server cluster.
    --remote-cluster-vip    Required            External IP address (VIP) of the server you want to connect (i.e. the current server).
    --remote-cluster-name   Required            Any alphanumeric string you choose; quotes are not supported, so the name cannot contain spaces.
    --initial-secret        Required            One-time password returned from running tos cluster generate-otp on the central server.
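
    For example, with hypothetical values (10.10.1.10 as the central cluster VIP, 10.20.1.10 as this remote collector's VIP, RC-Dallas as the name, and the OTP generated in step 3):

    # All values below are illustrative placeholders.
    [<ADMIN> ~]$ sudo tos cluster connect --central-cluster-vip=10.10.1.10 --remote-cluster-vip=10.20.1.10 --remote-cluster-name=RC-Dallas --initial-secret=ba35e90b-cea2-4701-ae6d-4c485dc7a3e30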

  5. Reconnect any remote collectors that may have been disconnected in the process.
    1. Go to the SecureTrack status page.
    2. Verify that the remote collector you added is running.
    3. Make a note of any remote collectors that were disconnected in the procedure.
    4. If they do not reconnect automatically after a minute, repeat steps 3-4 to reconnect them manually.
    5. Repeat until all remote collectors are running.

List Remote Collectors Connected to a Central Cluster

You can check which remote collectors, if any, are connected to a given central cluster.

  1. Run the following command on the central cluster:

    [<ADMIN> ~]$ sudo tos cluster list

    All connected clusters will be listed. Example:

    [Central cluster]# sudo tos cluster list
    [Nov 22 12:15:20]  INFO  Cluster name: Central, id: 1, ip: 10.244.0.71, type: master
    [Nov 22 12:15:20]  INFO  Cluster name: RC208, id: 11, ip: 10.100.65.200, type: data_collector
    [Nov 22 12:15:20]  INFO  Cluster name: RC136, id: 12, ip: 10.100.67.122, type: data_collector

Disconnect a Remote Collector

Cancel the connection between a remote collector and its central cluster.

In rare cases, connecting or disconnecting one RC can cause additional RCs to disconnect. Check that no other RCs have been disconnected by going to the SecureTrack status page. If you see that an RC has become disconnected, wait a few minutes. If it remains disconnected, connect it manually.

  1. On the central cluster, run the following command:

    [<ADMIN> ~]$ sudo tos cluster disconnect-rc-cluster --cluster-id=REMOTE-CLUSTER-ID

    where

    Parameter      Required/Optional   Description
    --cluster-id   Required            ID of the remote collector cluster, as displayed when running the command sudo tos cluster list.
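
    For example, to disconnect the remote collector RC136 shown in the earlier tos cluster list output (cluster ID 12 in that example):

    [<ADMIN> ~]$ sudo tos cluster disconnect-rc-cluster --cluster-id=12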

The central cluster automatically disconnects from the remote collector and removes the unneeded components from it. If the two clusters have already been physically disconnected, run the same command on the remote collector cluster, using the same value for REMOTE-CLUSTER-ID.

To confirm that the RC is disconnected from the central cluster, check the appropriate components:

[<ADMIN> ~]# kubectl get pods | grep proxy

View the output and check the status of the following pods. If they all have a status of Terminated, the RC has been disconnected successfully.

  • http-client-proxy
  • http-server-proxy
  • tcp-client-proxy
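
As a convenience, the three pods can be checked with a single filtered query; a sketch, assuming kubectl access on the remote collector:

# Lists only the three proxy pods whose status indicates whether disconnection completed.
[<ADMIN> ~]# kubectl get pods | grep -E 'http-client-proxy|http-server-proxy|tcp-client-proxy'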

Certificate Expiry

Both the central and remote collector clusters have self-generated SSL certificates that enable the HTTPS connectivity between them. These certificates are renewed automatically, according to parameters that you can set. If neither automatic nor manual renewal has taken place before a certificate expires, connectivity will not be allowed until a new valid certificate is created. The certificate renewal status can be seen on the status screen.
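
To check when the certificate currently presented on the HTTPS channel expires, you can query it directly. A sketch using standard openssl tooling; the VIP is a placeholder for your central cluster address:

# Prints the notAfter (expiry) date of the certificate served on TCP 8443.
[<ADMIN> ~]$ echo | openssl s_client -connect CENTRAL-CLUSTER-VIP:8443 2>/dev/null | openssl x509 -noout -enddate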

To set the renewal parameters, see:

If, for some reason, your certificates are not renewed automatically, they can be recreated manually by running the command below on the primary data node of the central cluster. This initiates recreation of the certificates on the central cluster and all connected remote collector clusters. Note that this command pulls the VIP information from prior configuration steps.

[<ADMIN> ~]$ sudo tos cluster rotate-certificate

Example Output

[tufin-admin@TufinOS ~]$ sudo tos cluster rotate-certificate 
[Mar  2 16:42:37]  INFO Successfully rotate certificate for central cluster for IP: "10.100.200.27"
[tufin-admin@TufinOS ~]$