Move etcd - New AWS Instance
The etcd database should be on a separate volume to improve the stability of TOS Aurora and reduce latency. Moving the etcd database to a separate volume ensures that the Kubernetes database has access to all the resources it needs for optimal TOS performance.
This procedure is only required for data nodes running Rocky Linux 8 or RHEL 8.
This procedure must be performed by an experienced Linux administrator with knowledge of network configuration.
Preliminary Preparations
-
Switch to the root user.
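For example, assuming your admin account has sudo rights:
[<ADMIN> ~]$ sudo su -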
-
Install the rsync RPM.
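For example, using dnf on Rocky Linux 8 or RHEL 8:
[<ADMIN> ~]# dnf install -y rsync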
Mount the etcd Database to a Separate Volume
-
Add a volume to the Instance.
-
In the AWS navigation pane, go to the Volumes pane, and click Create Volume.
-
Configure the following settings:
-
Volume type: SSD gp3
-
Size: Allocate a disk size of at least 128 GB
-
IOPS: 7500
-
Availability Zone: Same availability zone as instance
-
Throughput: 250 MBps
-
Snapshot ID: Keep the default value
-
Encryption: If the volume is encrypted, it can only be attached to an instance type that supports Amazon EBS encryption.
-
-
Click Create Volume.
-
In the navigation pane, go to the Volumes pane, select the volume you created, and click Actions > Attach Volume.
-
In Instance, enter the ID of the instance or select the instance from the list of options.
-
In Device name, select an available device name from the Recommended for data volumes section of the list.
-
Connect to the instance and proceed to the next step.
The device name is used by Amazon EC2. The block device driver for the instance might assign a different device name when mounting the volume.
-
-
Mount the new volume.
-
Log into the data node as the root user.
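For example, over SSH (the user and host below are placeholders for your environment), then switch to root:
[<ADMIN> ~]$ ssh <admin user>@<data node IP>
[<ADMIN> ~]$ sudo su -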
-
Verify that the new volume is recognized by the OS.
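A typical way to list the block devices is lsblk, for example:
[<ADMIN> ~]# lsblk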
Compare the output with the name of the volume you created in the previous step, and verify that the index number in the volume name is the highest number. In most cases this will be 1; for example, if this is the only volume you have added, the device will appear as nvme1n1. If an older volume was previously created, the index number will be higher, for example nvme2n1.
-
Create a variable with the block device path of the new volume.
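A minimal sketch, assuming a shell variable named etcd_disk (the variable name is illustrative):
[<ADMIN> ~]# etcd_disk=/dev/nvme<>n1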
where <> represents the index number of the new volume.
-
Generate a UUID for the block device of the new volume.
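For example, using uuidgen and storing the result in an illustrative variable:
[<ADMIN> ~]# etcd_uuid=$(uuidgen)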
-
Create a primary partition on the new volume.
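One common approach is parted, shown here as a sketch (fdisk can be used instead):
[<ADMIN> ~]# parted -s "$etcd_disk" mklabel gpt mkpart primary ext4 0% 100%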
-
Verify that the partition was created.
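For example:
[<ADMIN> ~]# lsblk "$etcd_disk"
The output should now show a partition (for example, nvme<>n1p1) under the new volume.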
-
Format the partition as ext4.
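A sketch using mkfs.ext4 with the UUID generated earlier and an ETCD label (the p1 suffix assumes the partition created above):
[<ADMIN> ~]# mkfs.ext4 -U "$etcd_uuid" -L ETCD "${etcd_disk}p1"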
-
Verify that the partition has been formatted with the UUID and the etcd label (output should return the partition with UUID and an ETCD label).
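For example:
[<ADMIN> ~]# blkid | grep -i etcd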
-
Create the mount point of the etcd database.
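For example:
[<ADMIN> ~]# mkdir -p /var/lib/rancher/k3s/server/db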
-
Set the partition to mount upon operating system startup.
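A sketch that appends an entry to /etc/fstab referencing the UUID (review /etc/fstab before and after editing):
[<ADMIN> ~]# echo "UUID=$etcd_uuid /var/lib/rancher/k3s/server/db ext4 defaults 0 0" >> /etc/fstab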
-
Load the changes to the filesystem.
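Assuming this step refers to the edited /etc/fstab, one way to make systemd pick up the change is to reload its units:
[<ADMIN> ~]# systemctl daemon-reload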
-
Mount the partition that was added to /etc/fstab.
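For example (mount -a mounts everything listed in /etc/fstab and prints nothing on success):
[<ADMIN> ~]# mount -a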
If the output is not empty, stop the procedure. The etcd volume cannot be mounted. Review what was missed in the previous steps.
-
Verify the partition has been mounted (the output should return the block device and mount point).
[<ADMIN> ~]# mount | grep "/var/lib/rancher/k3s/server/db"
If the output is empty, stop the procedure. The etcd volume is not mounted. Review what was missed in the previous steps.
-
-
Check the cluster status.
-
On the primary data node, check the TOS status.
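Run:
[<ADMIN> ~]$ sudo tos status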
-
In the output, verify that System Status is Ok and that all items listed under Components appear as Ok. If this is not the case, contact Tufin Support.
Example output:
[<ADMIN> ~]$ sudo tos status
Tufin Orchestration Suite 2.0
System Status: Ok
System Mode: Multi Node
Nodes: 1 Master, 1 Worker. Total 2 nodes. Nodes are healthy.
Components:
Node: Ok
Cassandra: Ok
Mongodb: Ok
Mongodb_sc: Ok
Nats: Ok
Neo4j: Ok
Postgres: Ok
Postgres_sc: Ok