End-to-End Kubernetes with Rancher, RKE2, K3s, Fleet, Longhorn, and NeuVector

The full journey from nothing to production

Longhorn: Requirements & Installation

Requirements

Longhorn recommends a minimum of three nodes with the following hardware specifications:

  • 4 vCPUs per node
  • 4 GiB of memory per node (worker node)
  • SSD/NVMe or similar-performance block device on the node for storage (optional but recommended)
  • HDD/spinning disk or similar-performance block device on the node for storage, with at least 500/250 max IOPS per volume (1 MiB I/O) and 500/250 max throughput per volume (MiB/s)
  • A dedicated disk for storage

For other best practices and requirements, the official Longhorn documentation provides a detailed list of recommendations.
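If you want a rough check that a node's disk can sustain the IOPS and throughput figures above, you can benchmark it with fio. This is a minimal sketch: it assumes fio is installed and that the storage disk is mounted at /mnt/longhorn, which is a hypothetical path you should replace with your own:

apt-get install -y fio

# Mixed 1 MiB random read/write against a scratch file on the storage disk
# (/mnt/longhorn/fio-test is a hypothetical path -- adjust to your mount point)
fio --name=longhorn-bench --filename=/mnt/longhorn/fio-test \
  --size=1G --bs=1M --rw=randrw --rwmixread=50 \
  --ioengine=libaio --direct=1 --runtime=30 --time_based \
  --group_reporting

# Clean up the scratch file afterwards
rm -f /mnt/longhorn/fio-test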

Our current infrastructure meets some of these best practices and requirements, but not all of them. For example, we have only one worker node in our RKE2 cluster. We can still install Longhorn on the cluster, but we will not have the best environment for testing and evaluating its features. This is why we are going to add two more worker nodes to the RKE2 cluster.

As a reminder, we already created a Terraform configuration to provision the cluster and copied it to the workspace server. We will use it as a starting point to add two more worker nodes to the RKE2 cluster. You can run the steps below either from the machine where you initially created the Terraform configuration or from the workspace server. First, install jq and Terraform:

# Install jq (used below to parse Terraform output) plus zip/unzip
apt-get install -y jq zip unzip

# Download and install the Terraform CLI
TERRAFORM_VERSION="1.10.3"
TERRAFORM_ZIP="terraform_${TERRAFORM_VERSION}_linux_amd64.zip"
TERRAFORM_URL="https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/${TERRAFORM_ZIP}"
curl -LO "$TERRAFORM_URL"
unzip "$TERRAFORM_ZIP"
mv terraform /usr/local/bin/
rm "$TERRAFORM_ZIP"
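
Verify the installation:

terraform version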

Export the project (folder) name, the SSH key paths derived from it, and the DigitalOcean variables required by Terraform. Note that DIGITALOCEAN_WORKLOAD_VMS_NAMES now lists the two additional worker nodes, rke2-node-02 and rke2-node-03:

PROJECT_NAME="learning-rancher"
SSH_UNIQUE_NAME="$HOME/.ssh/$PROJECT_NAME"
export DIGITALOCEAN_TOKEN="" # set this to your DigitalOcean API token
export DIGITALOCEAN_REGION="fra1"
export DIGITALOCEAN_IMAGE="ubuntu-24-04-x64"
export DIGITALOCEAN_SSH_KEY_NAME="$SSH_UNIQUE_NAME"
export DIGITALOCEAN_SSH_PUBLIC_KEY_PATH="$SSH_UNIQUE_NAME.pub"
export DIGITALOCEAN_SSH_PRIVATE_KEY_PATH="$SSH_UNIQUE_NAME"
export DIGITALOCEAN_VPC_UUID="[CHANGE_ME]"
export DIGITALOCEAN_PROJECT_NAME="$PROJECT_NAME"
export DIGITALOCEAN_WORKSPACE_VM_NAME="workspace"
export DIGITALOCEAN_WORKSPACE_VM_SIZE="s-4vcpu-8gb"
export DIGITALOCEAN_WORKLOAD_VMS_NAMES='["rke2-controlplane-01", "rke2-node-01", "rke2-node-02", "rke2-node-03"]'
export DIGITALOCEAN_WORKLOAD_VMS_SIZE="s-4vcpu-8gb"

Update the Terraform variables file:

cat << EOF > $PROJECT_NAME/variables.tf
variable "region" {
  default = "${DIGITALOCEAN_REGION}"
}
variable "image" {
  default = "${DIGITALOCEAN_IMAGE}"
}
variable "vpc_uuid" {
  default = "${DIGITALOCEAN_VPC_UUID}"
}
variable "workspace_vm_size" {
  default = "${DIGITALOCEAN_WORKSPACE_VM_SIZE}"
}
variable "workspace_vm_name" {
  default = "${DIGITALOCEAN_WORKSPACE_VM_NAME}"
}
variable "workload_vms_size" {
  default = "${DIGITALOCEAN_WORKLOAD_VMS_SIZE}"
}
variable "workload_vms_names" {
  default = ${DIGITALOCEAN_WORKLOAD_VMS_NAMES}
}
variable "project_name" {
  default = "${DIGITALOCEAN_PROJECT_NAME}"
}
variable "ssh_key_name" {
  default = "${DIGITALOCEAN_SSH_KEY_NAME}"
}
variable "ssh_public_key_path" {
  default = "${DIGITALOCEAN_SSH_PUBLIC_KEY_PATH}"
}
variable "ssh_private_key_path" {
  default = "${DIGITALOCEAN_SSH_PRIVATE_KEY_PATH}"
}
EOF
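
Because the heredoc delimiter is unquoted, the shell expands the variables before writing the file. You can confirm the rendered defaults with a quick check:

grep 'default' $PROJECT_NAME/variables.tf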

Initialize and apply the Terraform configuration, then extract the IP addresses of all the nodes, including the two new workers:

terraform -chdir=$PROJECT_NAME init
terraform -chdir=$PROJECT_NAME apply -auto-approve

# Wait for the Terraform process to complete and IP addresses to be available
terraform -chdir=$PROJECT_NAME output -json all_vm_ips \
  > $PROJECT_NAME/all_vm_ips.json

JSON_FILE="$PROJECT_NAME/all_vm_ips.json"

export WORKSPACE_PUBLIC_IP=$(jq -r '.workspace.public_ip' $JSON_FILE)
export WORKSPACE_PRIVATE_IP=$(jq -r '.workspace.private_ip' $JSON_FILE)
export WORKLOAD_CONTROLPLANE_01_PUBLIC_IP=$(jq -r '."rke2-controlplane-01".public_ip' $JSON_FILE)
export WORKLOAD_CONTROLPLANE_01_PRIVATE_IP=$(jq -r '."rke2-controlplane-01".private_ip' $JSON_FILE)
export WORKLOAD_NODE_01_PUBLIC_IP=$(jq -r '."rke2-node-01".public_ip' $JSON_FILE)
export WORKLOAD_NODE_01_PRIVATE_IP=$(jq -r '."rke2-node-01".private_ip' $JSON_FILE)
export WORKLOAD_NODE_02_PUBLIC_IP=$(jq -r '."rke2-node-02".public_ip' $JSON_FILE)
export WORKLOAD_NODE_02_PRIVATE_IP=$(jq -r '."rke2-node-02".private_ip' $JSON_FILE)
export WORKLOAD_NODE_03_PUBLIC_IP=$(jq -r '."rke2-node-03".public_ip' $JSON_FILE)
export WORKLOAD_NODE_03_PRIVATE_IP=$(jq -r '."rke2-node-03".private_ip' $JSON_FILE)
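
Optionally, sanity-check the extracted values by pretty-printing the output file:

jq . "$JSON_FILE"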

Regenerate the variables.sh file with the new IP addresses and source it:

cat << EOF > $PROJECT_NAME/variables.sh && source $PROJECT_NAME/variables.sh
export WORKSPACE_PUBLIC_IP="$WORKSPACE_PUBLIC_IP"
export WORKSPACE_PRIVATE_IP="$WORKSPACE_PRIVATE_IP"
export WORKLOAD_CONTROLPLANE_01_PUBLIC_IP="$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP"
export WORKLOAD_CONTROLPLANE_01_PRIVATE_IP="$WORKLOAD_CONTROLPLANE_01_PRIVATE_IP"
export WORKLOAD_NODE_01_PUBLIC_IP="$WORKLOAD_NODE_01_PUBLIC_IP"
export WORKLOAD_NODE_01_PRIVATE_IP="$WORKLOAD_NODE_01_PRIVATE_IP"
export WORKLOAD_NODE_02_PUBLIC_IP="$WORKLOAD_NODE_02_PUBLIC_IP"
export WORKLOAD_NODE_02_PRIVATE_IP="$WORKLOAD_NODE_02_PRIVATE_IP"
export WORKLOAD_NODE_03_PUBLIC_IP="$WORKLOAD_NODE_03_PUBLIC_IP"
export WORKLOAD_NODE_03_PRIVATE_IP="$WORKLOAD_NODE_03_PRIVATE_IP"
EOF

Run the following script to copy the SSH keys and the variables.sh file to all nodes, including the new workers. You can do this manually if you prefer; the loop body below is one way to perform the copy, so adjust the paths if your layout differs:

# Define servers and common variables
SERVERS=(
    "$WORKSPACE_PUBLIC_IP"
    "$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP"
    "$WORKLOAD_NODE_01_PUBLIC_IP"
    "$WORKLOAD_NODE_02_PUBLIC_IP"
    "$WORKLOAD_NODE_03_PUBLIC_IP"
)

# Copy SSH keys and variables file, create project directory, and update bashrc
for SERVER in "${SERVERS[@]}"; do
  echo "Processing server: $SERVER"
  # Create the project directory and .ssh directory on the remote host
  ssh -i "$DIGITALOCEAN_SSH_PRIVATE_KEY_PATH" -o StrictHostKeyChecking=no \
    root@$SERVER "mkdir -p ~/$PROJECT_NAME ~/.ssh"
  # Copy the SSH key pair and the variables file
  scp -i "$DIGITALOCEAN_SSH_PRIVATE_KEY_PATH" -o StrictHostKeyChecking=no \
    "$DIGITALOCEAN_SSH_PRIVATE_KEY_PATH" "$DIGITALOCEAN_SSH_PUBLIC_KEY_PATH" \
    root@$SERVER:~/.ssh/
  scp -i "$DIGITALOCEAN_SSH_PRIVATE_KEY_PATH" -o StrictHostKeyChecking=no \
    "$PROJECT_NAME/variables.sh" root@$SERVER:~/$PROJECT_NAME/
  # Source the variables from .bashrc so new shells pick them up
  ssh -i "$DIGITALOCEAN_SSH_PRIVATE_KEY_PATH" -o StrictHostKeyChecking=no \
    root@$SERVER "grep -q '$PROJECT_NAME/variables.sh' ~/.bashrc || \
      echo 'source ~/$PROJECT_NAME/variables.sh' >> ~/.bashrc"
done
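
After the loop finishes, you can confirm the variables are available on one of the new nodes (a quick optional check):

ssh -i "$DIGITALOCEAN_SSH_PRIVATE_KEY_PATH" root@$WORKLOAD_NODE_03_PUBLIC_IP \
  "source ~/$PROJECT_NAME/variables.sh && echo \$WORKLOAD_NODE_03_PRIVATE_IP"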
