
Longhorn: Understanding How It Works with Practical Examples

ReadWriteMany Volumes and NFSv4

Back to our todo-app. Currently, the application runs a single replica backed by a single volume. That volume uses the ReadWriteOnce access mode, which means it can be mounted by only one node at a time. If we scale the application to multiple replicas, say 3, the replicas scheduled on other nodes (typically 2 of the 3) will get stuck because they can't attach the volume.

ℹ️ By default, the ReadWriteOnce (RWO) access mode restricts a volume to a single node, which limits scalability in distributed systems.
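
You can see this behavior firsthand by scaling the Deployment manually before changing anything (a quick sketch; the Deployment name todo-app is an assumption, so adjust it to match your manifest):

# SSH into the RKE2 CP
ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP

# Hypothetical scale-up, assuming the Deployment is named todo-app
kubectl scale deployment todo-app -n todo-app-namespace --replicas=3

# Replicas scheduled on other nodes never become Ready; their events
# report a Multi-Attach error for the RWO volume
kubectl get pods -n todo-app-namespace
kubectl get events -n todo-app-namespace --sort-by=.lastTimestamp

# Scale back down before moving on
kubectl scale deployment todo-app -n todo-app-namespace --replicas=1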

[...]

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: todo-app-data
  namespace: todo-app-namespace
spec:
  accessModes:
    # RWO: the volume can be mounted read-write by one node at a time
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn

[...]

To scale our application to multiple replicas while sharing a common storage backend (SQLite in our case), we need a volume access mode that allows simultaneous access across multiple nodes. For applications that require shared access, we must transition to the ReadWriteMany (RWX) access mode.
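
In the PVC from earlier, this is a one-line change:

spec:
  accessModes:
    # RWX: the volume can be mounted read-write by many nodes
    - ReadWriteMany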

Longhorn makes this transition easy by providing integrated support for RWX volumes through NFSv4 servers hosted within Kubernetes. Unlike traditional Kubernetes setups where you must manage an external NFS server manually, Longhorn simplifies the process by automatically provisioning and managing these NFS servers within the cluster.

When you use Longhorn to create an RWX volume, it dynamically provisions a share-manager pod in the longhorn-system namespace. This pod serves as an internal NFSv4 server, exposing the volume to multiple pods simultaneously. Each RWX volume also has a corresponding Kubernetes Service, which acts as an endpoint for the NFSv4 clients to connect.
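
Once an RWX volume is mounted, you can confirm from inside one of the consuming pods that the data path really is NFSv4 (a sketch; [POD_NAME] and the mount path depend on your Deployment spec, and the image must ship the mount utility):

# List the application pods and pick one
kubectl get pods -n todo-app-namespace

# The mount table shows the share-manager Service's ClusterIP
# exporting the Longhorn volume over NFSv4, along the lines of:
# 10.43.x.x:/pvc-xxxxxxxx on /data type nfs4 (rw,relatime,vers=4.1,...)
kubectl exec -n todo-app-namespace [POD_NAME] -- mount | grep nfs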

# SSH into the workspace server
ssh root@$WORKSPACE_PUBLIC_IP

# Update the application manifest to use the ReadWriteMany access mode
sed -i 's/ReadWriteOnce/ReadWriteMany/' \
  $HOME/todo/app/kube/todo-app-manifests.yaml

# Set the replica count to 3
sed -i 's/replicas:.*/replicas: 3/' \
  $HOME/todo/app/kube/todo-app-manifests.yaml

# Git commit and push the changes
cd $HOME/todo/app
git add .
git commit -m "Use ReadWriteMany access mode"
git push origin main

Note that Fleet may fail to apply the changes to the RKE2 cluster because the accessModes field of a PVC is immutable once the PVC is created. To resolve this, delete the PVC, delete any pod that is still using it (and therefore blocking its deletion), remove any finalizers that may also be blocking the deletion, and delete the namespace if needed. The commands to execute look like this:

# SSH into the RKE2 CP
ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP
# Delete the PVC
kubectl delete pvc todo-app-data -n todo-app-namespace

# If the PVC is stuck in Terminating,
# you may need to delete the pod(s) still using it.
# Run the following command to find them:
# `kubectl describe pvc todo-app-data -n todo-app-namespace`
# Copy the name of the pod using the PVC and run the following command
kubectl delete pod [POD_NAME] -n todo-app-namespace --force --grace-period=0

# If the PVC is still stuck after that, remove its finalizers
kubectl patch pvc todo-app-data -n todo-app-namespace -p '{"metadata":{"finalizers":null}}'
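
Before letting Fleet recreate everything, confirm the old PVC is gone:

# Verify the PVC has been removed
kubectl get pvc -n todo-app-namespace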

After the changes are applied, resync the Fleet GitRepo if needed.
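
In the Rancher UI this is typically the Force Update action on the GitRepo; with kubectl you can achieve the same by bumping the GitRepo's spec.forceSyncGeneration on the cluster where Fleet runs (a sketch; the GitRepo name todo-app and the fleet-default namespace are assumptions, adjust them to your setup):

# Run against the Rancher management cluster
# Increment the value each time you need to force a resync
kubectl patch gitrepo todo-app -n fleet-default --type=merge \
  -p '{"spec":{"forceSyncGeneration": 2}}'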

Once the application is redeployed, you can check the status of the NFS ClusterIP service and the share-manager pod in the Longhorn UI or using kubectl:
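
# SSH into the RKE2 CP
ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP

# Both the pod and the Service follow Longhorn's
# share-manager-[VOLUME_NAME] naming convention
kubectl get pods -n longhorn-system | grep share-manager
kubectl get svc -n longhorn-system | grep share-manager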
