End-to-End Kubernetes with Rancher, RKE2, K3s, Fleet, Longhorn, and NeuVector

The full journey from nothing to production

Deploying and Managing Services Using Rancher Manager - Part I

Creating an Ingress Service

RKE2 ships with the Nginx Ingress Controller built in. It is enabled by default, so you don't need to install anything:

# SSH into the control plane node
ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP

# Check the Nginx Ingress Controller Pods
kubectl -n kube-system get pods | grep ingress-nginx-controller

You can configure the Ingress Controller to route traffic to a specific application when the user accesses a specific domain, subdomain, or a path. For example:

  • my.domain.com -> my-front-end-service
  • apiv1.my.domain.com -> my-apiv1-service
  • my.domain.com/apiv2 -> my-apiv2-service
  • and so on...
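As a sketch of what such rules look like in practice, here is an example Ingress manifest that routes by host and by path, written with a heredoc like the other snippets in this section. The host and Service names (my-front-end-service, my-apiv1-service, my-apiv2-service) are illustrative placeholders, not resources created elsewhere in this course:

```shell
# Write an example Ingress with host- and path-based routing rules
# (host and Service names are placeholders for illustration only)
cat << 'EOF' > example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routing
spec:
  ingressClassName: nginx
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-front-end-service
                port:
                  number: 80
          - path: /apiv2
            pathType: Prefix
            backend:
              service:
                name: my-apiv2-service
                port:
                  number: 80
    - host: apiv1.my.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-apiv1-service
                port:
                  number: 80
EOF

# To apply it against the cluster:
# kubectl apply -f example-ingress.yaml
```

The `ingressClassName: nginx` field tells the cluster to hand these rules to the built-in Nginx Ingress Controller.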

The routing mechanism of the Ingress Controller is implemented inside the cluster. Therefore, you can send a request to the IP address of any node, and the Ingress Controller will route the traffic to the correct service. However, if we don't want to expose the nodes' IP addresses directly, we need to deploy an external networking component (not part of the cluster) that forwards traffic to the internal nodes where the Ingress Controller is installed. The Ingress Controller will then route the traffic to the correct service.

We will use the rke2-extlb-01 node for this purpose. The network component we will deploy will:

  • Sit in front of the backend servers (the Ingress Controller).
  • Accept clients' requests and forward them to the backend servers on behalf of the clients.
  • Hide the backend topology from the clients.

The most accurate term to describe it is a "Layer 7 reverse proxy load balancer". For simplicity, we will refer to it as an external load balancer.

This node is currently part of the cluster; therefore, it must first leave the cluster before it can become an external load balancer. We will install Docker and run a simple Nginx container that will act as the load balancer:

SSH into the external load balancer node:

ssh root@$WORKLOAD_EXTLB_01_PUBLIC_IP

Leave the RKE2 cluster by stopping all RKE2 processes on the node:

rke2-killall.sh
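Note that rke2-killall.sh only stops the RKE2 processes. Since this node will not rejoin the cluster, you can optionally run the uninstall script that the RKE2 installer also places on the node, which removes the binaries and data as well:

```shell
# Optional: fully remove RKE2 from the node
# (the script is installed alongside rke2-killall.sh)
rke2-uninstall.sh
```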

Install Docker:

curl -fsSL https://get.docker.com | sh
systemctl enable --now docker

Finally, create an Nginx configuration file:

mkdir -p $HOME/nginx && cat << EOF > $HOME/nginx/nginx.conf
events {
    worker_connections 1024;  # Maximum number of simultaneous connections per worker
}

http {
    upstream backend {
        # Define the IPs of the nodes that run the Ingress Controller
        server $WORKLOAD_NODE_01_PRIVATE_IP:80;
        # More nodes can be added here if available

        # The control plane node could be added here as well, but in
        # production we should leave traffic handling to the worker nodes
        # and keep the control plane node for management purposes
        # server $WORKLOAD_CONTROLPLANE_01_PRIVATE_IP:80;
    }

    server {
        listen 80;
        listen [::]:80;

        # Listen on the public IP of the external load balancer
        server_name $WORKLOAD_EXTLB_01_PUBLIC_IP;

        location / {
            # Proxy traffic to the upstream backend (Ingress Controller)
            proxy_pass http://backend;
            proxy_set_header Host \$host;
            proxy_set_header X-Real-IP \$remote_addr;
            proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto \$scheme;
        }
    }
}
EOF
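Before starting the load balancer, it is worth validating the generated configuration. One way to do this (assuming Docker is already installed as above, and that the environment variables were set when the file was created) is to run nginx -t against the file in a throwaway container:

```shell
# Validate the config syntax without starting a server;
# prints "syntax is ok" and "test is successful" when valid
docker run --rm \
  -v $HOME/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx nginx -t
```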

And run the Nginx container:
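A minimal sketch of that command, assuming the configuration file created above and the official nginx image (the container name and restart policy are choices, not requirements):

```shell
# Run Nginx as the external load balancer, mounting our config read-only
docker run -d --name external-lb \
  --restart unless-stopped \
  -p 80:80 \
  -v $HOME/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx
```

You can then confirm the container is up with docker ps and send a request to the load balancer's public IP to verify that traffic reaches the Ingress Controller.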
