Microservices Deployment Strategies: Blue/Green, Canary, and Rolling Updates

Rolling Updates: Changing the Tyres While Driving

This strategy is native to Kubernetes, so no additional tools are required to implement it; that is its first advantage. Its second advantage lies in how it operates: instances are replaced progressively, with no downtime as long as Kubernetes mechanisms like health checks are properly configured.

Here are the steps involved in a typical rolling update process:

  1. Create a new version of your application by updating the image or configuration in the Deployment manifest.

  2. Apply the change using kubectl apply -f deployment.yaml.

  3. Kubernetes starts a gradual rollout — it launches a few new Pods (with the updated version) while keeping the old ones running.

  4. Health checks (readiness and liveness probes) make sure the new Pods are healthy before sending traffic to them.

  5. Once the new Pods are ready, Kubernetes terminates old Pods one by one.

  6. This process continues until all Pods run the new version.

  7. If something goes wrong, you can roll back easily with kubectl rollout undo deployment/<deployment-name> or by applying the previous manifest (example commands follow this list).
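
The exact names depend on your manifests; assuming a Deployment called hello-app (the application used throughout this chapter), the rollout can be watched and, if necessary, reverted with standard kubectl commands:

kubectl rollout status deployment/hello-app

kubectl rollout history deployment/hello-app

kubectl rollout undo deployment/hello-app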

At a certain level, you can control the speed of the rollout by adjusting parameters like maxUnavailable and maxSurge in the Deployment strategy. More about this is explained later.

Rolling update

Rolling Updates: Example

To demonstrate rolling updates in practice, we will use the hello-app application again. Here are the steps to follow:

  • We will deploy the first version of the application.
  • We will configure the rolling update strategy.
  • We will release a new version of the application and observe the rolling update process.

Let's start by installing the NGINX Ingress Controller using its Helm chart (if it's not already installed):

helm repo add ingress-nginx \
  https://kubernetes.github.io/ingress-nginx

helm repo update

helm upgrade --install \
  nginx-ingress ingress-nginx/ingress-nginx

We need the external IP address of the Ingress controller; wait for it to be assigned by running the following script:

while true; do
    export INGRESS_IP=$(
        kubectl get svc nginx-ingress-ingress-nginx-controller \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    )

    if [ "$INGRESS_IP" == "" ] || [ -z "$INGRESS_IP" ]; then
        echo "IP address is still pending. Waiting..."
        sleep 10
    else
        echo "Ingress IP is set to $INGRESS_IP"
        break
    fi
done

This is the manifest we would use to deploy the first version of the application, the Kubernetes Service, and the Ingress resource:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: Always
        name: hello-app
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-app
spec:
  rules:
  - host: app.${INGRESS_IP}.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-app
            port:
              number: 80
  ingressClassName: nginx
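
Because the Ingress host references ${INGRESS_IP}, the variable must be substituted before the manifest is applied. Assuming the manifest above is saved as hello-app.yaml (a filename chosen here for illustration), one way to do this is with envsubst:

envsubst < hello-app.yaml | kubectl apply -f -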

Now we need to deploy the new version (gcr.io/google-samples/hello-app:2.0) and follow the rolling update process. To do that, the following block should be added under spec in the Deployment manifest to configure the rolling update strategy:

The RollingUpdate strategy lets Kubernetes replace Pods gradually, defining how many Pods can be created or deleted during the update.

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0

  • maxSurge: 1 means Kubernetes can create one extra Pod (the new version) during the rollout. This allows the new Pod to start and become ready before any old Pod is terminated.

  • If you set maxSurge: 2, Kubernetes could start two new Pods at a time, which can speed up the rollout but may temporarily increase resource usage.

  • maxUnavailable: 0 ensures zero downtime. No Pods are taken down until their replacements are healthy and ready.

These configurations instruct Kubernetes to:

  • Create 1 new v2 Pod and wait for it to be ready.
  • Once it is ready, terminate 1 old v1 Pod.

It’s like changing the tyres of a moving car one at a time without stopping it.
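
With the strategy in place, we can release the new version and observe the rolling update. Here is a minimal sketch, assuming the Deployment and its container are both named hello-app as in the manifest above:

kubectl set image deployment/hello-app \
  hello-app=gcr.io/google-samples/hello-app:2.0

kubectl rollout status deployment/hello-app

kubectl get pods -l app=hello-app -w

You should see new Pods created one at a time, while old Pods are terminated only after their replacements report ready.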

Instead of absolute numbers, you can also use percentages, which are interpreted relative to the desired replica count (rounded up for maxSurge and down for maxUnavailable). For example, maxSurge: 50% means Kubernetes may create new-version Pods equal to half of the desired replicas at once, and maxUnavailable: 50% means up to half of the Pods may be unavailable during the update. The following configuration combines a fixed surge of one Pod with a percentage-based unavailability limit:

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 50%

To indicate that a Pod is ready, you are strongly advised to configure a solid readiness probe. Kubernetes knows nothing about the internal state of your application, so it needs this hint to decide when it is safe to send traffic to a new Pod and to start terminating an old one.

Example:
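
Here is a minimal sketch for hello-app, added under the container definition in the Deployment manifest; the path, port, and timing values are assumptions to tune for your own service:

readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3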
