@ninaddesai · Jul 24, 2022 · 7 min read · Originally posted on ninad-desai.medium.com
In this post, we discuss in depth how to perform canary deployment, one of the most widely used forms of progressive delivery, using Argo Rollouts.
In Part 1 of this Argo Rollouts series, we saw what progressive delivery is and how you can achieve the blue-green deployment type using Argo Rollouts. We also deployed a sample app in a Kubernetes cluster using it. Do read the first part of this progressive delivery blog series if you haven't yet.
In this hands-on article, we will explore what the canary deployment strategy is and how you can achieve it using Argo Rollouts. But before that, let's first understand what canary deployment is and the need behind it.
What is Canary Deployment?
As rightly stated by Danilo Sato in his Canary Release article,
"Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody."
Canary is one of the most popular and widely adopted techniques of progressive delivery. Do you know why we call it a canary deployment and not anything else? The term comes from an old coal mining practice. Coal mines often contained carbon monoxide and other dangerous gases that could kill the miners, and canary birds are more sensitive to airborne toxins than humans, so miners would use them as early detectors. A similar approach is used in canary deployment: instead of putting all end users at risk, as in an old big-bang deployment, we start by releasing the new version of the application to a very small percentage of users, analyze whether everything is working as expected, and then gradually release it to a larger audience in increments.
Need for Canary Deployment
Some of us have already noticed that a new update of an app (like WhatsApp or Facebook) is sometimes visible to one of our friends but not to everyone else; that is the canary deployment strategy handling the new version rollout in the background. The problems that canary deployment tries to solve are:
How does Argo Rollouts handle Canary Deployment?
Once you start using the Argo Rollouts controller for a canary-style deployment, it creates a new ReplicaSet for the new version of the application (which creates a new set of pods) and divides traffic between the old stable version and the new canary version using the single Service object that was already routing traffic to the older stable version.
Now, let's try it ourselves with a hands-on exercise to see how it works in practice.
Lab/Hands-on of Argo Rollouts with Canary Deployment
If you do not have a Kubernetes cluster readily available for this lab, we recommend the CloudYuga platform-based version of this blog post. Otherwise, you can set up your own local kind cluster with the Nginx Ingress controller deployed, and follow along by executing the commands below against your kind cluster.
Clone the Argo Rollouts example GitHub repo or, preferably, fork it first.
Installation of Argo Rollouts controller
Create the namespace for the Argo Rollouts controller and install Argo Rollouts with the commands below. More about the installation can be found in the first part of this progressive delivery blog series.
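The standard way to do this, per the official Argo Rollouts installation instructions (the original article may pin a specific release instead of latest), is:

```bash
# Create a dedicated namespace for the controller
kubectl create namespace argo-rollouts

# Install the Argo Rollouts controller and its CRDs
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml
```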
You will see that the controller and other components have been deployed. Wait for the pods to be in the Running state.
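You can verify this with a quick check:

```bash
# All pods in the argo-rollouts namespace should reach the Running state
kubectl get pods -n argo-rollouts
```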
Install the Argo Rollouts kubectl plugin with curl for easy interaction with the Rollouts controller and resources.
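Following the official plugin installation steps (this example is for Linux; adjust the binary name for your OS and architecture):

```bash
# Download the latest kubectl-argo-rollouts plugin binary
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64

# Make it executable and move it onto your PATH
chmod +x ./kubectl-argo-rollouts-linux-amd64
sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts

# Confirm the plugin is installed
kubectl argo rollouts version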
Argo Rollouts also comes with its own GUI, which you can start with the command below.
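```bash
# Starts the Argo Rollouts dashboard, served locally on port 3100 by default
kubectl argo rollouts dashboard
```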
Now you can access the Argo Rollouts console at http://localhost:3100 in your browser. You will be presented with the UI shown below (currently it won't show anything, since we are yet to deploy any Argo Rollouts-based application).
Figure 1: Argo Rollouts Dashboard
Now, let's go ahead and deploy the sample app using the canary deployment strategy.
Canary Deployment with Argo Rollouts
To experience how canary deployment works with Argo Rollouts, we will deploy a sample app consisting of a Rollout with the canary strategy, a Service, and an Ingress as Kubernetes objects.
rollout.yaml content:
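The actual manifest lives in the example repo; a minimal sketch of what such a canary Rollout looks like is shown below. The object name, container name, image tag, port, and the exact step list here are assumptions consistent with the blue/yellow demo described in the rest of this walkthrough, not a copy of the repo's file.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-demo
spec:
  replicas: 5
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-demo
  strategy:
    canary:
      steps:
      - setWeight: 20   # shift 20% of traffic (1 of 5 pods) to the canary
      - pause: {}       # pause indefinitely until manually promoted
      - setWeight: 60
      - pause: {duration: 30s}
  template:
    metadata:
      labels:
        app: rollout-demo
    spec:
      containers:
      - name: rollout-demo
        image: argoproj/rollouts-demo:blue
        ports:
        - containerPort: 8080
```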
Here, the setWeight field dictates the percentage of traffic that should be sent to the canary, and the pause struct instructs the rollout to pause. When the controller reaches a pause step for a rollout, it will add a PauseCondition struct to the .status.PauseConditions field. If the duration field within the pause struct is set, the rollout will not progress to the next step until it has waited for the value of the duration field. Otherwise, the rollout will wait indefinitely until that pause condition is removed. By using the setWeight and pause fields, a user can declaratively describe how they want to progress to the new version. You can find more details about all the different parameters available in the Argo Rollouts documentation.
Now, we will create the Service object for this Rollout. service.yaml content:
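Again, a minimal sketch (name, selector, and ports are assumptions matching the Rollout sketch above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rollout-demo
spec:
  type: ClusterIP
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 8080  # containerPort of the rollout-demo pods
    protocol: TCP
  selector:
    app: rollout-demo
```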
Let's now create an Ingress object. ingress.yaml content:
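A minimal sketch, assuming the Nginx Ingress controller mentioned earlier and the Service name from the sketch above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rollout-demo
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rollout-demo  # routes all traffic to the single Service
            port:
              number: 80
```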
To keep things simple, let's create all these objects in the default namespace by executing the commands below.
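Assuming the manifests are saved with the filenames used above:

```bash
kubectl apply -f rollout.yaml -n default
kubectl apply -f service.yaml -n default
kubectl apply -f ingress.yaml -n default
```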
You can see all the objects created in the default namespace by running the command below.
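```bash
# Lists the Rollout, Service, Ingress, and pods created above
kubectl get rollout,svc,ingress,pods -n default
```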
Now, you can access the sample app at http://localhost:80 in your browser. You will see the app as shown below.
Figure 2: Sample app with blue-version
If you visit the Argo Rollouts console again at http://localhost:3100 in your browser, this time you will see the deployed sample app, as shown below.
Figure 3: Canary Deployment on Argo Rollouts Dashboard
Click on rollout-demo in the console, and it will present its current status as shown below.
Figure 4: Details of Canary Deployment on Argo Rollouts Dashboard
You can either use the GUI or the CLI to continue with this demo. You can also see the current status of this rollout by running the command below.
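```bash
# Shows the rollout's revisions, steps, and pod status (name as per the sketch above)
kubectl argo rollouts get rollout rollout-demo -n default
```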
Now, let's deploy the yellow version of the app using the canary strategy via the command line.
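The usual way to trigger the update is to change the image on the Rollout; here the rollout and container names follow the sketch above:

```bash
# Point the rollout-demo container at the yellow image to start the canary
kubectl argo rollouts set image rollout-demo rollout-demo=argoproj/rollouts-demo:yellow -n default
```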
You will see a new pod, i.e., one running the yellow version of our sample app, coming up.
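```bash
# A new canary pod running the yellow image should appear alongside the blue ones
kubectl get pods -n default
```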
Currently, only 20%, i.e., 1 out of 5 pods, will come online with the yellow version, and then the rollout will pause, as specified in the steps above. See line number 9 in the rollout.yaml.
On the Argo console, you would be able to see the new revision of the app with the changed image version running.
Figure 5: Another version of the sample app in Canary Deployment on Argo Rollouts Dashboard
If you visit http://localhost:80 in your browser, you will still see mostly the blue version, with only a small amount of yellow visible, as we have not yet fully promoted the yellow version of our app.
Figure 6: blue-yellow versions of the sample app
You can confirm this by running the command below, which shows that the new version is in a paused state.
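```bash
# The status should now read "Paused" with the canary at 20% weight
kubectl argo rollouts get rollout rollout-demo -n default
```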
Let's promote the yellow version of our app by executing the command below.
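```bash
# Unpauses the rollout and lets it continue through the remaining canary steps
kubectl argo rollouts promote rollout-demo -n default
```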
Run the following command, and you will see it scaling up the new, i.e., yellow version of our app completely.
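```bash
# Watch the rollout progress through its steps until the yellow version is fully scaled
kubectl argo rollouts get rollout rollout-demo -n default --watch
```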
The same can be confirmed by running the command below, which shows the old set of pods, i.e., the old blue version of our app, terminating or already terminated.
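```bash
# The blue-version pods should show Terminating or be gone entirely
kubectl get pods -n default
```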
Eventually, if you visit the app URL at http://localhost:80 in your browser, you will see only the yellow version, because we have fully promoted the yellow version of our app.
Figure 7: Sample app with yellow-version
Kudos! You have successfully completed a canary deployment using Argo Rollouts. You can delete this entire setup, i.e., our sample deployed app, using the command below.
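Assuming the manifests and filenames used earlier:

```bash
# Remove the Ingress, Service, and Rollout created for this demo
kubectl delete -f ingress.yaml -f service.yaml -f rollout.yaml -n default
```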
Summary
In this post, we saw how easily we can achieve the canary deployment style of progressive delivery using Argo Rollouts. Achieving canary deployment this way is simple, does not require a service mesh, and provides much better control over rolling out a new version of your application than the default rolling update strategy of Kubernetes.
I hope you found this post informative and engaging. I'd love to hear your thoughts on this post, so start a conversation on Twitter or LinkedIn :)
What next? Now that we have developed a deeper understanding of progressive delivery and performed a canary deployment with it, the next step is to dive deeper and try canary deployment with Analysis using Argo Rollouts. Stay tuned for that post.
You can find all the parts of this Argo Rollouts Series below:
Part 1: Progressive Delivery with Argo Rollouts: Blue Green Deployment
Part 2: Progressive Delivery with Argo Rollouts: Canary Deployment
References and further reading:
Argo Rollouts
Kubernetes Bangalore workshop on Argo Rollouts
Argo Rollouts - Kubernetes Progressive Delivery Controller
CICD with Argo
Originally published at https://www.infracloud.io on May 31, 2022.