In this article, we will learn how to easily set up Argo CD as an app of apps with Helm, deploy applications with Argo CD and subsequently manage these applications.
Argo CD? Helm? What are these things?
We need Argo CD because application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish. In this tutorial, we are mostly re-using existing charts (including one of ours). To learn how to build and publish your own charts to GitHub pages, you can check out this article.
Now we know them, what do we want to do with them?
We want to set up Argo CD on our Kubernetes cluster, and then use it to deploy other applications following the steps below:
Creating a Kubernetes Cluster (on GKE)
Configure project info
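Assuming the gcloud CLI is installed and authenticated, this step looks roughly like the following (the project ID and region are the ones used in this tutorial; substitute your own):

```shell
# Point gcloud at the project and region used in this tutorial
gcloud config set project dev-workloads
gcloud config set compute/region us-west1
```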
Create a cluster (autopilot in this case)
The cluster name here is 'demo-cluster', in the 'us-west1' region, and the project is 'dev-workloads'.
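With the project configured, creating the Autopilot cluster can be sketched as:

```shell
# Create a GKE Autopilot cluster; Autopilot provisions and manages nodes for us
gcloud container clusters create-auto demo-cluster \
  --region=us-west1
```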
You can log in to your cloud console environment to see the progress of the deployment.
Note: The cluster can also be created using the cloud environment’s UI (but where’s the fun in that?).
Deploy Argo CD to our Kubernetes cluster
Connect to demo-cluster
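Fetching credentials wires kubectl up to the new cluster; something like:

```shell
# Configure kubectl's context for demo-cluster
gcloud container clusters get-credentials demo-cluster \
  --region=us-west1

# Sanity check: list the cluster's nodes
kubectl get nodes
```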
Install Argo CD using its Helm chart
Typically, we would install Argo CD by applying its install manifest and following the official getting-started instructions, but instead we'll use Helm, because we love Helm and it makes our lives much easier (check out why in the Helm section above).
Add Argo CD Helm repository
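The official Argo Helm repository is added like any other chart repo:

```shell
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
```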
Install Argo CD
You can add the '--create-namespace' flag along with '-n <namespace>' to install Argo CD in a new namespace, or just '-n <namespace>' to install it in an existing one.
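Putting that together, an install into a fresh 'argocd' namespace might look like:

```shell
# 'argocd' is both the Helm release name and the namespace here
helm install argocd argo/argo-cd \
  -n argocd \
  --create-namespace
```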
Expose the service
By default, the Argo CD API server is not exposed with an external IP. To access it, we can change the argocd-server service to a LoadBalancer, use an Ingress, or simply port-forward. Each option has its strengths and weaknesses (worth checking out). In this tutorial, we will change the service to a LoadBalancer.
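A minimal way to switch the service type (the service and namespace names assume the Helm install above; adjust them if yours differ):

```shell
# Change the argocd-server service from ClusterIP to LoadBalancer
kubectl patch svc argocd-server -n argocd \
  -p '{"spec": {"type": "LoadBalancer"}}'

# Watch until an external IP is assigned
kubectl get svc argocd-server -n argocd -w
```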
Now we can visit the external IP to see Argo CD's pretty landing page.
Login to Argo CD
Fetch Argo CD's default password and log in (the username is 'admin').
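The initial admin password is stored in a Kubernetes secret; one common way to read it:

```shell
# Decode the auto-generated admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
```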
Connect Argo CD to our Git repository
We'll set up a Git repository on GitHub and point our Argo CD configuration to a directory in this repo. The idea is that, for the sake of maintainability, we can keep adding manifests to this directory and Argo CD will automatically deploy the corresponding apps.
Picture it this way: we add a root application resource, and every other app is a child of it. The root application generates manifests for the other applications; Argo CD watches it and synchronizes whatever it generates (don't think about it too much).
This way, we only need to add one application manually, and yes, that's the root application.
Set up a Git repository
In this tutorial, we will use an existing repository but you can create a new one if you need to. What is important is having a folder for all your Argo CD manifests.
Connect Argo CD to our Git repo
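One way to register the repo is with the argocd CLI; the repo URL and key path below are placeholders for your own:

```shell
# Log the CLI in to the API server first (use the external IP from earlier)
argocd login <EXTERNAL-IP> --username admin

# Register the Git repository over SSH
argocd repo add git@github.com:<your-org>/<your-repo>.git \
  --ssh-private-key-path ~/.ssh/id_ed25519
```

The same thing can be done from the Argo CD UI under Settings → Repositories.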
To get your SSH key, use:
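A sketch of the usual commands (the key path below is the ed25519 default; yours may differ):

```shell
# Generate a key pair if you don't already have one
ssh-keygen -t ed25519 -C "you@example.com"

# Print the public key, to be added to GitHub
cat ~/.ssh/id_ed25519.pub
```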
Find more info on SSH here.
Set up Argo CD as an app of apps
Create the root application
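A root Application could look like the sketch below; the repo URL and the 'apps' path are placeholders for your own setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:<your-org>/<your-repo>.git
    targetRevision: HEAD
    path: apps                  # the folder holding child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Apply it once with 'kubectl apply -f root.yaml'; from then on, Argo CD takes over.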
Deploy other applications using their Helm charts
Traditionally, in Argo CD, we deploy applications using the same kinds of manifests we would run 'kubectl apply' on. In our case, however, we'll point Argo CD at Helm charts (because it's easier).
This is usually a three-step process: add an Application manifest for the chart to our manifests folder, customize its values if needed, and let Argo CD sync it. The steps are the same for every app we deploy, though some apps come with special requirements.
Example: Deploying Nginx
A standard Nginx chart can be found here.
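As a sketch, an Application pointing at the Bitnami nginx chart might look like the following (the chart source and the pinned version here are assumptions; use whichever chart and version you prefer):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.bitnami.com/bitnami
    chart: nginx
    targetRevision: 15.14.0     # example value; pin a real chart version
    helm:
      values: |
        service:
          type: LoadBalancer
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}
```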
Most of these properties are common to Kubernetes deployment manifests.
Synchronization happens automatically, since we enabled automated sync in the Application manifest.
Click the ‘nginx’ app
We can see traffic (network) information using the network view
We can visit the Load Balancer IP address
Navigate back to applications and click the root application
From the image above, we can see that the ‘nginx’ application is represented as a child of the ‘root’ application. This is a basic representation of the ‘app of apps’ concept.
From here on out, your deployment process is basically on autopilot (on some level). All we'll keep doing is adding manifests and customizing them when we need to. Let's add the Grafana and Prometheus apps.
Example 2: Deploying Grafana
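A Grafana Application follows the same pattern; the chart repo below is the official Grafana Helm repository, and the version is an example to replace with your own:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://grafana.github.io/helm-charts
    chart: grafana
    targetRevision: 7.0.8       # example value; pin a real chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
```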
That’s it. Let’s do the same for Prometheus (you can try to do this one yourself before following the instructions below).
Example 3: Deploying Prometheus
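Again, the same pattern with the prometheus-community chart repo (the pinned version is an example):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    chart: prometheus
    targetRevision: 25.8.2      # example value; pin a real chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
```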
The error here is caused by a permission that GKE Autopilot doesn't grant us; you can safely ignore it.
In this article, we learned:
TBH, one way or another, something will work. — Timothy Olaleke
Suggested things to explore: