It's pretty simple to configure EKS Fargate to run Tekton Pipelines, and it saves you the hassle of running dedicated nodes just for CI/CD pipelines.
Continuous Delivery is hard business! Especially if you’re dealing with microservices. Jenkins does work pretty well up to a scale, with shared libraries of sorts for common builds, but after a while, when you’re running your SaaS on microservices like we do at Digité, managing the builds and the infrastructure for CI/CD can get cumbersome. It is for both optimized cloud infrastructure usage and the ability to easily write and maintain CD pipelines that we considered moving to Tekton.
Having said that, blocking two extra-large VMs for the “what if there are too many jobs running in parallel?” case does not appear natural to me, so I set out to make Tekton work on Fargate. The reason for Fargate is the ease of serverless: it lets us concentrate on managing our CI/CD pipelines without having to manage the infrastructure for them. In this post, I’ll share my experience on how to get a serverless CI/CD infrastructure for Tekton up and running quickly via Terraform.
Let’s start by creating a Terraform module for installing Tekton on Fargate. You can refer to this article for creating a basic EKS Fargate cluster setup. Assuming you have that in place, the next steps are as follows.
We’ll first create the Fargate profile for running Tekton, Tekton Dashboard and Tekton Triggers in the tekton-pipelines namespace.
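A minimal sketch of such a profile is below; the resource and variable names (`aws_eks_cluster.this`, `aws_iam_role.fargate_pod_execution`, `var.private_subnet_ids`) are assumptions carried over from a basic EKS Fargate cluster module and will differ in your setup.

```hcl
# Fargate profile so that pods scheduled in the tekton-pipelines
# namespace land on Fargate instead of managed nodes.
resource "aws_eks_fargate_profile" "tekton" {
  cluster_name           = aws_eks_cluster.this.name
  fargate_profile_name   = "tekton"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = var.private_subnet_ids

  # Selector matching the namespace Tekton, the Dashboard and
  # Triggers are installed into.
  selector {
    namespace = "tekton-pipelines"
  }
}
```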
EFS is AWS’s recommended approach for mounting persistent volumes on Fargate nodes; hence, we’ll add the EFS configuration in the next steps.
It’s good practice to restrict EFS access to the VPC running the EKS cluster, and to your internal network so that IAM-controlled users can access it over the AWS CLI. Declare a security group with an ingress rule for each subnet CIDR of the VPC running EKS Fargate to restrict access.
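A sketch of that security group follows; `aws_vpc.this` and `var.private_subnet_cidrs` are assumed names from the base VPC configuration.

```hcl
# Security group allowing NFS (port 2049) only from the subnets
# running the EKS Fargate workloads.
resource "aws_security_group" "efs" {
  name        = "tekton-efs"
  description = "Allow NFS traffic from the EKS Fargate subnets"
  vpc_id      = aws_vpc.this.id

  ingress {
    description = "NFS from EKS Fargate subnets"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = var.private_subnet_cidrs
  }
}
```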
While Fargate auto-installs the EFS CSI driver, we still have to declare an IAM policy for the cluster’s EFS access. Here’s how to do it in our Terraform module.
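The statements below are a trimmed-down sketch of the policy AWS publishes for the EFS CSI driver (describe file systems and access points, and create/delete access points for dynamic provisioning); check the current driver documentation for the full policy.

```hcl
# IAM policy covering the EFS CSI driver's control-plane calls.
resource "aws_iam_policy" "efs_csi" {
  name = "tekton-efs-csi-driver"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "elasticfilesystem:DescribeAccessPoints",
          "elasticfilesystem:DescribeFileSystems"
        ]
        Resource = "*"
      },
      {
        Effect   = "Allow"
        Action   = ["elasticfilesystem:CreateAccessPoint"]
        Resource = "*"
        Condition = {
          StringLike = { "aws:RequestTag/efs.csi.aws.com/cluster" = "true" }
        }
      },
      {
        Effect   = "Allow"
        Action   = ["elasticfilesystem:DeleteAccessPoint"]
        Resource = "*"
        Condition = {
          StringEquals = { "aws:ResourceTag/efs.csi.aws.com/cluster" = "true" }
        }
      }
    ]
  })
}
```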
With that done, we’ll define the cluster IAM for EFS access. First, the policy document which details the policy statements for the role:
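One way to express this is a trust policy letting the EFS CSI controller’s service account assume the role via the cluster’s OIDC provider (IRSA); `aws_iam_openid_connect_provider.this` is an assumed name from the base cluster module.

```hcl
# Trust policy: only the efs-csi-controller-sa service account in
# kube-system may assume the role, via the cluster's OIDC provider.
data "aws_iam_policy_document" "efs_csi_assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.this.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.this.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:efs-csi-controller-sa"]
    }
  }
}
```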
Then we add the role:
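A sketch, attaching the policy declared earlier (names are assumptions consistent with the snippets above):

```hcl
# Role assumable by the EFS CSI controller, with the EFS policy attached.
resource "aws_iam_role" "efs_csi" {
  name               = "tekton-efs-csi"
  assume_role_policy = data.aws_iam_policy_document.efs_csi_assume_role.json
}

resource "aws_iam_role_policy_attachment" "efs_csi" {
  role       = aws_iam_role.efs_csi.name
  policy_arn = aws_iam_policy.efs_csi.arn
}
```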
And then we map it to a service account:
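Assuming the Kubernetes provider is configured against the cluster, the mapping is the standard IRSA annotation:

```hcl
# Service account annotated with the role ARN so pods using it
# get the role's EFS permissions.
resource "kubernetes_service_account" "efs_csi" {
  metadata {
    name      = "efs-csi-controller-sa"
    namespace = "kube-system"
    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.efs_csi.arn
    }
  }
}
```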
With the IAM-linked service account in place, we’ll define the EFS file system:
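A sketch of the file system together with an access point for the Tekton workspace (POSIX IDs and the root directory path are illustrative choices, not requirements):

```hcl
resource "aws_efs_file_system" "tekton" {
  creation_token = "tekton"
  encrypted      = true

  tags = {
    Name = "tekton"
  }
}

# Access point scoping pod access to a dedicated directory on the
# file system, owned by a non-root POSIX user.
resource "aws_efs_access_point" "tekton" {
  file_system_id = aws_efs_file_system.tekton.id

  posix_user {
    uid = 1000
    gid = 1000
  }

  root_directory {
    path = "/tekton"
    creation_info {
      owner_uid   = 1000
      owner_gid   = 1000
      permissions = "755"
    }
  }
}
```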
And its mount targets and storage class:
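One mount target per private subnet, guarded by the security group from earlier, plus a StorageClass backed by the EFS CSI driver (`var.private_subnet_ids` is the assumed subnet list from the base module):

```hcl
resource "aws_efs_mount_target" "tekton" {
  for_each = toset(var.private_subnet_ids)

  file_system_id  = aws_efs_file_system.tekton.id
  subnet_id       = each.value
  security_groups = [aws_security_group.efs.id]
}

# StorageClass used by the PV/PVC definitions later in this post.
resource "kubernetes_storage_class" "efs" {
  metadata {
    name = "efs-sc"
  }
  storage_provisioner = "efs.csi.aws.com"
}
```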
Note the EFS file system and access point IDs in the Terraform output when applying these changes; they’ll be used in the PV and PVC definitions.
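If your module doesn’t already expose them, output declarations along these lines make the IDs easy to pick up after `terraform apply` (the actual values will look like `fs-…` and `fsap-…`):

```hcl
output "efs_file_system_id" {
  value = aws_efs_file_system.tekton.id
}

output "efs_access_point_id" {
  value = aws_efs_access_point.tekton.id
}
```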
It’s pretty simple from here on; the following command installs Tekton:
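This is the standard install command from the Tekton Pipelines documentation, applying the latest released manifests:

```shell
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
```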
followed by Tekton dashboard (read-only install)
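Applied from the downloaded file (the local filename here is an assumption; use whatever you saved the release YAML as):

```shell
kubectl apply -f tekton-dashboard-release-readonly.yaml
```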
after downloading the read-only YAML from this GitHub link. Next, we set up the persistent volume; refer to the EFS IDs generated by the Terraform run in your PV definition. Here’s an example of a PV and PVC that will be used by a Maven task in a Tekton pipeline.
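A sketch of such a pair, using the `efs-sc` StorageClass declared in the Terraform module; the `volumeHandle` placeholders must be replaced with the file system and access point IDs from your Terraform output, and the names and storage size are illustrative.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tekton-maven-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    # <file-system-id>::<access-point-id> from the Terraform output
    volumeHandle: fs-xxxxxxxx::fsap-xxxxxxxx
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tekton-maven-pvc
  namespace: tekton-pipelines
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```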
While the Tekton installation itself doesn’t change (you’re using a kubectl apply command as always), we have to be aware of how Fargate profiles are applied to workloads running on EKS Fargate, and therefore provision a Fargate profile using the existing Tekton annotations as its selectors so that our tasks can run on Fargate. Other than that, we have to provision and configure the PV and PVC via EFS so that tasks can use them at runtime.
With those in place, we have a working Tekton installation on EKS Fargate with a truly on-demand way of running builds and CI/CD pipelines.