@pbrissaud · Oct 16, 2021 · 5 min read
How to deploy a GCE Instance with Gitlab CI using Terraform and Ansible
Today we're going to discuss a tool that I love using every day: Gitlab. In just a few years it has grown from a simple Git repository into a veritable arsenal of DevOps tools. We will focus mainly on its CI/CD pipelines, but we will also use other features such as its Docker image registry and its Terraform state backend. The project is quite simple: deploy and provision a GCE instance with Terraform and Ansible, automatically, from a simple Git commit.
The project's files will be structured as follows:
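The original post showed the layout as an image; one plausible structure, based on the files discussed below (names are illustrative), is:

```
.
├── .gitlab-ci.yml
├── Dockerfile
├── main.tf
├── outputs.tf
├── backend.tf
└── playbook.yml
```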
The main.tf file contains a fairly basic configuration for deploying a VM on Google Cloud. Many values are variables, to allow flexibility when running the pipeline (a hedged sketch follows the notes below):
Line 22: We install python3 on the machine at startup so that Ansible can perform its actions without problems.
Lines 24-26: We import a public key so that Ansible can connect to the virtual machine without a password. Note that the value of the gcp_user variable must be your Google account username.
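The full main.tf is not reproduced here, so the line numbers above refer to the author's file, not to this excerpt. As a minimal sketch of the two points mentioned (provider configuration omitted, machine type, zone and image chosen arbitrarily):

```hcl
variable "project_name"   { default = "gitlab-terraform-ansible" } # illustrative
variable "gcp_user"       {} # your Google account username
variable "ssh_public_key" {} # path to the public key file

resource "google_compute_instance" "vm" {
  name         = var.project_name
  machine_type = "e2-small"
  zone         = "europe-west1-b"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    access_config {} # gives the VM a public IP
  }

  # Install python3 at startup so Ansible can run its modules
  metadata_startup_script = "apt-get update && apt-get install -y python3"

  # Import the public key so Ansible can log in without a password
  metadata = {
    ssh-keys = "${var.gcp_user}:${file(var.ssh_public_key)}"
  }
}
```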
The outputs.tf file must define an output variable holding the machine's public IP. We will use this variable to tell Ansible which host to run its playbook against.
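Assuming the resource is named as in the sketch above and the output is called public_ip, this is roughly:

```hcl
# Public IP of the VM, read from the first access_config of the first NIC
output "public_ip" {
  value = google_compute_instance.vm.network_interface[0].access_config[0].nat_ip
}
```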
The backend.tf file creates an HTTP backend for Terraform.
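The backend block itself can stay empty, since the address and credentials are supplied at init time by the pipeline (see the TF_ADDRESS variable later):

```hcl
terraform {
  backend "http" {}
}
```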
For Ansible, it's even easier! I'm just creating a playbook that displays the variables called ansible_facts, which Ansible collects during its fact-gathering phase when it connects to the host.
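A minimal playbook along those lines (file name playbook.yml is an assumption):

```yaml
# playbook.yml -- just print the facts gathered from the target host
- hosts: all
  gather_facts: true
  tasks:
    - name: Display the gathered facts
      ansible.builtin.debug:
        var: ansible_facts
```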
The CI/CD pipelines on gitlab.com's public runners use Docker containers to run whatever commands you want, so you need a Terraform image and an Ansible image. For the first, Gitlab provides one; but, surprisingly enough, there is no official Docker image for Ansible, so we will need to create one to run our playbook.
I decided to go with archlinux as the base image because my computer runs Manjaro and I don't want compatibility issues between Ansible versions. But it is quite possible to start from another base; there are plenty of examples on the Internet.
Either way, the Dockerfile doesn't contain anything fancy: I install the packages needed for Ansible and the junit_xml library (you'll see why later), I turn off SSH host key checking, and I create a generic user who will run the playbook. I could also have added Ansible collections or modules, as well as Python dependencies, if I needed to.
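A sketch of such a Dockerfile, under those assumptions (package names and the way host key checking is disabled may differ from the author's version):

```dockerfile
FROM archlinux:latest

# Ansible, an SSH client, and pip to install the junit_xml library
# used by the JUnit callback later in the pipeline
RUN pacman -Syu --noconfirm ansible openssh python-pip && \
    pacman -Scc --noconfirm && \
    pip install junit_xml

# Disable SSH host key checking so the playbook can run non-interactively
ENV ANSIBLE_HOST_KEY_CHECKING=False

# Generic, unprivileged user that will run the playbook
RUN useradd --create-home ansible
USER ansible
WORKDIR /home/ansible
```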
We'll build this image automatically in our pipeline, push it to Gitlab's built-in registry, and then use it in later steps.
Here we get to the heart of the matter! I took as a starting point the template Gitlab provides for a Terraform pipeline, available here. From this template, I removed the job that runs the "terraform destroy" command and switched the "build" job from manual to automatic.
I defined the stages of my pipeline like this:
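The exact stage names are not given in the text; purely as an illustration, they could look like this:

```yaml
# Illustrative stage list -- names are guesses based on the jobs described below
stages:
  - docker      # build the Ansible image
  - validate    # terraform validate + ansible-lint
  - build       # terraform plan
  - deploy      # terraform apply
  - provision   # run the Ansible playbook
```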
I used Gitlab's "needs" keyword to tell it which jobs depend on others. For example, this makes it possible to start the "plan" job right after the "validate" job, without waiting for the "lint" job (the Ansible lint). With this rearrangement, the pipeline runs much faster.
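A minimal sketch of that dependency, assuming the job names above and the gitlab-terraform wrapper from Gitlab's Terraform image:

```yaml
plan:
  stage: build
  needs: ["validate"]   # start as soon as 'validate' succeeds, without waiting for 'lint'
  script:
    - gitlab-terraform plan
```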
Also, for the same optimization purpose, I told Gitlab to run the "docker-build" job only when the Dockerfile has been modified.
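One way to express this, as a sketch using Docker-in-Docker and Gitlab's predefined registry variables (job and tag names are illustrative):

```yaml
docker-build:
  stage: docker
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/ansible:latest" .
    - docker push "$CI_REGISTRY_IMAGE/ansible:latest"
  rules:
    # rebuild the image only when the Dockerfile changes
    - changes:
        - Dockerfile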
In this pipeline, I also used a newer Gitlab feature: it can now store the Terraform state of the infrastructure, giving consistent builds. To use it, you only need to add a TF_ADDRESS variable pointing to the Terraform backend that holds the state of the infrastructure:
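Gitlab exposes its managed Terraform state over its API; the state name ("default" here) is up to you:

```yaml
variables:
  # Gitlab-managed Terraform state for this project
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/default
```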
To pass the variable containing the IP address of the virtual machine created in the "apply" job to the job running the Ansible playbook, we must use a "dotenv" report in the "artifacts" section of the job:
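A sketch of that job, assuming the Terraform output is named public_ip and the file is called deploy.env:

```yaml
apply:
  stage: deploy
  needs: ["plan"]
  script:
    - gitlab-terraform apply
    # write the VM's public IP into a dotenv file for the downstream Ansible job
    - echo "ANSIBLE_HOST=$(terraform output -raw public_ip)" >> deploy.env
  artifacts:
    reports:
      dotenv: deploy.env
```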
All jobs executed after this one can use the ANSIBLE_HOST environment variable.
And finally, I used the Gitlab feature that displays JUnit test reports in its interface. I used Ansible's junit callback to export the logs in the correct format, which is why the junit_xml library is installed in the image. Next, you must define the environment variables ANSIBLE_STDOUT_CALLBACK and JUNIT_OUTPUT_DIR before calling the playbook. In the "artifacts" section, we specify the path of the result files in XML format.
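Putting it together, the Ansible job could look roughly like this; the job name, the TF_VAR_gcp_user and SSH_PRIVATE_KEY CI variables, and the results directory are assumptions:

```yaml
provision:
  stage: provision
  image: $CI_REGISTRY_IMAGE/ansible:latest
  needs: ["apply"]          # receives ANSIBLE_HOST from the dotenv report
  variables:
    ANSIBLE_STDOUT_CALLBACK: junit                 # emit JUnit XML instead of plain logs
    JUNIT_OUTPUT_DIR: "$CI_PROJECT_DIR/results"    # where the callback writes its reports
  script:
    # trailing comma makes Ansible treat the single IP as an inventory
    - ansible-playbook -i "$ANSIBLE_HOST," -u "$TF_VAR_gcp_user" --private-key "$SSH_PRIVATE_KEY" playbook.yml
  artifacts:
    when: always
    reports:
      junit: results/*.xml
```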
You can see the final content of the .gitlab-ci.yml file in this gist.
First of all, we generate an SSH key pair so that Ansible can connect to the GCloud VM, using the following command:
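A typical invocation (the file name gcp_key and the comment are illustrative):

```bash
ssh-keygen -t rsa -b 4096 -C "ansible@gitlab-ci" -f gcp_key -N ""
```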
In addition, on Gitlab, we must set the environment variables of our CI/CD pipeline. I used the TF_VAR_<name> syntax to declare the Terraform variables; Terraform then picks them up at runtime. Remember to set the File type for the SSH keys and the GCloud credentials file:
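The original screenshot is not reproduced here; as a purely hypothetical example, the variables might look like this (adapt the names to your own variables.tf):

```
TF_VAR_gcp_user         (Variable)  your Google account username
TF_VAR_ssh_public_key   (File)      contents of gcp_key.pub
SSH_PRIVATE_KEY         (File)      contents of gcp_key, used by the Ansible job
TF_VAR_gcp_credentials  (File)      the GCloud service-account JSON
```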
We can now push our code and watch the pipeline run automatically! In the GCloud console, we can see the virtual machine created with the name of the project. You can navigate several Gitlab menus to see the actions performed.
We can download the state of our Terraform infrastructure or even lock it to prevent it from being modified.
We can consult all our Ansible playbook's actions, their status, and detailed logs.