@pbrissaud · Oct 16, 2021 · 5 min read
How to deploy a GCE Instance with Gitlab CI using Terraform and Ansible
Today we're going to discuss a tool that I love using every day: Gitlab. In a few years, it has grown from a simple Git repository into a veritable arsenal of DevOps technologies. We will focus on its CI/CD pipelines, but we will also use other features such as its Docker image registry and its Terraform state backend. The project is quite simple: deploy and provision a GCE instance with Terraform and Ansible, automatically, from a simple Git commit.
The project will be structured as follows (each file is detailed below):
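All of these file names appear in the sections that follow; a flat layout at the repository root is an assumption here:

```
.
├── main.tf          # Terraform: GCE instance definition
├── outputs.tf       # Terraform: public IP output
├── backend.tf       # Terraform: HTTP state backend
├── main.yml         # Ansible playbook
├── Dockerfile       # image used to run Ansible in CI
└── .gitlab-ci.yml   # pipeline definition
```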
The main.tf file contains a fairly basic configuration for deploying a VM on Google Cloud. Many values are variables, to allow flexibility when running the pipeline:
variable "project" { }
variable "credentials_file" { }
variable "gcp_user" { }
variable "ssh_public_key" { }
variable "vm_name" { default = "virtualmachine" }
variable "vm_type" { default = "f1-micro" }
variable "region" { default = "europe-west1"}
variable "zone" { default = "europe-west1-b"}
variable "image" { default = "ubuntu-os-cloud/ubuntu-2004-focal-v20210623" }
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "3.5.0"
}
}
}
provider "google" {
credentials = file(var.credentials_file)
project = var.project
region = var.region
zone = var.zone
}
resource "google_compute_instance" "vm_instance" {
name = var.vm_name
machine_type = var.vm_type
tags = ["http-server"]
metadata_startup_script = "sudo apt-get update ; sudo apt-get install -yq python3"
metadata = {
ssh-keys = "${var.gcp_user}:${file(var.ssh_public_key)}"
}
boot_disk {
initialize_params {
image = var.image
}
}
network_interface {
network = "default"
access_config {
}
}
}
The metadata_startup_script installs python3 on the machine at startup so that Ansible can perform its actions without problems.
The ssh-keys metadata entry imports a public key so that Ansible can connect to the virtual machine without a password. Note that the value of the gcp_user variable must be your Google account username.
The outputs.tf file must define an output variable that will be the machine's public IP. We will use this variable to tell Ansible on which host it should run its playbook.
output "ip" {
value = google_compute_instance.vm_instance.network_interface.0.access_config.0.nat_ip
}
The backend.tf file declares an HTTP backend for Terraform, which lets Gitlab store the state for us.
```hcl
terraform {
  backend "http" {}
}
```
For Ansible, it's even easier! I'm just creating a playbook that displays the variables called ansible_facts, which Ansible collects during its fact-gathering phase by connecting to the host.
```yaml
- hosts: all
  become: true
  tasks:
    - name: Test
      debug:
        msg: "{{ ansible_facts }}"
```
The CI/CD pipelines on gitlab.com's public runners use Docker containers to run whatever commands you want, so you need both a Terraform image and an Ansible image. For the first, Gitlab provides one; but, surprisingly enough, there is no official Docker image for Ansible, so we will need to create one to run our playbook.
I decided to go with Arch Linux as the base image because my computer runs Manjaro, and I don't want compatibility issues between Ansible versions. But it is quite possible to start from another base; there are plenty of examples on the Internet.
```dockerfile
FROM archlinux:latest

# Ansible and its runtime dependencies (on Arch, the Python package is named "python")
RUN pacman -Syu --noconfirm \
    git \
    ansible \
    sshpass \
    python \
    python-pip

# Needed to export Ansible logs as JUnit reports (see the run-playbook job below)
RUN pip install junit_xml

# Disable strict host key checking
ENV ANSIBLE_HOST_KEY_CHECKING=False

WORKDIR /ansible

# Run the playbook as an unprivileged user
RUN useradd -ms /bin/bash ansible
USER ansible
```
Either way, the Dockerfile doesn't contain anything fancy: I install the packages needed for Ansible plus the junit_xml library (you'll see why later), I turn off host key checking for SSH, and I create a generic user who will run the playbook. I could also have added Ansible collections or modules, as well as Python dependencies, if I needed to.
We'll automatically build this image in our pipeline, upload it to the built-in registry on Gitlab, and then use it in later steps.
Here we get to the heart of the matter! I started from the model Gitlab provides for a Terraform pipeline, available here. From this example, I removed the job running the "terraform destroy" command and put the "build" job in automatic mode instead of manual.
I defined the stages of my pipeline like this:
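The stage names preparation, deploying, and provisioning appear verbatim in the job definitions below; the other names here are assumptions, so treat this as a sketch rather than the exact file:

```yaml
stages:
  - preparation   # build the Ansible Docker image
  - validation    # terraform validate, ansible-lint
  - building      # terraform plan
  - deploying     # terraform apply
  - provisioning  # run the Ansible playbook
```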
I used Gitlab's "needs" keyword to declare which jobs depend on which others. For example, this makes it possible to launch the "plan" job right after validation, without waiting for the "linting" job (the Ansible lint). With this rearrangement, the pipeline runs much faster.
Also, for the same optimization purpose, I told Gitlab to run the "build-image" job only when the Dockerfile has been modified.
```yaml
build-image:
  stage: preparation
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  rules:
    - changes:
        - Dockerfile
```
In this pipeline, I also used Gitlab's recent ability to store the state of the Terraform infrastructure, which gives us consistent runs. To use it, you only need to add a TF_ADDRESS variable indicating the address of the Terraform backend containing the state of the infrastructure:
```yaml
variables:
  ... # Other variables
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}
```
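For context, the "validate" and "plan" jobs inherited from Gitlab's template rely on the gitlab-terraform wrapper script, which reads TF_ADDRESS and configures the HTTP backend for you (the template also sets a default Terraform image from Gitlab's terraform-images registry). A minimal sketch, with stage names following the assumptions above:

```yaml
validate:
  stage: validation
  before_script:
    - cd ${TF_ROOT}
  script:
    - gitlab-terraform validate

plan:
  stage: building
  before_script:
    - cd ${TF_ROOT}
  script:
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  artifacts:
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
```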
To pass the variable containing the IP address of the virtual machine created by the "apply" job to the job running the Ansible playbook, we must use a "dotenv" report in the "artifacts" part of our job:
```yaml
apply:
  stage: deploying
  before_script:
    - cd ${TF_ROOT}
  script:
    - gitlab-terraform apply
    - echo "ANSIBLE_HOST=$(gitlab-terraform output ip | tr -d '\"')" > $CI_PROJECT_DIR/terraform.env
  environment:
    name: production
  dependencies:
    - plan
  needs:
    - plan
  artifacts:
    reports:
      dotenv: terraform.env
```
All jobs executed after this one can use the ANSIBLE_HOST environment variable.
And finally, I used Gitlab's ability to display JUnit test reports in its interface. I used Ansible's junit callback plugin to export the logs in the correct format, hence the junit_xml library installed in the Docker image. Next, you must define the ANSIBLE_STDOUT_CALLBACK and JUNIT_OUTPUT_DIR environment variables before calling the playbook. In the "artifacts" part, we specify the path of the result files in XML format.
```yaml
run-playbook:
  stage: provisioning
  image: "$CI_REGISTRY_IMAGE"
  before_script:
    - cd ${ANSIBLE_ROOT}
  script:
    - ANSIBLE_STDOUT_CALLBACK=junit JUNIT_OUTPUT_DIR="${CI_PROJECT_DIR}/results" ansible-playbook -i $ANSIBLE_HOST, -u $TF_VAR_gcp_user main.yml -e ansible_ssh_private_key_file=${SSH_PRIVATE_KEY}
  environment:
    name: production
  needs:
    - linting
    - apply
  artifacts:
    when: always
    paths:
      - results/*.xml
    reports:
      junit: results/*.xml
```
You can see the final content of the .gitlab-ci.yml file in this gist.
First of all, we generate an SSH key pair (so that Ansible can connect to the GCloud VM) with the following command:
```bash
ssh-keygen -t rsa -C "your_email@example.com" -f ~/.ssh/gitlab-ci-gcloud
```
In addition, on Gitlab, we must set the environment variables for our CI/CD pipeline. I used the TF_VAR_{var} syntax to declare the Terraform variables; Terraform then takes them into account automatically at runtime. Remember to choose the File type for the SSH keys and the GCloud credentials file:
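Based on the variables declared in main.tf and the names used by the pipeline jobs, the set to configure should look roughly like this (all values are placeholders):

```yaml
# Settings > CI/CD > Variables; all values below are placeholders
TF_VAR_project: my-gcp-project-id              # GCP project ID
TF_VAR_credentials_file: service-account.json  # type: File (GCloud credentials)
TF_VAR_gcp_user: your_google_username          # Google account username
TF_VAR_ssh_public_key: gitlab-ci-gcloud.pub    # type: File (public key)
SSH_PRIVATE_KEY: gitlab-ci-gcloud              # type: File (private key)
```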
We can now push our code and watch the pipeline run automatically! In the GCloud console, we can see the virtual machine created with the name of the project. On Gitlab, you can navigate several menus to see the actions performed.
We can download the state of our Terraform infrastructure, or even lock it to prevent it from being modified.
We can consult all our Ansible playbook's actions, their status, and detailed logs.