
AWS Lambda & GitLab CI — Deployment and Database Migrations with ENV Variables for a Node.js Application

Photo by Towfiqu barbhuiya on Unsplash

While working on delivering some features for an application in my day-to-day work, my team and I faced a challenge when trying to automate the deployment of an AWS Lambda function through GitLab CI.

By design, an AWS Lambda function should be a small, simple application serving one purpose and one purpose only.

This was also the case for us, but our Lambda function also needed a database, something we had agreed upon during planning. We had to store a lot of different information, and we couldn't use configuration files since that would have increased the size of the application.

As some of you might know, AWS Lambda functions come with restrictions on the size of the codebase: if the unzipped deployment package is larger than 250 MB (or the zipped archive larger than 50 MB for direct uploads), the deployment will fail. Rather than trying to work around those limits, we wanted to keep the package small.

This was one of the main reasons why we decided to use a database. The second reason was that the amount of data was quite large, complex, and highly dynamic, meaning it could change a lot, and frequently.

We split this challenge into two parts: configuring the automation of the deployment, and executing the migrations.

1. Configuring Automation of the Deployment

This was the first challenge: since we were using a database, we had to configure and keep a database connection open in the application. For security reasons and for flexibility of configuration across environments, we stored all the values needed for the database connection in environment variables.

We had to find a way to replace those environment variables from GitLab CI in the YAML template file that we were using to describe the deployment through SAM and CloudFormation. The configuration of this file is shown below.
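(The template below is a reconstruction for illustration: the resource name, handler, runtime, and placeholder names are assumptions rather than the original values. The database placeholders happen to sit on lines 11–14, which are the lines the next step replaces.)

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  AppFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: dist/index.handler
      Runtime: nodejs18.x
      Environment:
        Variables:
          DB_HOST: DB_HOST_VALUE           # placeholder replaced in the CI job
          DB_USER: DB_USER_VALUE           # placeholder replaced in the CI job
          DB_PASSWORD: DB_PASSWORD_VALUE   # placeholder replaced in the CI job
          DB_NAME: DB_NAME_VALUE           # placeholder replaced in the CI job
```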

We had to tackle the environment variables on lines 11–14 of the configuration above. To replace those lines with the correct values needed for the database connection to work, we added the following command as a step before triggering the deployment in the GitLab CI configuration.
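(A sketch of such a replacement step, assuming the placeholder names from the template above and GitLab CI variables named DB_HOST, DB_USER, DB_PASSWORD, and DB_NAME; the temporary file name is also just an example.)

```bash
sed -e "s|DB_HOST_VALUE|${DB_HOST}|g" \
    -e "s|DB_USER_VALUE|${DB_USER}|g" \
    -e "s|DB_PASSWORD_VALUE|${DB_PASSWORD}|g" \
    -e "s|DB_NAME_VALUE|${DB_NAME}|g" \
    template.yml > template_tmp.yml
```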

This command creates a temporary configuration file with the correct database values retrieved from GitLab CI's variables management. The file is available only in the pipeline being run and is not available publicly unless it is explicitly stored in the job's artifacts. Assuming the GitLab CI configuration had the following information stored in the environment variables for the database:
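(These values are made up for illustration; real credentials should of course live only in masked CI variables.)

```
DB_HOST: mydb.eu-central-1.rds.amazonaws.com
DB_USER: lambda_user
DB_PASSWORD: super-secret-password
DB_NAME: app_db
```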

The temporary file created after running the command, with the values for the environment variables on lines 11–14 replaced, would look like this:
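(Continuing the sketch above, with the placeholder lines now carrying the example values.)

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  AppFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: dist/index.handler
      Runtime: nodejs18.x
      Environment:
        Variables:
          DB_HOST: mydb.eu-central-1.rds.amazonaws.com
          DB_USER: lambda_user
          DB_PASSWORD: super-secret-password
          DB_NAME: app_db
```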

After this template is created in the GitLab CI pipeline, we trigger the deployment with the following command, providing all the necessary arguments and reading some more environment variables from the GitLab CI configuration.
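(A sketch of the deploy command; the stack name, deployment bucket, and region are assumed to come from hypothetical CI variables STACK_NAME, DEPLOY_BUCKET, and AWS_REGION.)

```bash
sam deploy \
  --template-file template_tmp.yml \
  --stack-name "${STACK_NAME}" \
  --s3-bucket "${DEPLOY_BUCKET}" \
  --region "${AWS_REGION}" \
  --capabilities CAPABILITY_IAM \
  --no-fail-on-empty-changeset
```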

The final GitLab CI configuration for a Node.js application would look like this:
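(A sketch of a possible .gitlab-ci.yml putting the pieces together; job names, stages, images, and variables are assumptions rather than the original pipeline, and AWS credentials are assumed to be provided through CI variables.)

```yaml
image: node:18

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/

deploy:
  stage: deploy
  image: public.ecr.aws/sam/build-nodejs18.x   # assumption: image providing the SAM CLI
  script:
    # Replace the database placeholders with the values from GitLab CI variables
    - sed -e "s|DB_HOST_VALUE|${DB_HOST}|g" -e "s|DB_USER_VALUE|${DB_USER}|g" -e "s|DB_PASSWORD_VALUE|${DB_PASSWORD}|g" -e "s|DB_NAME_VALUE|${DB_NAME}|g" template.yml > template_tmp.yml
    # Deploy through SAM / CloudFormation
    - sam deploy --template-file template_tmp.yml --stack-name "${STACK_NAME}" --s3-bucket "${DEPLOY_BUCKET}" --region "${AWS_REGION}" --capabilities CAPABILITY_IAM --no-fail-on-empty-changeset
  only:
    - main
```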

2. Configuring Execution of Migrations

For this second challenge, as mentioned, we opted to use a database to store the relevant data needed for the functionality to work.

In our case, we opted for a MySQL database for various reasons.

We needed to consider a couple of things:
* The JSON files are uploaded to S3 as the first step of the deployment, by executing a TypeScript file in the CI (see the sketch after this list).
* The JSON files needed for the migrations are excluded from the AWS Lambda deployment package because of the size restrictions; in our case, those files were quite big.
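(A sketch of such an upload script, assuming the AWS SDK v2; the file name upload-migration-data.ts, the MIGRATIONS_BUCKET variable, the ./data folder, and the file list are all hypothetical.)

```typescript
// upload-migration-data.ts - uploads the migration JSON files to S3
import { S3 } from 'aws-sdk';
import { readFileSync } from 'fs';

const s3 = new S3({ region: process.env.AWS_REGION });

async function upload(files: string[]): Promise<void> {
  for (const file of files) {
    await s3
      .putObject({
        Bucket: process.env.MIGRATIONS_BUCKET as string,
        Key: file,
        Body: readFileSync(`./data/${file}`),
        ContentType: 'application/json',
      })
      .promise();
    console.log(`Uploaded ${file}`);
  }
}

upload(['products.json']).catch((error) => {
  console.error(error);
  process.exit(1);
});
```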

We opted for Sequelize as the ORM. Each migration is generated via the Sequelize CLI, which creates a new migration file. This file is then edited manually to read the data from the JSON files stored in an S3 bucket and insert that data into the respective database table.
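(A sketch of what such a migration can look like, assuming the AWS SDK v2, a hypothetical products.json object, and a products table; the original migrations differ in detail.)

```javascript
'use strict';

const AWS = require('aws-sdk');

module.exports = {
  async up(queryInterface) {
    // Read the seed data that the CI uploaded to S3 earlier
    const s3 = new AWS.S3({ region: process.env.AWS_REGION });
    const object = await s3
      .getObject({ Bucket: process.env.MIGRATIONS_BUCKET, Key: 'products.json' })
      .promise();
    const rows = JSON.parse(object.Body.toString('utf-8'));

    // Insert the records into the target table
    await queryInterface.bulkInsert('products', rows);
  },

  async down(queryInterface) {
    await queryInterface.bulkDelete('products', null, {});
  },
};
```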

To make this work, and to avoid getting stuck with a broken Lambda function in case a migration fails, we created another Lambda function for executing the migrations, with its own CloudFormation stack.

This migrations Lambda function is the first one that gets deployed, and it executes the migrations. As a first step, before the deployment of this function, the JSON files are uploaded to S3. Then the function is deployed, and once that's done we use the aws lambda invoke command to call it and execute the migrations.
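(A sketch of the invocation step, with a hypothetical MIGRATIONS_FUNCTION_NAME variable. Note that the CLI exits with 0 even when the function throws, so the response metadata has to be checked for a FunctionError field to make the job fail.)

```bash
aws lambda invoke \
  --function-name "${MIGRATIONS_FUNCTION_NAME}" \
  --invocation-type RequestResponse \
  response.json > invoke_result.json

# Fail the pipeline if the migrations Lambda reported an error
if grep -q '"FunctionError"' invoke_result.json; then
  cat response.json
  exit 1
fi
```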

Below you’ll find the configuration of the migrations Lambda function. Needless to say, it also goes through the same environment variable replacement process explained above.
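(Again a reconstruction for illustration; the resource name, handler, timeout, and the extra MIGRATIONS_BUCKET variable are assumptions.)

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MigrationsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: dist/migrate.handler
      Runtime: nodejs18.x
      Timeout: 300                        # migrations may take a while
      Environment:
        Variables:
          DB_HOST: DB_HOST_VALUE          # placeholders replaced in the CI job
          DB_USER: DB_USER_VALUE
          DB_PASSWORD: DB_PASSWORD_VALUE
          DB_NAME: DB_NAME_VALUE
          MIGRATIONS_BUCKET: MIGRATIONS_BUCKET_VALUE
```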

The GitLab CI configuration for this is as follows:
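(A sketch of the migration-related part of the pipeline; stage and job names, images, script paths, and variables are assumptions based on the steps described above.)

```yaml
stages:
  - upload
  - migrate
  - deploy

upload-migration-data:
  stage: upload
  image: node:18
  script:
    - npm ci
    # Upload the JSON files used by the migrations to S3
    - npx ts-node scripts/upload-migration-data.ts

run-migrations:
  stage: migrate
  image: public.ecr.aws/sam/build-nodejs18.x   # assumption: image providing the SAM and AWS CLIs
  script:
    # Replace the placeholders in the migrations template
    - sed -e "s|DB_HOST_VALUE|${DB_HOST}|g" -e "s|DB_USER_VALUE|${DB_USER}|g" -e "s|DB_PASSWORD_VALUE|${DB_PASSWORD}|g" -e "s|DB_NAME_VALUE|${DB_NAME}|g" -e "s|MIGRATIONS_BUCKET_VALUE|${MIGRATIONS_BUCKET}|g" template-migrations.yml > template_migrations_tmp.yml
    # Deploy the migrations function with its own CloudFormation stack
    - sam deploy --template-file template_migrations_tmp.yml --stack-name "${MIGRATIONS_STACK_NAME}" --s3-bucket "${DEPLOY_BUCKET}" --region "${AWS_REGION}" --capabilities CAPABILITY_IAM --no-fail-on-empty-changeset
    # Invoke the function and fail the job if the migrations fail
    - aws lambda invoke --function-name "${MIGRATIONS_FUNCTION_NAME}" response.json > invoke_result.json
    - if grep -q '"FunctionError"' invoke_result.json; then cat response.json; exit 1; fi
```

The deploy job from the earlier sketch would then run in the final deploy stage, so it only executes once the migration jobs have succeeded.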

If the migrations succeed, we continue with the deployment of the Lambda function containing the functionality updates; if they fail, that deployment never takes place.

PS: Be careful with the s3ForcePathStyle flag when configuring the S3 client that reads from the bucket, since it will cause the function execution to time out.
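(In terms of the migration sketch above, that means configuring the client without the flag.)

```javascript
// const s3 = new AWS.S3({ s3ForcePathStyle: true }); // caused the function execution to time out
const s3 = new AWS.S3({ region: process.env.AWS_REGION });
```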



