
Creating your first module using Terraform


Deploying infrastructure manually is an outdated practice. Using Terraform to automate deployments is the new normal. In this blog, we explain in detail how to create your first module using Terraform.

Before the advent of cloud and DevOps, most companies managed and deployed their infrastructure manually. This was risky: not only was it error-prone, it also slowed down the entire infrastructure lifecycle. The good news is that most companies are no longer deploying infrastructure manually but are instead using tools like Terraform. In this blog, we are going to cover Terraform and the use of Terraform modules.

Introduction to Infrastructure as Code (IaC)

The key idea behind Infrastructure as Code (IaC) is to manage almost 'everything' as code, where everything includes your servers, network devices, databases, application configuration, automated tests, deployment process, etc. This covers every stage of the infrastructure lifecycle: defining, deploying, updating, and destroying resources. The advantage of defining every resource as IaC is that you can version control it, reuse it, validate it, and build a self-service model in your organization.

Intro to Terraform and how it fits into the IaC space

Terraform is an open-source tool written in Go, created by HashiCorp, and used to provision and manage infrastructure as code. It supports multiple providers such as AWS, Google Cloud, Azure, OpenStack, etc. For the complete list of providers, see the Terraform Registry.

Now that you have a brief idea about Terraform, let's understand how Terraform fits into the IaC space and how it differs from other tools (Chef, Puppet, Ansible, CloudFormation) in that space. Some of the key differences are:

  • Ansible, Chef, and Puppet are configuration management tools (used to push/pull configuration changes), whereas Terraform is used to provision infrastructure. Conversely, you can use configuration management to build infrastructure and Terraform to run configuration scripts, but that is not ideal. The better approach is to use them in conjunction: for example, Terraform to build the infrastructure and Puppet to configure the newly built servers.
  • The next significant difference is mutable vs. immutable infrastructure. Terraform favors immutable infrastructure: when a change cannot be applied in place, it replaces the resource with an entirely new one. On the other hand, if changes are pushed via Puppet, the existing servers are updated in place, which can lead to configuration drift in the long run.
  • Another difference is open source vs. proprietary. Terraform is an open-source tool and works with almost all the major providers, as we discussed above, whereas tools like CloudFormation are proprietary and specific to AWS.

Terraform Installation

Installing Terraform is pretty straightforward, as it ships as a single binary; choose the binary for your platform from the Terraform downloads page. Then:

  • Download the binary (macOS in this example)
  • Unzip the package
  • Copy the binary to a directory on your PATH
    echo $PATH

    sudo cp terraform /opt/homebrew/bin
  • Restart your terminal session and verify the Terraform installation
    terraform version
    Terraform v1.0.6

How Terraform Works

Terraform works by making API calls on your behalf to the provider (AWS, GCP, Azure, etc.) you defined. To make an API call, it first needs to be authenticated, and that is done with the help of API keys (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). To create an IAM user and its corresponding keys, see the AWS IAM documentation.

How much permission the user has is defined with the help of an IAM policy. To attach an existing policy to the user, see the AWS IAM documentation.

To use these keys, export them as environment variables:

$ export AWS_ACCESS_KEY_ID="abcxxx"
$ export AWS_SECRET_ACCESS_KEY="xyzasdd"

There are other ways to configure these credentials; see the AWS provider documentation for more options.

Now the next question: how does Terraform know which API to call? This is where you define the code in a Terraform configuration file (typically ending in .tf). These configuration files are the code in Infrastructure as Code (IaC).

How Terraform helps in creating immutable infrastructure using state files

Every time you run Terraform, it records information about your infrastructure in a Terraform state file (terraform.tfstate). This file stores information in JSON format and contains a mapping of the resources in your configuration files to the real-world resources in your AWS account. When you run a Terraform command, it fetches each resource's latest status, compares it with the tfstate file, and determines what changes need to be applied. If Terraform sees a drift, it will re-create or modify the resource.

Note: As you can see, this file is critically important. It is always good to store this file in remote storage, for example, S3. That way, every team member has access to the same state file. Also, to avoid race conditions (two team members running Terraform simultaneously and updating the state file), it's a good idea to apply locking, for example, via DynamoDB. For more information on how to do that, see the Terraform documentation on the S3 backend.
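A sketch of such a remote backend configuration, assuming an S3 bucket and a DynamoDB table (with a LockID partition key) already exist; the bucket and table names here are illustrative:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # hypothetical bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"       # hypothetical table used for state locking
    encrypt        = true
  }
}
```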

Introduction to Terraform modules

A Terraform module is a set of Terraform configuration files (*.tf) in a directory. The main advantage of using modules is reusability: you can use modules published in the Terraform Registry, or share modules you have created with your team members.

Writing your first terraform code

With all the prerequisites in place (the AWS access and secret keys configured), it's time to write our first Terraform code. Before we start, let's see how we are going to organize the files:

  • main.tf: This is our main configuration file, where we are going to define our resource definitions.
  • variables.tf: This is the file where we are going to define our variables.
  • outputs.tf: This file contains the output definitions for our resources.

NOTE: Filenames don't have any special meaning to Terraform as long as they end with the .tf extension, but these names are the standard convention followed in the Terraform community.

Let's first start with main.tf:
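A minimal provider block, using the region described below:

```hcl
provider "aws" {
  region = "us-west-2"
}
```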

  • This tells Terraform that we will use AWS as our provider and that we want to deploy our infrastructure in the us-west-2 (Oregon) region.
  • AWS has data centers all over the world, grouped into regions and availability zones. A region is a separate geographic area (Oregon, Virginia, Sydney), and each region has multiple isolated data centers, its availability zones (us-west-2a, us-west-2b, ...). For more info, see the AWS documentation on regions and availability zones.
  • The next step is to define the resource we want to create; in this example, we will build an EC2 instance. The general syntax for creating a resource in Terraform looks like this:

  • PROVIDER is the name of the provider, 'aws' in this case
  • TYPE is the type of resource we want to create, for example, instance
  • NAME is the identifier we are going to use throughout our Terraform code to refer to this resource
  • CONFIG consists of one or more arguments that are specific to the particular resource
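Putting those placeholders together, the skeleton looks like this (note that PROVIDER and TYPE are joined with an underscore into a single string, e.g. aws_instance):

```hcl
resource "PROVIDER_TYPE" "NAME" {
  CONFIG = "..."
}
```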

Now that you understand the syntax for creating a resource, it’s time to write our first terraform code.

  • Here we are using the aws_instance resource to create an EC2 instance. ec2-instance is the identifier we will use to refer to this instance in the rest of the code. For more information about the aws_instance resource, see its page in the Terraform Registry.
  • ami: An AMI (Amazon Machine Image) is the image used to boot an EC2 instance. In this case, we are using an Amazon Linux AMI, but feel free to pick an AMI based on your requirements; see the AWS documentation on AMIs for more information.
  • instance_type: AWS provides different instance types for different workload requirements. For example, a t2.micro instance provides 1 GB of memory and 1 virtual CPU; see the AWS documentation on instance types for the full list.
  • vpc_security_group_ids: Here we refer back to the security group, getting its id by referencing the aws_security_group resource through its mysg identifier.

In the next section, we create a security group using the aws_security_group resource that allows inbound traffic on port 22.

  • name: The name of the security group. If you omit it, Terraform assigns a random unique name.
  • description: A description of the security group. If you don't set one, the default value "Managed by Terraform" is used.
  • vpc_id: The id of the Virtual Private Cloud in your AWS account where you want to create this security group.
  • ingress: In this block, you define which ports to allow for incoming connections.
  • from_port: The start of the port range
  • to_port: The end of the port range
  • protocol: The protocol for the port range
  • cidr_blocks: The list of CIDR blocks from which you want to allow traffic
  • tags: Tags assigned to the resource. Tags are a great way to identify resources.
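A sketch of the main.tf described above; the AMI id and VPC id are placeholder values you would replace with ones from your own account:

```hcl
resource "aws_instance" "ec2-instance" {
  ami                    = "ami-0123456789abcdef0" # hypothetical Amazon Linux AMI id
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.mysg.id]

  tags = {
    Name = "my-first-ec2-instance"
  }
}

resource "aws_security_group" "mysg" {
  name        = "allow-ssh"
  description = "Allow inbound SSH"
  vpc_id      = "vpc-0123456789abcdef0" # hypothetical VPC id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "mysg"
  }
}
```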

As you can see in the above code, we are hardcoding the values of the AMI id, instance type, port, and VPC id. Later on, if we need to change these values, we must modify our main configuration file. It is much better to store these values in a separate file, and that is what we are going to do in the next step by moving all these variables and their definitions into variables.tf. The syntax of a Terraform variable looks like this:
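In skeleton form (NAME is a placeholder; description, type, and default are all optional):

```hcl
variable "NAME" {
  description = "..."
  type        = string
  default     = "..."
}
```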

So if you need to define a variable for the AMI id, it looks like this:
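For example (the default AMI id below is a placeholder):

```hcl
variable "ami_id" {
  description = "AMI id for the EC2 instance"
  type        = string
  default     = "ami-0123456789abcdef0" # hypothetical
}
```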

  • name: ami_id is the name of the variable; it can be any name
  • default: There are several ways to pass a value to a variable, for example, via an environment variable or the -var option. If no value is specified, the default value is used.

Our variables.tf after defining all these values will look like this:
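A variables.tf along these lines, with placeholder defaults:

```hcl
variable "ami_id" {
  description = "AMI id for the EC2 instance"
  type        = string
  default     = "ami-0123456789abcdef0" # hypothetical
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "port" {
  description = "Inbound port to allow"
  type        = number
  default     = 22
}

variable "vpc_id" {
  description = "VPC id where the security group is created"
  type        = string
  default     = "vpc-0123456789abcdef0" # hypothetical
}
```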

To reference these values in main.tf, we just need to add the var. prefix in front of the variable name:

ami = var.ami_id

The final main.tf will look like this:
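Under the same assumptions as before, the variable-driven main.tf might look like:

```hcl
resource "aws_instance" "ec2-instance" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.mysg.id]
}

resource "aws_security_group" "mysg" {
  name   = "allow-ssh"
  vpc_id = var.vpc_id

  ingress {
    from_port   = var.port
    to_port     = var.port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```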

The last file we are going to check is outputs.tf, whose syntax looks like this:
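In skeleton form:

```hcl
output "NAME" {
  value = VALUE
}
```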

Where NAME is the name of the output variable, and VALUE can be any Terraform expression that we would like to output.

Now the question is, why do we need it? Take a simple example: when we create this EC2 instance, we don't want to go back to the AWS console to grab its public IP; instead, we can expose the IP address as an output variable.

In the example, we refer to the aws_instance resource via the ec2-instance identifier and its public_ip attribute. For more information about the exported attributes, see the attributes reference in the aws_instance documentation.

Similarly, to get the id of the security group:
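Putting the two outputs together, outputs.tf might look like this, assuming the resource names used earlier:

```hcl
output "public_ip" {
  description = "Public IP of the EC2 instance"
  value       = aws_instance.ec2-instance.public_ip
}

output "security_group_id" {
  description = "Id of the security group"
  value       = aws_security_group.mysg.id
}
```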

Now that our Terraform code is ready, we run the following commands:

  • terraform init: Downloads the code for the providers (aws in this case) used in the configuration.
  • terraform fmt: Optional but recommended; rewrites Terraform configuration files in a canonical format.
  • terraform plan: Shows what Terraform will do before making any changes:
    1: (+ sign): Resource going to be created
    2: (- sign): Resource going to be destroyed
    3: (~ sign): Resource going to be modified in place
  • terraform apply: Applies the changes.

Terraform reads the code and translates it into API calls to the provider (AWS in this case). Go to your AWS console, and you should see your instance in the creating stage.

Figure 9: EC2 instance via AWS Console

NOTE: If you are executing these commands in a test environment and want to save costs, run the terraform destroy command to clean up the infrastructure.

Converting the Terraform code into a module, using AWS EC2 as an example

In the above example, we created our first Terraform code; now let's convert it into a module. The syntax of a module looks like this:
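In skeleton form:

```hcl
module "NAME" {
  source = "SOURCE"

  CONFIG = "..."
}
```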

  • NAME: The identifier you use throughout your Terraform code to refer to this module.
  • SOURCE: The path where the module code can be found.
  • CONFIG: One or more arguments that are specific to that module.

Let's understand this with the help of an example. Create a directory ec2-instance and move all the *.tf files(, and inside it.

mkdir ec2-instance
mv *.tf ec2-instance

Now, in the main directory, create a new main.tf, so your directory structure will look like this:


ls -ltr
drwxr-xr-x  5 plakhera  staff  160 Sep 12 18:55 ec2-instance
-rw-r--r--  1 plakhera  staff    0 Sep 12 18:57 main.tf

Our module code in the root main.tf will look like this:

  • ec2-instance is the name of the module, and as the source it references the directory we created earlier (where we moved all the *.tf files).
  • Next, we reference all the values of the variables that we defined inside variables.tf. This is the advantage of using modules: we no longer need to go inside the module to modify a value; we have one single place where we can refer to and modify it. The root main.tf after the change will look like this:
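Assuming the variable names used earlier, the root main.tf might look like this (the AMI and VPC ids are placeholders):

```hcl
module "ec2-instance" {
  source = "./ec2-instance"

  ami_id        = "ami-0123456789abcdef0" # hypothetical
  instance_type = "t2.micro"
  port          = 22
  vpc_id        = "vpc-0123456789abcdef0" # hypothetical
}
```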

How to version your modules and how it helps in creating separate environments (Production vs. Staging)

In the previous example, when we created the module, we gave it a location on our local filesystem under source. But in a real production environment, we can refer to a remote location, for example, GitHub, where we can also version control it.

source = ""

If you check the previous example again, we used t2.micro as the instance type, which is fine for a test or development environment but may not be suitable for production. To overcome this, you can tag your module releases; in this example, all the odd tags are for the development environment, and all the even tags are for the production environment.

For development:

$ git tag -a "v0.0.1" -m "Creating ec2-instance module for development environment"
$ git push --follow-tags

For Production:

$ git tag -a "v0.0.2" -m "Creating ec2-instance module for production environment"
$ git push --follow-tags

This is how your module code will look for the production environment, with changes made under source and instance_type.
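A sketch, assuming the module was pushed to a GitHub repository (the repository path and ids are hypothetical; v0.0.2 is the production tag from above):

```hcl
module "ec2-instance" {
  source = "github.com/<your-org>/ec2-instance?ref=v0.0.2"

  ami_id        = "ami-0123456789abcdef0" # hypothetical
  instance_type = "m5.large"              # a production-sized instance type
  port          = 22
  vpc_id        = "vpc-0123456789abcdef0" # hypothetical
}
```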

Terraform registry

In the previous step, we created our own module. If someone else in the company needs to build an EC2 instance, they shouldn't have to write the same Terraform code from scratch. Software development encourages code reuse, and most programming languages let developers push code to a central registry: Python has pip, and Node.js has npm. In the case of Terraform, the central registry is the Terraform Registry, which acts as a central repository for sharing modules and makes them easier to reuse and discover.


As you have learned, creating modules in Terraform requires minimal effort. By creating modules, we not only build reusable components in the form of IaC, but we can also version control them. Each module change can go through code review and an automated pipeline before it reaches production, and you can pin a separate module version for each environment and safely roll back in case of any issue.

Squadcast is an incident management tool that’s purpose-built for SRE. Your team can get rid of unwanted alerts, receive relevant notifications, work in collaboration using the virtual incident war rooms, and use automated tools like runbooks to eliminate toil. Start now for Free.


