Complete end-to-end Terraform Pipeline!

Introduction: This blog will guide you through creating an automation script that provisions the complete infrastructure and works as infrastructure as code (IaC). First, Terraform is explained in depth; then the automation script is walked through, which automates the creation of a key pair, a security group, an EC2 instance, and an EBS volume, the attachment and mounting of that volume, downloading code from GitHub to host on the webserver running on the AWS instance, the creation of an S3 bucket, adding data to that bucket, & finally creating a CloudFront distribution for it!

Harshit Dawar
5 min read · Jun 14, 2020

Ever wondered about launching your whole infrastructure with just one command? Yes, it seems impossible, but it is possible. In this blog, I will guide you through constructing the complete Pipeline described in the Introduction section above.

The future really is about automation. It is a fact, and it can be seen even in daily life: every one of us wants things to be automated. No one wants to wait even a second; everyone expects work to be completed at lightning speed, which is not possible if the work is done manually. Therefore, to achieve that speed, the work has to be done automatically.

Today’s world can’t survive without automation, and at present almost every company uses it. Consider banks, for example: they send a message for every transaction, and that process is automated, not done manually. There are tons of examples like these, & I am sure most of you experience them in your daily life.

Prerequisites for Implementing this Pipeline

  • Some knowledge of AWS & GitHub.
  • It is a plus if you know JSON, because this Pipeline uses HCL (HashiCorp Configuration Language), Terraform’s native language, which is very similar to JSON.

Explanation of Terraform

It is a tool developed by HashiCorp; you can find it at https://www.terraform.io.

It is a tool that can handle multi-cloud setups. It is standardized, i.e. we write the code in one language only, and Terraform uses it to interact with multiple clouds (technically known as Terraform Providers). Its providers are listed in its documentation, linked above.

Terraform uses a declarative language, i.e. we just tell it what has to be done, and it automatically works out the current situation & does that thing for us.

Terraform is intelligent because it has a plugin for each of its providers; using these plugins as APIs, it can interact with any of them.

Now you have more than enough knowledge for this Pipeline, so let’s begin the explanation of the complete Terraform automation Pipeline.

List of steps in the Pipeline

  1. Setting up the Terraform Provider.
  2. Creation of Key-Pair for AWS Instance.
  3. Creation of Security-Group for AWS Instance.
  4. Creating an EC2 instance.
  5. Entering into EC2 Instance to install some software.
  6. Creating another EBS volume & attaching it to the EC2 instance.
  7. Mounting the EBS volume to the EC2 instance.
  8. Creating an S3 bucket.
  9. Uploading data into the S3 bucket.
  10. Creating an AWS CloudFront distribution for that data uploaded into the S3 bucket.

Code of the End-to-End Pipeline

Each part of this pipeline is explained in the code shown below, with the help of comments in each part.

Step 1: Setting up the Terraform Provider.

Creating the provider block for Terraform!
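The original code appeared as a screenshot; here is a minimal sketch of what such a provider block looks like (the region and profile values are assumptions, substitute your own):

```hcl
# Tell Terraform which cloud to talk to & with which credentials.
provider "aws" {
  region  = "ap-south-1"   # assumed region
  profile = "default"      # assumed AWS CLI profile configured on your system
}
```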

Step 2: Creation of Key-Pair for AWS Instance.

Creating a public key!
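A minimal sketch of this step, assuming the key is generated locally with the tls provider and its public half is registered with AWS (resource and key names are illustrative):

```hcl
# Generate an RSA key pair locally.
resource "tls_private_key" "pipeline_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the public key with AWS so the instance can use it.
resource "aws_key_pair" "pipeline_key" {
  key_name   = "terraform-pipeline-key"   # assumed key name
  public_key = tls_private_key.pipeline_key.public_key_openssh
}
```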
Key Pair created in AWS!

Step 3: Creation of Security-Group for AWS Instance.

Creating a security group!
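A sketch of a security group that opens SSH (port 22) for the provisioners used in later steps and HTTP (port 80) for the webserver (the group name and the open-to-the-world CIDR blocks are assumptions):

```hcl
resource "aws_security_group" "web_sg" {
  name        = "terraform-pipeline-sg"
  description = "Allow SSH and HTTP traffic"

  # Inbound SSH, needed for the remote-exec provisioners below.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Inbound HTTP for the webserver.
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```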
Security Group created in AWS!

Steps 4 & 5: Creating an EC2 instance & entering it to install some software.

An instance is created & software (git) is installed on it!
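A sketch covering both steps, assuming an Amazon Linux 2 AMI and a remote-exec provisioner that installs git & the Apache webserver over SSH (the AMI ID and instance type are assumptions):

```hcl
resource "aws_instance" "web" {
  ami             = "ami-0447a12f28fddb066"   # assumed Amazon Linux 2 AMI
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.pipeline_key.key_name
  security_groups = [aws_security_group.web_sg.name]

  # SSH connection details used by the provisioner below.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.pipeline_key.private_key_pem
    host        = self.public_ip
  }

  # Install the software inside the freshly launched instance.
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd git",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "terraform-pipeline-web"
  }
}
```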
The EC2 instance is created & the software is installed on it; verification can be done by accessing the instance in whichever way is possible, for example over SSH.

Step 6: Creating another EBS volume & attaching it to the EC2 instance.

Volume created by Terraform!
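A sketch of the volume plus its attachment, assuming a 1 GiB volume created in the instance’s availability zone and attached as /dev/sdh:

```hcl
resource "aws_ebs_volume" "web_volume" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1   # size in GiB; assumed value
}

resource "aws_volume_attachment" "web_attach" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.web_volume.id
  instance_id  = aws_instance.web.id
  force_detach = true   # lets "terraform destroy" detach a mounted volume
}
```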
Proof of the volume attached to the AWS EC2 instance; it can be verified by the instance name and the device name of the EBS volume used above!

Step 7: Mounting the EBS volume to the EC2 instance.

The external EBS volume created above will be mounted on the EC2 instance at the folder “/var/www/html”!
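A sketch using a null_resource that waits for the attachment, then formats the disk, mounts it at /var/www/html, and clones the GitHub repo onto it (the repo URL is a placeholder; on Amazon Linux a volume attached as /dev/sdh typically appears as /dev/xvdh):

```hcl
resource "null_resource" "mount_volume" {
  # Do not run until the volume is actually attached.
  depends_on = [aws_volume_attachment.web_attach]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.pipeline_key.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",             # format the new volume
      "sudo mount /dev/xvdh /var/www/html",   # mount it at the web root
      "sudo rm -rf /var/www/html/*",          # clear the mount point
      # Placeholder repo URL; replace with your own webserver code.
      "sudo git clone https://github.com/<your-username>/<your-repo>.git /var/www/html/",
    ]
  }
}
```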
Drive mounted on the same AWS Instance!

Step 8: Creating an S3 bucket.

Code to create an S3 bucket!
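A sketch of the bucket; note that S3 bucket names must be globally unique, and the acl argument matches the AWS provider versions current when this post was written (provider v4+ moved ACLs into a separate aws_s3_bucket_acl resource):

```hcl
resource "aws_s3_bucket" "pipeline_bucket" {
  bucket = "terraform-pipeline-demo-bucket"   # assumed name; must be globally unique
  acl    = "public-read"                      # make the bucket publicly readable
}
```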
S3 Bucket Created!

Step 9: Uploading data into the S3 bucket.

Code to upload data to S3!
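A sketch using aws_s3_bucket_object, the resource name in provider versions of that era (renamed aws_s3_object in v4+); the object key and local source file are assumptions:

```hcl
resource "aws_s3_bucket_object" "image" {
  bucket = aws_s3_bucket.pipeline_bucket.bucket
  key    = "image.png"   # assumed object name in the bucket
  source = "image.png"   # assumed local file to upload
  acl    = "public-read"
}
```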
File Uploaded by Terraform!

Step 10: Creating an AWS CloudFront distribution for that data uploaded into the S3 bucket.

CloudFront Distribution Code!
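Since the original screenshot isn’t reproduced here, below is a minimal sketch of a distribution pointing at the bucket above, with the important settings commented (the origin ID and default root object are assumptions):

```hcl
resource "aws_cloudfront_distribution" "s3_distribution" {
  # Where CloudFront fetches the content from: the S3 bucket above.
  origin {
    domain_name = aws_s3_bucket.pipeline_bucket.bucket_regional_domain_name
    origin_id   = "S3-pipeline-origin"
  }

  enabled             = true          # switch the distribution on
  default_root_object = "image.png"   # assumed object from Step 9

  # How CloudFront caches and serves requests.
  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-pipeline-origin"   # must match origin_id above

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"   # serve over both HTTP & HTTPS
  }

  # No geographic restrictions on who can access the content.
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  # Use the default *.cloudfront.net certificate.
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```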

In the above code, the important settings for creating a CloudFront distribution are explained through comments; the settings that are not explained are not necessary for now. Copy them exactly as shown!

CloudFront Distribution Created by Terraform!

If the code shown up to here is combined into one single file and executed, it becomes the complete infrastructure as code (IaC).

This concludes our Pipeline!

Important Commands & Facts to run this code

  • You should have Terraform installed on your system!
  • After copying this code in a file, save that file with the “.tf” extension.
  • Run the “terraform init” command to download the required provider plugins.
  • Then run “terraform apply” to create your complete infrastructure!
  • Finally, when your work is completed, destroy your environment with the command “terraform destroy”.

I hope my article explains everything related to the complete end-to-end Terraform-DevOps-GitHub-Cloud Pipeline, along with the explanation, configuration, & execution of the code. Thank you so much for investing your time in reading my blog & boosting your knowledge!
