Automation of AWS Cloud Infrastructure Using Terraform


This article will help us understand how to spin up instances in AWS using Terraform, an Infrastructure as Code tool. First, what is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire data center. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform can determine what changed and create incremental execution plans which can be applied.
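As a minimal illustration of this declarative style (the resource name here is hypothetical), a configuration file just describes the desired end state, and Terraform plans only the changes needed to reach it:

```hcl
# Hypothetical minimal configuration: the desired state is "one EC2 instance".
# Terraform compares this description with the real infrastructure and
# generates an execution plan containing only the differences.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0447a12f28fddb066" # any valid AMI ID for the region
  instance_type = "t2.micro"
}
```

Changing `instance_type` here and re-applying would produce an incremental plan that modifies only that attribute.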

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.

It is very difficult for a person to remember the commands and syntax of every public and private cloud, and to set up the same infrastructure multiple times by hand. Terraform lets us do all of this in a single click, making the process easy and simple.

Task description

We have to create/launch an application using Terraform:

1. Create the key and security group which allows the port 80.

2. Launch EC2 instance.

3. In this Ec2 instance use the key and security group which we have created in step 1.

4. Launch one Volume (EBS) and mount that volume into /var/www/html

5. A developer has uploaded the code into GitHub repo also the repo has some images.

6. Copy the GitHub repo code into /var/www/html

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Before that, here are the basic Terraform commands we will make use of:

terraform init - downloads the provider plugins required by the configuration
terraform validate - checks the configuration for syntax errors
terraform plan - creates an execution plan showing what will change
terraform apply - builds or changes the resources to match the configuration
terraform destroy - destroys all managed resources in a single command
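A typical workflow chains these commands together; a sketch, run inside the folder containing the .tf files:

```shell
# typical Terraform workflow, run in the configuration folder
terraform init       # download the provider plugins
terraform validate   # check the configuration syntax
terraform plan       # preview the changes before making them
terraform apply      # build the resources (add -auto-approve to skip the prompt)
terraform destroy    # tear everything down again when finished
```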

Create a separate folder for the web-page code, create the Terraform file (with the .tf extension) inside it, and then run terraform init so that the required plugins are downloaded for that particular folder.

terraform initialisation

Let’s go by each step to understand the task in the simplest manner

Step1

We need to specify the region and profile name to configure the AWS provider, which Terraform uses to log in to the AWS account and perform actions.

# provider
provider "aws" {
  profile = "default"
  region  = "ap-south-1"
}
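Optionally (this is an addition, not part of the original setup), the provider version can be pinned so that terraform init always downloads a compatible plugin:

```hcl
# optional: pin the AWS provider version (hypothetical version constraint)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.70"
    }
  }
}
```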

Step2

Creating the security group for the instance so our clients can access it from other devices. By default, AWS does not allow connections from outside the host: a firewall protects the instance, so we need to configure rules that allow inbound TCP traffic on port 22 for SSH and port 80 for HTTP.

# creating security group
resource "aws_security_group" "SG" {
  name        = "SG"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-4aeaf522"

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ping"
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "SG"
  }
}
security group which allows port 80
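Step 1 of the task also asks for a key pair. The steps below use a pre-created key named k1; if you want Terraform to register the key as well, a minimal sketch, assuming you have already generated the key pair locally and k1.pub holds the public key:

```hcl
# hypothetical: register a locally generated public key with AWS as "k1"
resource "aws_key_pair" "k1" {
  key_name   = "k1"
  public_key = file("C:/Users/SATHVIKA KOLISETTY/downloads/k1.pub")
}
```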

Step3

Launching an instance with the created key pair and security group. To connect to the instance we need to specify the path of the key and the public IP of the instance, and we install httpd, PHP, and git to deploy a web page.

# ec2 instance launch
resource "aws_instance" "terra" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  security_groups = ["SG"]
  key_name        = "k1"

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/SATHVIKA KOLISETTY/downloads/k1.pem")
    host        = aws_instance.terra.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd"
    ]
  }

  tags = {
    Name = "terra"
  }
}
ec2 instance

Step4

Creating an EBS block-storage volume of 1 GiB to attach to the instance, so that whatever data is uploaded is kept persistent.

# create volume
resource "aws_ebs_volume" "web_vol" {
  availability_zone = aws_instance.terra.availability_zone
  size              = 1
  tags = {
    Name = "web_vol"
  }
}
ebs volume of 1GiB

Step5

Attaching the EBS volume created above to the instance:

# attach volume
resource "aws_volume_attachment" "web_vol" {
  depends_on = [
    aws_ebs_volume.web_vol,
  ]
  device_name  = "/dev/xvdf"
  volume_id    = aws_ebs_volume.web_vol.id
  instance_id  = aws_instance.terra.id
  force_detach = true
}

Step6

To store data on the EBS volume, we first need to format the device with a filesystem and then mount it to /var/www/html/. Since git was installed earlier, the provisioner then downloads the code automatically by cloning the repository.

# format and mount the volume, then clone the repo on the instance
# (wrapped in a null_resource so it runs after the volume is attached;
# the resource name is chosen here for illustration)
resource "null_resource" "mount_and_clone" {
  depends_on = [
    aws_volume_attachment.web_vol,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/SATHVIKA KOLISETTY/downloads/k1.pem")
    host        = aws_instance.terra.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",
      "sudo mount /dev/xvdf /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/satvikakolisetty/cloudwitjenkins.git /var/www/html/"
    ]
  }
}
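Once the provisioner has run, you can verify the mount by SSHing into the instance; these check commands are a suggestion, not part of the original code:

```shell
# run on the EC2 instance to confirm the EBS volume is mounted
lsblk                 # xvdf should show /var/www/html as its mount point
df -h /var/www/html   # confirms the 1 GiB filesystem is mounted there
ls /var/www/html      # should list the cloned repository files
```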

Step7

Creating the S3 bucket to store the images so they can be served to the public via CloudFront, and adding an object to the bucket.

# s3 bucket
resource "aws_s3_bucket" "s3bucket" {
  bucket = "123mywebbucket321"
  acl    = "public-read"
  region = "ap-south-1"
  tags = {
    Name = "123mywebbucket321"
  }
}

# adding object to s3
resource "aws_s3_bucket_object" "image-upload" {
  depends_on = [
    aws_s3_bucket.s3bucket,
  ]
  bucket = aws_s3_bucket.s3bucket.bucket
  key    = "hybrid.png"
  source = "C:/Users/SATHVIKA KOLISETTY/desktop/terra/hybrid.png"
  acl    = "public-read"
}

Step8

Creating a CloudFront distribution with the S3 bucket as its origin:

# cloud front
variable "oid" {
  type    = string
  default = "S3-"
}

locals {
  s3_origin_id = "${var.oid}${aws_s3_bucket.s3bucket.id}"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [
    aws_s3_bucket_object.image-upload,
  ]

  origin {
    domain_name = aws_s3_bucket.s3bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Step9

Defining outputs so that the bucket name and the instance's public IP are printed after terraform apply:

output "bucketid" {
  value = aws_s3_bucket.s3bucket.bucket
}
output "myos_ip" {
  value = aws_instance.terra.public_ip
}

Step10

Connecting to the instance and deploying the image from the S3 bucket into /var/www/html; afterwards, the site is automatically opened in the Google Chrome browser.

# write the CloudFront image URL into the web page on the instance
# (wrapped in a null_resource so it runs after the distribution exists;
# the resource name is chosen here for illustration)
resource "null_resource" "deploywebsite" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/SATHVIKA KOLISETTY/downloads/k1.pem")
    host        = aws_instance.terra.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su <<END",
      "echo \"<img src='http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.image-upload.key}' height='200' width='200'>\" >> /var/www/html/satvi.html",
      "END",
    ]
  }
}
resource "null_resource" "openwebsite" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
    aws_volume_attachment.web_vol,
  ]
  provisioner "local-exec" {
    command = "start chrome http://${aws_instance.terra.public_ip}/"
  }
}

We can perform the above task using Jenkins as well.

Before integrating it with Jenkins, you have to change some configuration on your instance. Here I am using the Jenkins master-slave method for the automation: as soon as the developer pushes a file, it is automatically copied into the /var/www/html folder of the EC2 instance.

Before that, you need to connect to the EC2 instance via SSH and switch to the root account.

First, you have to log in to your root by using

sudo su - root

Go to the ssh configuration file and apply the following changes.

vi /etc/ssh/sshd_config

Remove the comment tag before the following lines

PermitRootLogin yes
PasswordAuthentication yes

Now set the password for your root account and restart the SSH service:

passwd root
service sshd restart

Now we have to create a Jenkins slave node: go to Manage Jenkins -> Manage Nodes and Clouds -> New Node. On top of this slave node we will run our Jenkins job.
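The copy step of such a job can be sketched as an "Execute shell" build step; the wiring here is an assumption: the job is assumed to run on the slave node and to be triggered by a GitHub webhook on every push, with the repo checked out into the job's workspace:

```shell
# hypothetical Jenkins "Execute shell" build step, run on the slave node:
# copy the freshly checked-out workspace contents into the web root
sudo cp -rvf * /var/www/html/
```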