Cloud Automation Using Terraform


This article will help us understand how to spin up instances in AWS using the infrastructure-as-code tool Terraform. First, what is Terraform?

Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It enables users to define and provision data-center infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. Terraform supports a number of cloud infrastructure providers such as Amazon Web Services, IBM Cloud (formerly Bluemix), Google Cloud Platform, DigitalOcean, Linode, Microsoft Azure, Oracle Cloud Infrastructure, OVH, Scaleway, VMware vSphere and Open Telekom Cloud, as well as OpenNebula and OpenStack.

HashiCorp also runs the Terraform Module Registry, launched during the HashiConf 2017 conference. In 2019 HashiCorp introduced a paid version called Terraform Enterprise for larger organizations. Terraform has four major commands: terraform init, terraform plan, terraform apply, and terraform destroy.

Terraform has a great set of features that make it worth adding to your tool belt, including:

  • Friendly custom syntax, but also has support for JSON.
  • Visibility into changes before they actually happen.
  • Built-in graphing feature to visualize the infrastructure.
  • Understands resource relationships. One example: failures are isolated to dependent resources, while non-dependent resources still get created, updated, or destroyed.
  • An open-source project with a community of thousands of contributors who add features and updates.
  • The ability to break down the configuration into smaller chunks for better organization, re-use, and maintainability.

In this article we will:

  1. Create a security group that allows port 80.
  2. Launch an EC2 instance.
  3. For this EC2 instance, use an existing (or provided) key pair and the security group created in step 1.
  4. Create one volume using the EFS service, attach it to the VPC, and mount it on /var/www/html.
  5. The developers have uploaded the code to a GitHub repository.
  6. Copy the GitHub repository code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the image into the S3 bucket, and make it publicly readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the image) and use the CloudFront URL to update the code in /var/www/html.

Let's get started

This is an updated version of my AWS task 1. I have created the same setup as before, with one small difference: this time I have integrated EFS instead of EBS.

Amazon Elastic File System provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily. The service manages all the file storage infrastructure for you, meaning that you can avoid the complexity of deploying, patching, and maintaining complex file system configurations.

The major difference between the two is that an EBS volume can be attached to only a single instance at a time, whereas an EFS file system can be mounted by multiple instances at the same time.

Before moving on to the task, we need to know some Terraform basics:

terraform init - installs the plugins required by the provider
terraform validate - checks the configuration for syntax errors
terraform plan - creates an execution plan showing what will change
terraform apply - builds the resources described in the code
terraform destroy - destroys all the managed resources in a single command

Create a separate folder for the web page code, and inside it create a Terraform file with the .tf extension. After writing it, initialize the folder with terraform init so that the required plugins are downloaded for that particular directory.

Before that, log in to your AWS account from the CLI (for example with aws configure --profile satvi) and fill in the necessary credentials; the provider block below refers to this profile.


The provider block tells Terraform which cloud provider to use and which AWS profile and region to work with; terraform init will then download all the plugins that this provider needs.

provider "aws" {
profile = "satvi"
region = "ap-south-1"
}
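
As an optional extra (not part of the original write-up), you can pin the Terraform and provider versions so that terraform init always fetches a plugin compatible with this configuration. A minimal sketch using the Terraform 0.12-style syntax; the version numbers here are assumptions:

terraform {
  # Hypothetical constraints; use whatever versions you have tested against
  required_version = ">= 0.12"

  required_providers {
    aws = "~> 2.70"
  }
}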

Since we are using this directory for the first time, we need to initialize it with the following command:

terraform init

Next we create the security group for the instance so that clients can reach it from other devices. AWS applies default security settings that block inbound connections from outside, acting like a firewall, so we need to open the required TCP ports. Here I'm giving access to the SSH, HTTP and NFS services on their respective ports 22, 80 and 2049.

# -- Creating Security Group
resource "aws_security_group" "sg" {
  name        = "task2-sg"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-f6829f9e"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "task2-sg"
  }
}
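
The prose above mentions NFS on port 2049 as well, but the snippet only opens ports 22 and 80. Since the EFS mount target created later reuses this same security group, an extra ingress rule along these lines would be needed for the NFS mount to work (a sketch to add inside the aws_security_group.sg block above):

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }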

Next we launch the instance with the key pair and the security group created above. To connect to the instance we need to specify the path of the private key and the public IP of the instance. A remote-exec provisioner starts working once the instance is launched and downloads and installs all the required packages.

# -- Creating EC2 instance
resource "aws_instance" "web_server" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"

  root_block_device {
    volume_type           = "gp2"
    delete_on_termination = true
  }

  key_name        = "key22"
  security_groups = [aws_security_group.sg.name]

  # Terraform connects to the instance over SSH to run the provisioner
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sathvikakolisetty/Downloads/key22.pem")
    host        = self.public_ip
  }

  # Install the web server, git and the EFS mount helper used later
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git amazon-efs-utils -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "task2_os"
  }
}
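
The configuration above assumes a key pair named key22 already exists in AWS and that its private key is saved locally. If you would rather have Terraform create the key pair as well, here is a rough sketch (the resource names and file path are assumptions, not from the original article):

# Hypothetical sketch: let Terraform generate the key pair instead of reusing key22
resource "tls_private_key" "deploy_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "deploy_key" {
  key_name   = "key22"
  public_key = tls_private_key.deploy_key.public_key_openssh
}

# Save the private key locally so the connection blocks can read it
resource "local_file" "private_key" {
  content         = tls_private_key.deploy_key.private_key_pem
  filename        = "key22.pem"
  file_permission = "0400"
}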

Now we will create our EFS file system. EFS needs a VPC and subnet to attach a mount target to; since we haven't specified one, the default VPC is used. Once the file system is created we create a mount target, mount the EFS volume on the /var/www/html directory, and clone all the required code from GitHub into it.

# -- Creating EFS volume
resource "aws_efs_file_system" "efs" {
  creation_token   = "efs"
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
  encrypted        = true

  tags = {
    Name = "Efs"
  }
}
# -- Mounting the EFS volume
resource "aws_efs_mount_target" "efs-mount" {
  depends_on = [
    aws_instance.web_server,
    aws_security_group.sg,
    aws_efs_file_system.efs,
  ]

  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = aws_instance.web_server.subnet_id
  security_groups = [aws_security_group.sg.id]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sathvikakolisetty/Downloads/key22.pem")
    host        = aws_instance.web_server.public_ip
  }

  # Mount the file system on the web root, make the mount persistent,
  # then pull the web page code from GitHub
  provisioner "remote-exec" {
    inline = [
      "sudo mount -t efs ${aws_efs_file_system.efs.id}:/ /var/www/html",
      "echo '${aws_efs_file_system.efs.id}:/ /var/www/html efs defaults,_netdev 0 0' | sudo tee -a /etc/fstab",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/satvikakolisetty/cloudtask2.git /var/www/html/"
    ]
  }
}
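
A note on the mount command: mount -t efs relies on the amazon-efs-utils helper installed by the instance provisioner. If you prefer plain NFS instead, the same mount can be done over NFSv4 using the file system's regional DNS name; a hedged sketch of what that provisioner could look like (the DNS name format assumes the ap-south-1 region used here):

  # Hypothetical alternative: mount over plain NFSv4 without amazon-efs-utils
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y nfs-utils",
      "sudo mount -t nfs4 -o nfsvers=4.1 ${aws_efs_file_system.efs.id}.efs.ap-south-1.amazonaws.com:/ /var/www/html",
    ]
  }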

Now I will create an S3 bucket in the same region and upload my image to it.

# -- Creating S3 Bucket
resource "aws_s3_bucket" "mybucket" {
  bucket = "satvi112233"
  acl    = "public-read"
  region = "ap-south-1"

  tags = {
    Name = "satvi112233"
  }
}
# -- Uploading files in S3 bucket
resource "aws_s3_bucket_object" "file_upload" {
  depends_on = [
    aws_s3_bucket.mybucket,
  ]

  bucket = "satvi112233"
  key    = "hybrid.png"
  source = "C:/Users/sathvikakolisetty/Desktop/terraform/hybrid.png"
  acl    = "public-read"
}
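
One optional refinement, not in the original code: the upload above does not set a Content-Type, so S3 stores the object with a generic binary type. The same resource could set content_type so S3 and CloudFront serve the file explicitly as a PNG image:

# Hypothetical refinement of the upload resource above, adding an explicit content type
resource "aws_s3_bucket_object" "file_upload" {
  depends_on   = [aws_s3_bucket.mybucket]
  bucket       = "satvi112233"
  key          = "hybrid.png"
  source       = "C:/Users/sathvikakolisetty/Desktop/terraform/hybrid.png"
  acl          = "public-read"
  content_type = "image/png"
}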

In the last step, we create the CloudFront distribution that picks up the content from the S3 bucket and delivers it to clients through the nearest edge location whenever they hit the site.

resource "aws_cloudfront_distribution" "s3_distribution" {
depends_on = [
aws_efs_mount_target.efs-mount,
aws_s3_bucket_object.file_upload,
]
origin {
domain_name = "${aws_s3_bucket.mybucket.bucket}.s3.amazonaws.com"
origin_id = "ak"
}
enabled = true
is_ipv6_enabled = true
default_root_object = "index.html"
restrictions {
geo_restriction {
restriction_type = "none"
}
}
default_cache_behavior {
allowed_methods = ["HEAD", "GET"]
cached_methods = ["HEAD", "GET"]
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
default_ttl = 3600
max_ttl = 86400
min_ttl = 0
target_origin_id = "ak"
viewer_protocol_policy = "allow-all"
}
price_class = "PriceClass_All"viewer_certificate {
cloudfront_default_certificate = true
}
}
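
Before wiring the CloudFront URL into the web page, it can be handy to expose the generated endpoints as outputs so they are printed after terraform apply; a small optional addition, not in the original article:

# Optional outputs so the important endpoints are printed after "terraform apply"
output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}

output "web_server_public_ip" {
  value = aws_instance.web_server.public_ip
}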
# -- Updating cloudfront_url to main location
resource "null_resource" "nullremote3" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]

Continuing inside the same null_resource, we connect to the instance and append the CloudFront URL of the image stored in the S3 bucket to /var/www/html/index.html; a second null_resource then automatically opens the site in the Chrome browser.

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sathvikakolisetty/Downloads/key22.pem")
    host        = aws_instance.web_server.public_ip
  }

  # Append an <img> tag pointing at the CloudFront URL of the uploaded image
  provisioner "remote-exec" {
    inline = [
      "sudo su <<END",
      "echo \"<img src='http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.file_upload.key}' height='1000' width='250'>\" >> /var/www/html/index.html",
      "END",
    ]
  }
}
# -- Starting chrome for output
resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullremote3,
  ]

  provisioner "local-exec" {
    command = "start chrome ${aws_instance.web_server.public_ip}/index.html"
  }
}

We are now done with all the required steps. To build the setup, save the complete code and run the following commands; the entire infrastructure will then be ready.

$ terraform plan # review the execution plan

$ terraform apply -auto-approve # build the infrastructure

$ terraform destroy -auto-approve # destroy everything that was created
