
HAWK-Rust Series: Automate Infrastructure using Terraform

source link: https://blog.knoldus.com/hawk-series-automate-infrastructure-using-terraform/


Reading Time: 3 minutes

HAWK is a Rust-based image-recognition project that implements two-factor authentication, using an RFID card for user identification and an image for user validation.
In this project we have used AWS services, and the whole AWS infrastructure the project requires is automated with Terraform (a tool for building, changing, and versioning infrastructure safely and efficiently).

HAWK with Terraform

Terraform enables a user to define and provision data-center infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL). It is used to create, manage, and update infrastructure resources, and almost any infrastructure type can be represented as a resource in Terraform.
It provides features such as:

  • Infrastructure as Code
  • Execution Plan
  • Resource Graph
  • Change Automation

Terraform Configuration

The set of files used to describe infrastructure in Terraform is simply known as a Terraform configuration, or Terraform scripts.
So, let's start with a basic example: creating an S3 bucket and uploading an object into it.

provider "aws" {
  access_key = "accesskey"
  secret_key = "secretkey"
  region     = "region"
}

resource "aws_s3_bucket" "example" {
  bucket = "knoldus-test-bucket"
}

resource "aws_s3_bucket_object" "image-file" {
  bucket = "${aws_s3_bucket.example.id}"
  key    = "image"
  source = "hawk.jpg"
  etag   = "${md5(file("hawk.jpg"))}"
}

In the above configuration or script, we have used two types of blocks:

  • provider
  • resource

provider:

A provider is responsible for understanding API interactions and exposing resources. Providers generally are:

  • IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack)
  • PaaS (e.g. Heroku)
  • SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare)

In the above example, we have used AWS, which belongs to IaaS.
The Amazon Web Services (AWS) provider is used to interact with the many resources supported by AWS, and it needs to be configured with the proper credentials before it can be used.
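
Hardcoding keys in the provider block is fine for a quick demo, but the AWS provider can also read credentials from a shared credentials file or from environment variables. A minimal sketch, assuming a default profile exists in ~/.aws/credentials (the region here is illustrative):

provider "aws" {
  region  = "us-east-1"   # illustrative region
  profile = "default"     # named profile from ~/.aws/credentials
}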

resource:

Resources are the most important element in the Terraform language; each resource block describes one or more infrastructure objects.

We have used two resources:

  • aws_s3_bucket
  • aws_s3_bucket_object

In aws_s3_bucket, there are fields like:

  • resource id ("example") – an identifier for the resource
  • bucket ("knoldus-test-bucket") – the name of the bucket

In aws_s3_bucket_object, there are parameters like:

  • resource id ("image-file") – an identifier for the resource
  • bucket ("${aws_s3_bucket.example.id}") – the bucket the object will be uploaded to
  • key ("image") – the name of the object in the bucket
  • source ("hawk.jpg") – the local path of the file to upload
  • etag ("${md5(file("hawk.jpg"))}") – an MD5 hash of the file, used to detect content changes and trigger a re-upload
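
Note how bucket references the id attribute of the aws_s3_bucket resource; any attribute a resource exports can be referenced this way. As an illustration that is not part of the original script, an output block could expose the uploaded object's URL after apply:

output "image_url" {
  # bucket_domain_name is an attribute exported by aws_s3_bucket
  value = "https://${aws_s3_bucket.example.bucket_domain_name}/${aws_s3_bucket_object.image-file.key}"
}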

Terraform in HAWK

HAWK uses several AWS services:

  • Lambda functions
  • S3 buckets
  • API Gateway

HAWK never makes a user configure all these AWS services manually; that is why we wrote Terraform scripts to automate the AWS infrastructure. These scripts stand up the whole infrastructure in one go, using a few Terraform commands:

terraform init -> terraform plan -> terraform apply

terraform init

This is the first command to run for a new configuration. It initializes various local settings and data that subsequent commands will use, and it automatically downloads and installs the binary for every provider in use within the configuration.

terraform plan

This is the second command to run. It creates an execution plan, showing the actions Terraform will take to reach the desired state, without actually making any changes.

terraform apply

It applies the changes required to reach the desired state of the configuration, i.e. the set of actions generated by the execution plan.
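
Putting the three commands together, a typical run looks like this (saving the plan with -out is optional, but it guarantees that apply executes exactly the actions that were reviewed):

terraform init
terraform plan -out=tfplan
terraform apply tfplan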

HAWK Terraform scripts flow

Terraform scripts flow used by HAWK

Here we wrote the following scripts:

  • iam.tf
  • bucket.tf
  • lambda.tf
  • api-gateway.tf

iam.tf

Creates an IAM role and attaches a policy to it.
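
A minimal sketch of what iam.tf might look like, assuming a Lambda execution role with AWS's managed logging policy attached (the role name is illustrative, not the exact HAWK configuration):

# hypothetical role name; lets the Lambda service assume this role
resource "aws_iam_role" "hawk_lambda_role" {
  name = "hawk-lambda-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

# attach a managed policy so the function can write CloudWatch logs
resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = "${aws_iam_role.hawk_lambda_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}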

bucket.tf

Creates two buckets (a clicked-image bucket and a reference bucket) and adds policies to them; once the policies are set, the jar file is uploaded to a bucket.
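
A minimal sketch of bucket.tf (the bucket names and jar path are hypothetical, and the bucket-policy wiring is omitted for brevity):

# hypothetical bucket for images clicked by the camera
resource "aws_s3_bucket" "clicked_image" {
  bucket = "hawk-clicked-image-bucket"
}

# hypothetical bucket for reference images
resource "aws_s3_bucket" "reference_image" {
  bucket = "hawk-reference-image-bucket"
}

# upload the packaged jar once the bucket exists
resource "aws_s3_bucket_object" "jar" {
  bucket = "${aws_s3_bucket.reference_image.id}"
  key    = "hawk.jar"
  source = "target/hawk.jar"   # hypothetical local path
}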

lambda.tf

lambda.tf creates a Lambda function and configures it by attaching the IAM role, setting environment variables, and pointing it at the uploaded jar file.
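
A minimal sketch of lambda.tf, reusing the role and buckets from the sketches above (the function name, handler, and environment variable are illustrative assumptions):

resource "aws_lambda_function" "hawk" {
  function_name = "hawk-recognition"                      # hypothetical name
  role          = "${aws_iam_role.hawk_lambda_role.arn}"  # IAM role from iam.tf
  handler       = "com.knoldus.Handler::handleRequest"    # hypothetical handler
  runtime       = "java8"

  # take the deployment package from the jar uploaded by bucket.tf
  s3_bucket = "${aws_s3_bucket.reference_image.id}"
  s3_key    = "${aws_s3_bucket_object.jar.key}"

  environment {
    variables = {
      CLICKED_BUCKET = "${aws_s3_bucket.clicked_image.id}"  # hypothetical variable
    }
  }
}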

api-gateway.tf

api-gateway.tf puts an API Gateway in front of the Lambda function, exposing an endpoint for the Lambda to the outside world.
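
Wiring API Gateway to a Lambda function takes a handful of resources. A minimal sketch, again with illustrative names and paths rather than the exact HAWK configuration:

# the REST API itself (hypothetical name)
resource "aws_api_gateway_rest_api" "hawk" {
  name = "hawk-api"
}

# a /validate path under the API root (hypothetical path)
resource "aws_api_gateway_resource" "validate" {
  rest_api_id = "${aws_api_gateway_rest_api.hawk.id}"
  parent_id   = "${aws_api_gateway_rest_api.hawk.root_resource_id}"
  path_part   = "validate"
}

resource "aws_api_gateway_method" "post" {
  rest_api_id   = "${aws_api_gateway_rest_api.hawk.id}"
  resource_id   = "${aws_api_gateway_resource.validate.id}"
  http_method   = "POST"
  authorization = "NONE"
}

# proxy requests straight to the Lambda function
resource "aws_api_gateway_integration" "lambda" {
  rest_api_id             = "${aws_api_gateway_rest_api.hawk.id}"
  resource_id             = "${aws_api_gateway_resource.validate.id}"
  http_method             = "${aws_api_gateway_method.post.http_method}"
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = "${aws_lambda_function.hawk.invoke_arn}"
}

# allow API Gateway to invoke the function
resource "aws_lambda_permission" "apigw" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.hawk.function_name}"
  principal    = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.hawk.execution_arn}/*/*"
}

# deploy the API to a stage, yielding the public endpoint
resource "aws_api_gateway_deployment" "prod" {
  depends_on  = ["aws_api_gateway_integration.lambda"]
  rest_api_id = "${aws_api_gateway_rest_api.hawk.id}"
  stage_name  = "prod"
}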

Thank you for reading this blog.
For more reference, or to contribute to our open-source project HAWK, you can visit here.

