Introduction to Terraform:
Terraform is a tool for building, changing, and versioning infrastructure safely, accurately, and efficiently. It helps with multi-cloud setups by providing one workflow for all clouds. The infrastructure Terraform manages can be hosted on public clouds like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, or on-premises in private clouds such as VMware vSphere, OpenStack, or CloudStack. Terraform treats infrastructure as code (IaC), so you rarely have to worry about your infrastructure drifting away from its desired configuration. If you like what you are hearing about Terraform, then this course is for you!
With a terminal, a few lines of code, and no SSH logins, we can spin up a production-ready Kubernetes cluster, capable of heavy workloads, in minutes.
This is where Terraform comes in: a potent tool written in Go that helps us write, plan, and create highly predictable, stateful infrastructure as code.
Stage 1: Terraform State
Terraform keeps track of the state of the managed infrastructure and its configuration. By default, the state is stored locally in a JSON-formatted file named terraform.tfstate. From the first deployment to the latest change of our infrastructure’s plan, every chunk of data populates the state file, allowing Terraform to map real-world resources to the configuration and to keep track of metadata. This feature brings enormous benefits and opens up scenarios like versioning, debugging, performance monitoring, rollbacks, rolling updates, immutable deployments, traceability, and self-healing capabilities, among others.
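To make the structure concrete, the sketch below writes a minimal, hypothetical terraform.tfstate (made-up values, not a real deployment) and pulls out two of the fields Terraform relies on:

```shell
# Write a minimal, hypothetical terraform.tfstate to illustrate its shape.
cat > /tmp/terraform.tfstate <<'EOF'
{
  "version": 4,
  "terraform_version": "1.5.0",
  "serial": 7,
  "lineage": "3f2a9c1e-hypothetical-lineage",
  "outputs": {},
  "resources": []
}
EOF

# "serial" increments on every state change, which is what enables
# versioning and rollbacks; "lineage" ties all snapshots of one state
# together; "resources" holds the mapping to real-world objects.
grep -E '"(serial|lineage)"' /tmp/terraform.tfstate
```

In a real project the resources array is populated with every object Terraform manages, which is why the file grows with your infrastructure.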
Stage 2: Terraform Backends
A “backend” in Terraform is an abstraction that determines how the state is handled and how certain operations are executed, enabling many essential features.
Backends can store the state remotely and protect it with locks to prevent corruption; this makes it possible for a team to collaborate with ease or, for instance, to run Terraform within a pipeline.
Better protection for sensitive data: when the state is retrieved from a backend, it is kept in memory and never persisted to a local drive. Even though memory is still susceptible to exploits, this makes it considerably harder for bad actors to exfiltrate or corrupt the content held within the state. However, a misconfiguration of the remote storage can equally lead to breaches.
Remote operations: for large infrastructures or when pushing specific changes, terraform apply can take quite a long time, and typically the client cannot disconnect for the duration of the deployment without compromising the run. Some of the backends available in Terraform, however, can execute operations remotely, allowing the client to go offline safely. Along with remote state storage and locking, remote operations are a big help for teams and more structured environments.
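One backend that supports remote operations is Terraform Cloud’s remote backend. A minimal sketch, assuming hypothetical organization and workspace names (this is a configuration fragment, not something specific to the S3 example that follows):

```hcl
terraform {
  backend "remote" {
    # Runs plan/apply on Terraform Cloud, so the local client can disconnect.
    hostname     = "app.terraform.io"
    organization = "<your_org>"

    workspaces {
      name = "<your_workspace>"
    }
  }
}
```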
Stage 3: Configuration
Backend configuration resides in Terraform files, written in HCL syntax, within the terraform block. The following simplified snippet shows how a remote backend can be enabled by leveraging an AWS S3 bucket, where terraform.tfstate persists.
```hcl
terraform {
  backend "s3" {
    bucket = "<your_bucket_name>"
    key    = "terraform.tfstate"
    region = "<your_aws_region>"
  }
}
```
When configuring a non-default backend for the first time, Terraform offers to migrate the current state to the new backend, so the existing data is transferred and no information is lost.
It is recommended, though not mandatory, to manually back up the state before initializing any new backend with terraform init. It is enough to copy the state file outside the scope of the project, into a different folder. The initialization process should create a backup as well.
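A minimal backup sketch, assuming it runs from the project root where terraform.tfstate lives (the backup folder name is an arbitrary choice):

```shell
# Copy the current state outside the project before running terraform init.
backup_dir="${HOME}/tfstate-backups"
mkdir -p "${backup_dir}"

if [ -f terraform.tfstate ]; then
  cp terraform.tfstate "${backup_dir}/terraform.tfstate.$(date +%Y%m%d-%H%M%S)"
  echo "state backed up to ${backup_dir}"
else
  echo "no local state found in $(pwd)"
fi
```

The timestamp suffix keeps multiple backups side by side instead of overwriting the previous one.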
Now it is time to code, and demonstrate how to set up a remote backend with a real-life example.
Stage 4: The AWS S3 Backend
Setting up a Terraform backend is relatively easy. Let’s see how to implement one with AWS S3.
To run the example code, be sure to have AWS IAM credentials available with enough permissions to create and delete S3 buckets and put bucket policies.
Let’s create a bucket named terraform-backend-store, for instance in the region eu-west-1 (EU, Ireland). To do so, open a terminal and run the following command:
```shell
aws s3api create-bucket --bucket terraform-backend-store \
  --region eu-west-1 \
  --create-bucket-configuration \
  LocationConstraint=eu-west-1
# Output:
# {
#   "Location": "http://terraform-backend-store.s3.amazonaws.com/"
# }
```
Once the bucket is in place, it needs a proper configuration.
For a bucket that holds Terraform state, it’s a good idea to enable server-side encryption. To do so, and to keep it simple, let’s get back to the terminal and set the server-side encryption to AES256 (although it’s out of scope for this story, I recommend using KMS and implementing proper key rotation):
```shell
aws s3api put-bucket-encryption \
  --bucket terraform-backend-store \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
# Output: expect none when the command is executed successfully
```
Next, it’s important to restrict access to the bucket.
Create an unprivileged IAM user:
```shell
aws iam create-user --user-name terraform-deployer
# Output:
# {
#   "User": {
#     "UserName": "terraform-deployer",
#     "Path": "/",
#     "CreateDate": "2019-01-27T03:20:41.270Z",
#     "UserId": "AIDAIOSFODNN7EXAMPLE",
#     "Arn": "arn:aws:iam::123456789012:user/terraform-deployer"
#   }
# }
```
Attach two AWS-managed policies to the new user, AmazonS3FullAccess and AmazonDynamoDBFullAccess (the latter will be needed for the locking table later on):
```shell
aws iam attach-user-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
  --user-name terraform-deployer
# Output: expect none when the command execution is successful

aws iam attach-user-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess \
  --user-name terraform-deployer
# Output: expect none when the command execution is successful
```
Back to our bucket now. The freshly created IAM user must be allowed to execute the necessary actions, so the Terraform backend runs smoothly.
Let’s create a policy file as follows:
```shell
cat <<-EOF > policy.json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/terraform-deployer"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::terraform-backend-store",
        "arn:aws:s3:::terraform-backend-store/*"
      ]
    }
  ]
}
EOF
```
Back in the terminal, run the command shown below to enforce the policy on our bucket:
```shell
aws s3api put-bucket-policy \
  --bucket terraform-backend-store \
  --policy file://policy.json
# Output: none
```
As the last step, we enable the bucket’s versioning:
```shell
aws s3api put-bucket-versioning \
  --bucket terraform-backend-store \
  --versioning-configuration Status=Enabled
```
The AWS S3 bucket is ready; time to integrate it with Terraform. Listed below is the minimal configuration required to set up this remote backend:
```hcl
# terraform.tf
terraform {
  backend "s3" {
    bucket  = "terraform-backend-store"
    encrypt = true
    key     = "terraform.tfstate"
    region  = "eu-west-1"
  }
}

# ... the rest of your configuration and resources to deploy
```
The remote backend is now ready for a ride; take it for a test.
Locking Process
Terraform uses an AWS DynamoDB table to set and release the locks.
We can create the table resource using terraform itself:
```hcl
# create-dynamodb-lock-table.tf
resource "aws_dynamodb_table" "dynamodb-terraform-state-lock" {
  name           = "terraform-state-lock-dynamo"
  hash_key       = "LockID"
  read_capacity  = 20
  write_capacity = 20

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name = "DynamoDB Terraform State Lock Table"
  }
}
```
and deploy it:
```shell
terraform plan -out "planfile"
terraform apply -input=false -auto-approve "planfile"
```
Once the command execution is complete, the locking mechanism must be added to our backend configuration as follows:
```hcl
# terraform.tf
terraform {
  backend "s3" {
    bucket         = "terraform-backend-store"
    encrypt        = true
    key            = "terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-state-lock-dynamo"
  }
}

# ... the rest of your configuration and resources to deploy
```
Conclusion:
We have seen how to store Terraform state remotely in an S3 bucket, protect it with server-side encryption and versioning, and guard concurrent runs with a DynamoDB lock table. To know more about our services, or for more commands that make your day-to-day work easier, leave us a comment and we will share the details in our next article.