
Building scalable and safe infrastructure layout with Terraform

Posted on 10.12.2022
Last updated on 04.12.2024
Image by Ricardo Gomez Angel on Unsplash

In every forward-thinking organisation, infrastructure is defined as code and maintained automatically. If your DevOps engineers are still clicking through the cloud provider's GUI, they are years behind.

The most popular tool for managing infrastructure is Terraform. However, since a cloud-native project typically has many microservices, the way the infra is defined should be scalable and safe. Let me share the way infrastructure is laid out in my projects. I can't say it is the best approach, but it has definitely proven quite effective.

# Repository

First of all, I typically create a separate repository for storing the infra. Some keep the infrastructure definition alongside the application source code, but in my opinion this is not effective when there are multiple microservices in the cloud sharing common resources.

# File structure

The repository contains a set of files and folders that can be schematically described as follows:

```
terraform/
├── base/
│   ├── terraform.tf
│   ├── variables.tf
│   ├── output.tf
│   └── ...
├── modules/
│   └── [service_name]/
│       ├── terraform.tf
│       ├── variables.tf
│       ├── data.tf
│       ├── output.tf
│       └── ...
└── [env]/
    ├── base/
    │   ├── terraform.tf
    │   ├── variables.tf
    │   ├── data.tf
    │   ├── output.tf
    │   └── ...
    ├── common/
    │   └── [zone_name]/
    │       ├── terraform.tf
    │       ├── variables.tf
    │       ├── data.tf
    │       ├── output.tf
    │       └── ...
    └── services/
        └── [service_name]/
            └── [zone_name]/
                ├── terraform.tf
                ├── data.tf
                └── [region_name].tf
```
The code is licensed under the MIT license

As you can see, every sub-folder has its own terraform.tf file.

Indeed, every folder declares a separate Terraform setup. The benefits of such a solution lie on the surface:

  • Every team has its own scope to work in. Ownership of the resources is thus separated, which leads to a safer environment.
  • terraform plan and apply are executed faster and more safely, without any danger of affecting other teams' resources.
  • Clear separation between "system" resources and "team" resources.

Let me quickly explain the purpose of each folder.

  • base contains global resources common to all environments, regions and zones. It may include IAM, variables shared across the entire project, etc.
  • modules
    • [service_name] contains resource definitions for every particular service of the cloud-native application.
  • [env] contains environment-specific resources; it can be live, stg, qa, etc.
    • base is located inside every [env] folder and contains environment-specific, but zone- and region-agnostic, resources.
    • common
      • [zone_name] contains zone-specific resources, such as, for instance, a k8s cluster setup.
    • services
      • [service_name]
        • [zone_name] contains the region- and zone-based invocation of a particular service module.
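To make the layout more tangible, here is a minimal sketch of what a service module under modules/ might contain. The service name, the variables and the S3 bucket below are hypothetical, not part of the original layout:

```hcl
# modules/some-application/variables.tf (hypothetical service)
variable "environment" {
  type = string
}

variable "region" {
  type = string
}

# modules/some-application/main.tf — an illustrative resource;
# the bucket name is made up for this example
resource "aws_s3_bucket" "assets" {
  bucket = "some-application-assets-${var.environment}-${var.region}"
}
```

Each environment then instantiates this module per zone and region, as shown later in the article.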

# Basic files

Every setup may have the following files:

# terraform.tf

Defines the Terraform setup and the backend for it. Rule number one: store your Terraform state remotely, and enable backups. A wise man once told me: "If you lose your Terraform state, you pretty much wanna kill yourself."

The key property always equals the path of the corresponding folder: for infra it is infra, for live/base it is live/base, and so on.

👉 📃  terraform.tf
```hcl
terraform {
  backend "s3" {
    bucket = "my-project-terraform-state"
    key    = "infra"
    region = "eu-central-1"
  }
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
```
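For a nested setup such as live/base, only the key changes — a sketch assuming the same bucket and region as above:

```hcl
# live/base/terraform.tf — the key mirrors the folder path
terraform {
  backend "s3" {
    bucket = "my-project-terraform-state"
    key    = "live/base"
    region = "eu-central-1"
  }
}
```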

# variables.tf

Contains input variables and locals.

👉 📃  variables.tf
```hcl
locals {
  region         = "eu-central-1"
  godaddy_dns_ip = ["76.76.21.21"]
}

variable "whatever" {
  type = string
}
```

# output.tf

Contains the output, should there be any.

👉 📃  output.tf
```hcl
output "output_name" {
  value = someresource.someoutput
}
```

# data.tf

And yes, here comes the best part. You may ask: "If every part is isolated, how can I forward the output of one setup into another?"

Remember that key property we defined? Since the state is stored remotely, we can address it, knowing its key and the name of the output.

So, if we want to reference another Terraform setup, we just define a terraform_remote_state data source:

👉 📃  data.tf
```hcl
data "terraform_remote_state" "infra" {
  backend = "s3"
  config = {
    bucket = "my-project-terraform-state"
    key    = "infra"
    region = "eu-central-1"
  }
}
```

And then, anywhere in the scope of this setup, reference its output:

```hcl
some_prop = data.terraform_remote_state.infra.outputs.output_name
```
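As a fuller sketch, such an output can feed any resource argument. The zone_id output and the DNS record below are purely hypothetical:

```hcl
# Hypothetical example: reusing a Route53 zone ID exported by the "infra" setup
resource "aws_route53_record" "app" {
  zone_id = data.terraform_remote_state.infra.outputs.zone_id
  name    = "app.example.com"
  type    = "A"
  ttl     = 300
  records = ["203.0.113.10"]
}
```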

# Region-based service deployments

As mentioned above, I keep every service definition in a separate module. That's why we can multiply the resources on a regional and zonal basis by introducing files such as [env]/services/[service_name]/[zone_name]/[region_name].tf:

👉 📃  live/services/some_application/eu/eu-central-1.tf
```hcl
module "some_application_live_eu_central_1" {
  source      = "../../../../modules/some-application"
  environment = "live"
  region      = "eu-central-1"
  some_prop   = data.terraform_remote_state.infra.outputs.output_name
}
```

# Initializing and applying changes

Terraform must be run separately in every folder:

```sh
terraform init
terraform plan
terraform apply
```

To apply changes granularly, for instance for a specific region only, the -target argument can be used:

```sh
terraform apply -target=module.some_application_live_eu_central_1
```

# Going further

If Kubernetes is used, a helm/ folder may be created alongside the terraform/ folder. There you can keep your Helm charts, and every time a non-k8s resource is updated, the Terraform local_file resource can be used to reflect the change in your charts.
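A minimal sketch of that idea, assuming a hypothetical chart path and value names (the referenced resources are also made up for this example):

```hcl
# Render a Helm values file whenever the referenced resources change.
# The chart path and the keys below are hypothetical.
resource "local_file" "some_application_values" {
  filename = "${path.module}/../helm/some-application/values.generated.yaml"
  content = yamlencode({
    database_host = aws_db_instance.main.address
    assets_bucket = aws_s3_bucket.assets.bucket
  })
}
```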

The updated charts then can be picked up by Spinnaker for deployment.

That's all, folks! Hope this was helpful :)



Sergei Gannochenko

Business-oriented fullstack engineer, in ❤️ with Tech.
Golang, React, TypeScript, Docker, AWS, Jamstack.
19+ years in dev.