Terraform Deployment using Docker Compose
So you’re probably wondering…why exactly would I use Docker Compose to deploy my Terraform resources? That’s a fair question. Normally you’re better off typing terraform apply and deploying into your preferred cloud. But what about an air-gapped network that only has Docker and Docker Compose installed?
I’ve been in this situation: due to previous programs and personnel changes, the environment used for deploying critical applications was lacking the necessary tools, i.e. Terraform. As the DevOps engineer, it was my job to make sure that the artifacts generated from the GitLab CI/CD pipeline were transferred over to the OPS team in the air-gapped environment, and to address any issues that came out of testing.
Gameplan
To get started, we need to understand the architecture of this deployment strategy. We can divide this section into the Terraform Container Image and the Docker Compose Environment. For the Terraform Container Image, we need access to the internet as well as a public AWS account so we can pull the Terraform provider and modules and use an S3 bucket for initialization. I’m going to assume you have your AWS credentials (access key and secret key) set up, but if not you can use the aws configure command.
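If you still need to set them up, a minimal session looks like this (the values shown are placeholders):

aws configure
# AWS Access Key ID [None]: <ACCESS KEY>
# AWS Secret Access Key [None]: <SECRET KEY>
# Default region name [None]: us-east-1
# Default output format [None]: json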
We also need to take passing variables into consideration. Because the Terraform resources are in individual containers, any exported variables that other resources need will have to be stored somewhere shared. For this, we will be using Terraform outputs and Redis. Terraform outputs will emit the particular values that other resources need, and we can store them in Redis as key-value pairs. From there, the other containers can do a redis GET on the output value and run export TF_VAR_<name> to expose it as a Terraform variable.
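As a quick sketch of the pattern (vpc_id here is a hypothetical output name, and <REDIS HOST> is a placeholder):

# Producer container: publish a terraform output to redis
redis-cli -h <REDIS HOST> SET vpc_id $(terraform output -raw vpc_id)
# Consumer container: read it back and expose it to terraform
export TF_VAR_vpc_id=$(redis-cli -h <REDIS HOST> GET vpc_id)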
Terraform Base Image
Let’s create our Dockerfile for the Terraform Base Image first. You can use the touch command to create a file called Dockerfile. I’m using the latest version of Alpine as the base for installing Terraform and the redis CLI.
FROM alpine:latest
For installing packages in Alpine, we use the apk add command. To install the redis CLI, we can write apk --update add redis in the Dockerfile. I’m also installing bash so that we have a proper shell for the ENTRYPOINT of our containers.
RUN apk --update add redis bash
To install Terraform in Alpine, we need to grab a zip file release from the HashiCorp website. We can see those releases here. To download a specific release, I am going to use the wget command. Putting it together, the command would be
RUN wget https://releases.hashicorp.com/terraform/1.10.5/terraform_1.10.5_linux_amd64.zip
I’m downloading version 1.10.5 because, at the time of writing, it is the latest version of Terraform.
Once we get the zip file, we need to extract it and put the binary in the /usr/local/bin directory, which is on our PATH. We can also remove the zip file when we’re done. Finally, I’m creating a directory where all of our Terraform files will reside.
RUN unzip terraform_1.10.5_linux_amd64.zip
RUN mv terraform /usr/local/bin
RUN rm terraform_1.10.5_linux_amd64.zip
WORKDIR /terraform
To put it all together, the Dockerfile would look like this:
FROM alpine:latest
RUN apk --update add redis bash
RUN wget https://releases.hashicorp.com/terraform/1.10.5/terraform_1.10.5_linux_amd64.zip
RUN unzip terraform_1.10.5_linux_amd64.zip
RUN mv terraform /usr/local/bin
RUN rm terraform_1.10.5_linux_amd64.zip
WORKDIR /terraform
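As an aside, if you care about image size you can collapse the download, extract, and cleanup steps into a single RUN layer, so the zip file never persists in an intermediate layer; a sketch:

RUN wget https://releases.hashicorp.com/terraform/1.10.5/terraform_1.10.5_linux_amd64.zip \
    && unzip terraform_1.10.5_linux_amd64.zip \
    && mv terraform /usr/local/bin \
    && rm terraform_1.10.5_linux_amd64.zip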
To build this particular image, we can use the docker build command with a tag of terraform:1.10.5.
docker build -t terraform:1.10.5 .
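To sanity-check the image, you can run terraform straight from it:

docker run --rm terraform:1.10.5 terraform version
# Terraform v1.10.5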
Terraform Initialization
Because we are deploying into an air-gapped environment, we will not be able to properly initialize the provider or get any module updates. These items need to be downloaded first and placed inside the container image.
The simplest way to get the provider details and the modules is to perform a terraform init, but we need to format our Terraform configuration file with some common fields and use certain flags on the terraform init command.
I like to put the terraform configurations in a file called terraform.tf.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.84.0"
    }
  }

  backend "s3" {
    key          = "<RESOURCE NAME>.tfstate"
    use_lockfile = true
    encrypt      = true
  }
}

provider "aws" {
  # Configuration options
}
Each resource would need a similar file with its own custom key name. If you notice, I do not set the bucket name or the region. I will be using flags on the terraform init command to set the backend configuration for the bucket and the region. The main reason is that in the air-gapped environment I don’t know what the bucket name is, nor do I know what region they are using. I also don’t want the operations team to have to go through the terraform.tf file and change those parameters. We can go ahead and save this file.
Now initialize the Terraform resources to download the provider and the modules. The command is:
terraform init -backend-config="bucket=<S3 BUCKET>" -backend-config="region=<REGION>"
where we need to give it an S3 bucket and a region. You want to execute this command in an environment that has internet access and a public cloud account. You can also perform a terraform apply if you want to test out your resources.
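After a successful init, the provider and any modules are cached in a .terraform directory next to your configuration. The layout looks roughly like this (the provider path will vary with version and platform):

.terraform/
├── modules/                # cached module sources, if any
└── providers/
    └── registry.terraform.io/hashicorp/aws/5.84.0/linux_amd64/...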
Terraform Resource Image
Let’s create another Dockerfile to add the Terraform resources into the image. We need to copy the .tf files and the .terraform directory. One thing to note is that I am not copying any .tfvars files; we will mount those in the docker compose file. This allows us to customize the .tfvars file locations as well as the names of the tfvars files.
FROM terraform:1.10.5
COPY *.tf .
COPY .terraform .terraform
Go ahead and build the container image using
docker build -t <NAME>:<TAG> .
To double-check that the files are inside the container, we need to run the image we just created. We can do that with a docker run command. I’m going to add some flags to help with cleanup and to create an interactive session inside the container.
docker run --rm -it <NAME>:<TAG>
Since the base image set WORKDIR /terraform, we should be dropped into the /terraform directory. Type ls -a to see all the files. You should see the .terraform directory and all the necessary .tf files. If you really want to test out the container, run the same terraform init command and then a terraform plan to see the output. Don’t forget to export any Terraform environment variables if your resources call for them.
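A quick session might look like this (the image tag and .tf file names follow the S3 example later in this post and are otherwise hypothetical):

$ docker run --rm -it s3:1.10.5
/terraform # ls -a
.  ..  .terraform  main.tf  outputs.tf  terraform.tf  variables.tf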
Docker Compose
Create a file called docker-compose.yml.
touch docker-compose.yml
This file will contain the necessary details for executing our Terraform resources. Let’s start off by writing the redis image and our networking. For our Terraform containers to talk to the redis container, we need to set up IP addresses for the containers. We can use a default /24 CIDR, which gives us around 253 usable IP addresses once Docker reserves the gateway, but depending on how many containers docker compose creates, we could size a smaller subnet instead. For the purposes of this example, let’s use 10.10.10.0/24 as our IP address range.
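If you want to right-size that subnet, the usual host-count arithmetic applies:

# usable hosts in a /n IPv4 subnet = 2^(32-n) - 2 (network + broadcast)
# /24 -> 2^8 - 2 = 254, minus the Docker gateway leaves 253 for containers
# /28 -> 2^4 - 2 = 14, still plenty for redis plus a handful of terraform services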
To set up the networking in the docker compose file, we need to add a network block.
networks:
  terraform-network:
    ipam:
      config:
        - subnet: "10.10.10.0/24"
We’re creating a network called terraform-network. By using IPAM (IP Address Management) we can set static IP addresses for our Terraform containers. Each resource then needs to be added to terraform-network with an IP address from that subnet.
Setting up redis is super simple. We need to create a services block, with redis and the remaining Terraform resources defined underneath. Make sure that the services block lines up with the networks block.
services:
  redis:
    image: redis:latest
    container_name: redis
    networks:
      terraform-network:
        ipv4_address: 10.10.10.2
For any Terraform resource, we can follow the template below:
service_name:
  image: <TERRAFORM_IMAGE>
  container_name: <CONTAINER_NAME>
  networks:
    terraform-network:
      ipv4_address: 10.10.10.3 # Or whatever final octet you want to use
  entrypoint: ["/bin/bash", "-c"]
  volumes:
    - type: bind
      source: <TFVARS LOCATION>
      target: /terraform/<TFVARS NAME>.tfvars
  depends_on:
    - redis
  command:
    - |
      export AWS_ACCESS_KEY_ID=
      export AWS_SECRET_ACCESS_KEY=
      export AWS_DEFAULT_REGION=
      # Example of REDIS CLI GET
      export TF_VAR_<variable>=$(redis-cli -h 10.10.10.2 -p 6379 GET <var>)
      terraform init -backend-config="bucket=<S3 BUCKET BACKEND>" -backend-config="region=<REGION>" -reconfigure -upgrade
      terraform apply --auto-approve -var-file=<TFVARS NAME>.tfvars
      # Example of REDIS CLI SET
      redis-cli -h 10.10.10.2 -p 6379 SET <var> $(terraform output -raw <TF OUTPUT>)
Let’s break down this section of code.
- service_name: Refers to the name of the particular docker compose service. We can name it after the Terraform resource (e.g. an S3 Terraform resource can have a service name of s3_service).
- container_name: Refers to the container name that gets created by docker compose.
- entrypoint: The entrypoint that allows us to run a set of commands inside the container. This gives us the ability to run terraform init, terraform apply, and other bash commands.
- volumes: This is where we mount the tfvars file for this specific Terraform resource. Remember, we created a working directory called /terraform where all of our Terraform files reside; this is where we bind mount that specific file.
- depends_on: For all resources at the initial stage, we want a dependency on the redis service, so that if redis fails to come up the dependent services never start. This field is also useful for orchestrating subsequent resources in a sequential pattern, as shown below.
- command: We can execute a series of commands to create our Terraform resource. Notice the beginning line of “- |”; this YAML block indicator means we have multiple commands that need to be executed.
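For that sequential pattern, compose’s long-form depends_on with a condition is the tool; a sketch (the s3_bucket service name comes from the example later):

depends_on:
  s3_bucket:
    condition: service_completed_successfully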
Command Breakdown
The first three lines are specifically for connecting to AWS. There are two methods for storing your AWS credentials: exporting them as environment variables, or using the aws configure command. Because our base image does not have the AWS CLI installed, we export our credentials as environment variables.
The two terraform commands are used for initializing the terraform providers and modules and applying them to AWS.
With redis, there are two main commands we are interested in: GET and SET. For exporting a Terraform environment variable, we use the redis-cli GET command on the specific key.
export TF_VAR_<variable>=$(redis-cli -h 10.10.10.2 -p 6379 GET <var>)
Notice how we pass in the IP address of the redis container with the -h (host) flag, along with its default port 6379 via -p.
ENV File
We can take the docker compose file one step further by using a dotenv file. This environment file is a key-value file that docker compose uses to set up its environment. For example, instead of hard-coding the AWS credentials like the access key, secret key, and the region, we can store them in a .env file. We can even store the location of the tfvars file, or the image and its tag. Please note that docker compose strictly looks for a file named .env; if you have another file that you want to use as an environment file, you need to pass the --env-file flag when running docker compose.
touch .env
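With a custom file name (terraform.env here is hypothetical), that looks like:

docker compose --env-file terraform.env up -d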
Example
In this example, I am going to create an S3 bucket image and an SSM image. The S3 bucket resource will output the bucket details and store them in redis, and the SSM resource will get those values from redis and export them as Terraform variables. Then we use docker compose to orchestrate the creation of these resources.
S3 Image
Terraform Configuration File
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.84.0"
    }
  }

  backend "s3" {
    key          = "s3.tfstate" # Can name your statefile whatever you want
    use_lockfile = true
    encrypt      = true
  }
}

provider "aws" {
  # Configuration options
}
Terraform Main File
module "s3-bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "4.5.0"
bucket = var.s3_bucket_name
force_destroy = true
}
Terraform Variable File
variable "s3_bucket_name" {
type = string
default = "my-tf-s3-bucket"
description = "The name of the s3 bucket"
}
Terraform TFVARS File
s3_bucket_name = "my-tf-s3-bucket-vinny"
Terraform Outputs File
output "s3_bucket_arn" {
value = module.s3-bucket.s3_bucket_arn
}
output "s3_bucket_id" {
value = module.s3-bucket.s3_bucket_id
}
output "s3_bucket_region" {
value = module.s3-bucket.s3_bucket_region
}
SSM Image
Terraform Configuration File
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.84.0"
    }
  }

  backend "s3" {
    key          = "ssm.tfstate"
    use_lockfile = true
    encrypt      = true
  }
}

provider "aws" {
  # Configuration options
}
Terraform Main File
module "ssm-parameter" {
source = "terraform-aws-modules/ssm-parameter/aws"
version = "1.1.2"
name = "my-s3-ssm-parameter"
value = jsonencode({
"Bucket" : "${var.s3_bucket_id}",
"Region" : "${var.s3_bucket_region}",
"ARN" : "${var.s3_bucket_arn}"
})
}
Terraform Variable File
variable "s3_bucket_id" {
type = string
description = "The Name of the S3 Bucket"
default = "my-s3-bucket"
}
variable "s3_bucket_region" {
type = string
description = "The region where the S3 Bucket resides"
default = "us-east-1"
}
variable "s3_bucket_arn" {
type = string
description = "The ARN of the S3 Bucket"
default = ""
}
Docker Compose File
services:
  redis:
    image: redis:latest
    container_name: redis
    networks:
      terraform-network:
        ipv4_address: 10.10.10.2
  s3_bucket:
    image: ${S3_IMAGE}
    container_name: s3_bucket
    networks:
      terraform-network:
        ipv4_address: 10.10.10.3
    entrypoint: ["/bin/bash", "-c"]
    volumes:
      - type: bind
        source: ${S3_TFVARS}
        target: /terraform/variables.tfvars
    depends_on:
      - redis
    command:
      - |
        export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
        export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
        export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
        terraform init -backend-config="bucket=${S3_BUCKET_BACKEND}" -backend-config="region=${AWS_DEFAULT_REGION}"
        terraform apply --auto-approve -var-file=variables.tfvars
        redis-cli -h 10.10.10.2 -p 6379 SET TF_VAR_s3_bucket_arn $(terraform output -raw s3_bucket_arn)
        redis-cli -h 10.10.10.2 -p 6379 SET TF_VAR_s3_bucket_id $(terraform output -raw s3_bucket_id)
        redis-cli -h 10.10.10.2 -p 6379 SET TF_VAR_s3_bucket_region $(terraform output -raw s3_bucket_region)
  ssm:
    image: ${SSM_IMAGE}
    container_name: ssm
    networks:
      terraform-network:
        ipv4_address: 10.10.10.4
    entrypoint: ["/bin/bash", "-c"]
    depends_on:
      s3_bucket:
        condition: service_completed_successfully
    command:
      - |
        export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
        export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
        export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
        export TF_VAR_s3_bucket_arn=$(redis-cli -h 10.10.10.2 -p 6379 GET TF_VAR_s3_bucket_arn)
        export TF_VAR_s3_bucket_id=$(redis-cli -h 10.10.10.2 -p 6379 GET TF_VAR_s3_bucket_id)
        export TF_VAR_s3_bucket_region=$(redis-cli -h 10.10.10.2 -p 6379 GET TF_VAR_s3_bucket_region)
        terraform init -backend-config="bucket=${S3_BUCKET_BACKEND}" -backend-config="region=${AWS_DEFAULT_REGION}"
        terraform apply --auto-approve
networks:
  terraform-network:
    ipam:
      config:
        - subnet: "10.10.10.0/24"
The ENV File is:
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=
S3_BUCKET_BACKEND=
S3_IMAGE=
S3_TFVARS=
SSM_IMAGE=
We see that the s3_bucket service has a depends_on on redis, and the ssm service has a depends_on on s3_bucket. The latter dependency carries a condition requiring the s3_bucket service to complete successfully, which guarantees the ssm service can pull the three Terraform environment variables from redis.
Docker Commands
To build the docker images of the terraform resources, use
docker build -t s3:<TAG> .
docker build -t ssm:<TAG> .
To bring up the docker compose file, run
docker compose up -d
This command uses the environment file .env.
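To watch the applies as they run, you can tail the logs of an individual service:

docker compose logs -f s3_bucket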
Conclusion
And that’s pretty much it! We covered creating a Terraform base image from Alpine, creating Terraform resource images, and using docker compose to orchestrate the creation of Terraform resources with the help of redis.
Resources
Dockerfile for Base Terraform
FROM alpine:latest
RUN apk --update add redis bash
RUN wget https://releases.hashicorp.com/terraform/1.10.5/terraform_1.10.5_linux_amd64.zip
RUN unzip terraform_1.10.5_linux_amd64.zip
RUN mv terraform /usr/local/bin
RUN rm terraform_1.10.5_linux_amd64.zip
WORKDIR /terraform
Run docker build -t terraform:1.10.5 . on the Dockerfile above.
Dockerfile for Terraform Resource
FROM terraform:1.10.5
COPY *.tf .
COPY .terraform .terraform
Run docker build -t <TF RESOURCE>:<TAG> .
Docker Compose File for Terraform Resource Orchestration
services:
  redis:
    image: redis:latest
    container_name: redis
    networks:
      terraform-network:
        ipv4_address: 10.10.10.2
networks:
  terraform-network:
    ipam:
      config:
        - subnet: "10.10.10.0/24"