Create a Gitlab Pipeline to Push Container Images to ECR
Introduction Link to heading
In a DevOps pipeline, we take our source code through the various stages of build, test, and deploy. In the build stage, we run our unit tests, make commands, or whatever commands we need to compile the source code and generate executables. With the executables, we perform regression testing to check functionality. Finally, we take the artifacts and create container images for deployment into our development, pre-production, and production environments. The goal is to automate as much of this as possible and to perform our operations as code. This article serves as a guide for creating a container registry in AWS using the Elastic Container Registry (ECR), building container images and pushing them into ECR using podman, and writing the gitlab-ci.yml file that orchestrates the entire operation. The article will not cover how to build the executables; it simply uses them to create an application image to be pushed into ECR.
For this guide, I suggest you create a gitlab user in AWS and generate an access key pair (both the ACCESS_KEY and the SECRET_KEY). Also note the region you want to use for creating the ECR repos. We will be using Terraform, specifically the Terraform ECR module, to build out our ECR repo. Finally, we will use the terraform and podman images in our gitlab pipeline to perform the necessary operations. For the specific runners, I am using Kubernetes; I won’t go into detail on how to set up Kubernetes runners in this article.
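As a rough sketch of that user setup (the user name gitlab-ci and the attached managed policy are my assumptions; a real pipeline user that also creates repos via Terraform will need broader, ideally custom-scoped, permissions):

```shell
# Create a dedicated IAM user for the pipeline (name is an example)
aws iam create-user --user-name gitlab-ci

# Attach a policy that allows pushing/pulling images; scope this down
# (or write a custom policy) for production use
aws iam attach-user-policy --user-name gitlab-ci \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser

# Generate the access key pair to store as Gitlab CI/CD variables
aws iam create-access-key --user-name gitlab-ci
```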
Elastic Container Registry Link to heading
ECR, or Elastic Container Registry, is a container registry hosted on Amazon Web Services. ECR is similar to Docker Hub in that you can create public and private repositories. ECR is easy to set up and supports container image scanning. We can also apply lifecycle policies for automatic cleanup and removal of old images.
Creating ECR in Terraform Link to heading
We’re going to use Terraform to create our Elastic Container Registry repo. I will be using S3 as the terraform backend for this example. The terraform module I will be using is terraform-aws-modules, located here. As of this writing, the version of the ECR module is 2.3.1. Let’s create a basic ECR repo with the sample lifecycle policy stated in the module documentation. Please note that if you want to add a level of customization, I would highly recommend using a variables.tf and a terraform.tfvars file. For this ECR repo, all the values will be defined inline.
First, we need to create a terraform configuration file. I like to call these files terraform.tf. This file will contain the AWS provider as well as the S3 backend details.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.86.0"
    }
  }

  backend "s3" {
    key          = "ecr.tfstate"
    use_lockfile = true
    encrypt      = true
  }
}

provider "aws" {
  # Configuration options
}
If you notice, the configuration file specifies neither the region nor the S3 bucket. This is because I want these values to be dynamic, coming from the pipeline as variables. In the Gitlab console, we can set up CICD variables that will be passed into the .gitlab-ci.yml file. To dynamically configure the S3 bucket and the region, we can use the flags -backend-config="bucket=&lt;S3 BUCKET&gt;" and -backend-config="region=&lt;REGION&gt;" on the terraform init command.
For the ECR code, I’m going to put it in a file called ecr.tf. I copied the code from the module example for creating a Private Repository.
module "ecr" {
  source = "terraform-aws-modules/ecr/aws"

  repository_name = "my-private-ecr"

  repository_lifecycle_policy = jsonencode({
    rules = [
      {
        rulePriority = 1,
        description  = "Keep last 30 images",
        selection = {
          tagStatus     = "tagged",
          tagPrefixList = ["v"],
          countType     = "imageCountMoreThan",
          countNumber   = 30
        },
        action = {
          type = "expire"
        }
      }
    ]
  })

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}
I’m using a dummy name like my-private-ecr, but feel free to change it. The lifecycle policy on this ECR repo keeps only the last 30 tagged images. This cleans up your ECR repo and prevents it from growing too large. 30 images may seem like a lot, so feel free to change that as well.
Finally, let’s create a file called outputs.tf. For seamless integration within the pipeline, we need to output the repository URL that the module generates. Podman will use this to push images into that repo. The value will be passed as an environment file down to the podman job in the pipeline.
output "repository_url" {
  value = module.ecr.repository_url
}
To test this out, let’s initialize and apply the resource; when we’re done, we can destroy the ECR resource. Run these commands in the directory where the terraform files are located. I’m going to deploy into the us-east-1 region.
terraform init -backend-config="bucket=<MY S3 TF BUCKET>" -backend-config="region=us-east-1"
terraform apply # Verify the ECR repo has applied your lifecycle policy
terraform destroy
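Before running the destroy, you can sanity-check the apply by asking AWS for the repo directly (assuming the default name my-private-ecr from the module block above):

```shell
# Print the repository URI if the repo exists; errors if it does not
aws ecr describe-repositories --repository-names my-private-ecr \
  --query 'repositories[0].repositoryUri' --output text
```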
Podman Link to heading
Now that we have created our ECR repository, we need to be able to build images and push them to it from the gitlab pipeline. The best way to do this is via a Podman container image that has the AWS CLI installed. Inside the podman container, we need to start the podman service and export DOCKER_HOST.
Dockerfile Link to heading
For the base image, let’s use the one from quay.io.
FROM quay.io/podman/stable:latest
Now, as the root user, we need to install the AWS CLI. The AWS website states we can download the AWS CLI via the curl command.
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
RUN rm awscliv2.zip # Remove the zip file
The Dockerfile should look like the following:
FROM quay.io/podman/stable:latest
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
RUN rm awscliv2.zip # Remove the zip file
We can build this image with either docker or podman build.
docker build -t podman:aws .
podman build -t podman:aws -f Dockerfile
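Before pushing the image anywhere, a quick smoke test confirms the AWS CLI actually made it into the image:

```shell
# Should print something like "aws-cli/2.x.x ..." if the install worked
podman run --rm podman:aws aws --version
```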
Gitlab Pipeline Link to heading
We created our HCL code to build our ECR repo and built the podman image with the AWS CLI. Now we need to create the .gitlab-ci.yml file for the gitlab pipeline.
touch .gitlab-ci.yml
In the pipeline, we can orchestrate the pipeline in two stages: pre and build. The pre stage will initialize and create the ECR repo in AWS. The build stage will login to the ECR repo, build the image, and push the image to ECR.
Gitlab CI/CD Variables Link to heading
Gitlab provides CI/CD Variables that can be inserted into the pipeline. This helps us by keeping the pipeline dynamic with custom values for the AWS Access Key ID, AWS Secret Access Key, Region, ECR name, etc.

Select your project and scroll down to the Settings tab. Look for the CI/CD tab.

Drop down the Variables tab and click Add variable.

We can add our Gitlab CICD Variables. Start off by adding AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION, and TF_S3_BUCKET for the Terraform S3 Backend.
To reference these variables in the pipeline, we use the ${VARIABLE_NAME} syntax.
Gitlab Container Registry Link to heading
Since we created a custom podman image with the aws cli, we have to reference it in the pipeline. To do that, we need to push our custom podman image into Gitlab’s Container Registry.

Navigate to the Container Registry by going to Deploy -> Container Registry. You should see a list of docker commands. You can replace the word docker with podman.
NOTE: If you are using podman to log in to your gitlab registry, you might need to add the --tls-verify=false flag in case any TLS issues come up.
Once you have logged in to your gitlab registry, we need to build our podman image with the correct tag. Take the second command provided on the gitlab page and simply add /podman:aws after your gitlab project and repo. Now you should be able to push your image to your gitlab container registry. Again, if you are using podman to push your container image, you might need to add the --tls-verify=false flag to avoid any TLS issues.
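Putting those steps together (with &lt;PROJECT&gt;/&lt;REPO&gt; standing in for your own gitlab namespace and project), the sequence looks roughly like:

```shell
# Log in to the Gitlab registry (add --tls-verify=false if TLS issues arise)
podman login registry.gitlab.com

# Build and tag the image against your project's registry path
podman build -t registry.gitlab.com/<PROJECT>/<REPO>/podman:aws -f Dockerfile

# Push the tagged image
podman push --tls-verify=false registry.gitlab.com/<PROJECT>/<REPO>/podman:aws
```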
Gitlab-CI File Link to heading
With our podman container image in our gitlab container registry, we can begin to write our pipeline. As stated previously, we are going to have two stages: pre and build. Let’s write those stages at the top of our pipeline.
stages:
  - pre
  - build
Now we can create our job names. For this article, I’m going with setup_ECR and build_image.
setup_ECR:
  stage: pre
  image: hashicorp/terraform
  before_script:
  script:

build_image:
  stage: build
  image: registry.gitlab.com/<PROJECT>/<REPO>/podman:aws
  needs:
  before_script:
  script:
This is the main skeleton for our gitlab pipeline file. We filled out the stages and the images for our pipeline jobs to use. Notice that the build_image job has a needs keyword. We plan to take the URL of the ECR repo and have the podman job log in to that repo. To do that, we need an environment file that exports that particular variable.
Let’s add some details to the setup_ECR job. In the before_script, we need to export the AWS access key, secret key, and default region. We added those values to our Gitlab CI Variables earlier, so we can access them as ${GITLAB_VARIABLE_NAME}.
  before_script:
    - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
Now let’s add some details to the script section of the setup_ECR job. We want to perform two main operations: terraform init and terraform apply.
  script:
    - terraform init -backend-config="bucket=${TF_S3_BUCKET}" -backend-config="region=${AWS_DEFAULT_REGION}"
    - terraform apply --auto-approve
When we created our terraform.tf file, we didn’t specify the bucket or the region. When we perform a terraform init, we have to use the -backend-config= flag to set up the bucket and the region. Once we initialize terraform with the provider and the modules, we can apply it to AWS.
Once terraform has finished applying, we can grab the URL of the ECR repo using the terraform output command. Create an env file to store the value of the URL of the ECR repo.
  script:
    - terraform init -backend-config="bucket=${TF_S3_BUCKET}" -backend-config="region=${AWS_DEFAULT_REGION}"
    - terraform apply --auto-approve
    - touch build.env
    - export ECR_REPO=$(terraform output -raw repository_url)
    - echo "ECR_REPO=$ECR_REPO" >> build.env
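The build.env file is just KEY=value lines, so sourcing it in a later shell restores the variable. A minimal local illustration of that round trip (the URL is a made-up example in place of the real terraform output):

```shell
# Simulate what the setup_ECR job writes (the URL is a fake example)
ECR_REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-private-ecr"
echo "ECR_REPO=$ECR_REPO" > build.env

# Simulate the later job: clear the variable, then re-source the file
# (`. file` is the portable equivalent of `source file`)
unset ECR_REPO
. ./build.env
echo "$ECR_REPO"
```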
With the environment file created, we need to pass it to the next stage in the pipeline. We can pass the file as an artifact with an expiration time of 1 hour.
  script:
    - terraform init -backend-config="bucket=${TF_S3_BUCKET}" -backend-config="region=${AWS_DEFAULT_REGION}"
    - terraform apply --auto-approve
    - touch build.env
    - export ECR_REPO=$(terraform output -raw repository_url)
    - echo "ECR_REPO=$ECR_REPO" >> build.env
  artifacts:
    expire_in: 1 hour
    paths:
      - build.env
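As an aside, Gitlab also has first-class support for this pattern: declaring the file as a dotenv report makes its variables available in later jobs automatically, without an explicit source. A sketch of that variant (same job, different artifacts block):

```yaml
  artifacts:
    expire_in: 1 hour
    reports:
      dotenv: build.env
```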
Now let’s add some details to the build_image job. In order for the build_image job to get the artifacts from the previous setup_ECR job, we use the needs keyword and name that job. We can also add the AWS export statements in the before_script.
build_image:
  stage: build
  image: registry.gitlab.com/<PROJECT>/<REPO>/podman:aws
  needs:
    - job: setup_ECR
  before_script:
    - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
  script:
For us to use podman, the podman service needs to be started and DOCKER_HOST needs to be exported. The build.env file with the ECR repo URL also needs to be sourced. Finally, we use the aws ecr get-login-password command to log in to the registry.
  before_script:
    - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    - podman system service --time=0 unix:///tmp/podman.sock &
    - export DOCKER_HOST=unix:///tmp/podman.sock
    - source build.env
    - REPO_HEADER=$(echo $ECR_REPO | cut -d '/' -f1)
    - aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | podman login --tls-verify=false --username AWS --password-stdin $REPO_HEADER
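The cut on the repository URL just strips the repository name, leaving the registry hostname that podman login expects. For example (with a fake account ID):

```shell
# A fake repository URL in the shape terraform output returns
ECR_REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-private-ecr"

# Everything before the first slash is the registry host podman login needs
REPO_HEADER=$(echo $ECR_REPO | cut -d '/' -f1)
echo $REPO_HEADER
```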
One requirement for the script section is that whatever you want to package as a container image needs a Dockerfile, so that podman can build your code. The container image needs to be tagged ${ECR_REPO}:${CI_COMMIT_SHA} in order for it to be pushed to ECR.
  script:
    - podman build -t ${ECR_REPO}:${CI_COMMIT_SHA} -f <PATH TO DOCKERFILE>
    - podman push --tls-verify=false ${ECR_REPO}:${CI_COMMIT_SHA}
Putting it all together, we get the following .gitlab-ci.yml file:
stages:
  - pre
  - build

setup_ECR:
  stage: pre
  image: hashicorp/terraform
  before_script:
    - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
  script:
    - terraform init -backend-config="bucket=${TF_S3_BUCKET}" -backend-config="region=${AWS_DEFAULT_REGION}"
    - terraform apply --auto-approve
    - touch build.env
    - export ECR_REPO=$(terraform output -raw repository_url)
    - echo "ECR_REPO=$ECR_REPO" >> build.env
  artifacts:
    expire_in: 1 hour
    paths:
      - build.env

build_image:
  stage: build
  image: registry.gitlab.com/<PROJECT>/<REPO>/podman:aws
  needs:
    - job: setup_ECR
  before_script:
    - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    - podman system service --time=0 unix:///tmp/podman.sock &
    - export DOCKER_HOST=unix:///tmp/podman.sock
    - source build.env
    - REPO_HEADER=$(echo $ECR_REPO | cut -d '/' -f1)
    - aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | podman login --tls-verify=false --username AWS --password-stdin $REPO_HEADER
  script:
    - podman build -t ${ECR_REPO}:${CI_COMMIT_SHA} -f <PATH TO DOCKERFILE>
    - podman push --tls-verify=false ${ECR_REPO}:${CI_COMMIT_SHA}
Conclusion Link to heading
We covered the following:
- Creating an ECR Repo using Terraform
- Creating a custom Podman Image and pushing it to the Gitlab Container Registry
- Utilizing a gitlab pipeline to orchestrate creating an ECR Repo and pushing a custom image into the ECR Repo