Stop Suffering—Automate Terraform with GitLab CI/CD Before It Breaks You!
Ditch Manual Terraform Deployments—Build a Fully Automated GitLab CI/CD Pipeline Today!
🔥 The Pain of Manual Terraform Deployments
Deploying infrastructure manually with Terraform works—until it doesn’t.
Maybe you’ve been here:
You push a Terraform change to production, thinking "this should be fine," only to realize you forgot to run `terraform plan` first.
You copy-paste configurations between environments and introduce a typo that breaks everything.
You make a quick fix in staging, but forget to apply the same change to production.
And let’s not even talk about collaborating across teams—your state files are a mess, and no one knows who applied what, when, or why.
Sound familiar? If so, you’re not alone.
🔹 Manual Terraform deployments are slow, error-prone, and painful.
🔹 There’s a better way.
🚀 The Solution? GitLab CI/CD + Terraform
What if every Terraform change was automatically tested, reviewed, and applied across multiple environments—without you ever having to touch the CLI?
With GitLab CI/CD, you can:
✅ Automate Terraform workflows (validate → plan → apply)
✅ Enforce approvals before deploying to production
✅ Use GitLab variables to manage environments securely
✅ Run Terraform on a self-hosted runner for full control
No more late-night debugging. No more “Oops, I applied to the wrong environment.” Just clean, predictable infrastructure deployments.
🔹 What You’ll Learn in This Guide
In this post, I’ll walk you through building a Terraform CI/CD pipeline in GitLab—step by step. We’ll cover:
🛠️ How to structure your GitLab pipeline for Terraform
🔄 How to handle multiple environments (dev, staging, prod)
🔐 How to run Terraform securely (hiding sensitive credentials, requiring approvals for production)
💡 A full example `.gitlab-ci.yml` pipeline that you can use today
By the end of this guide, you’ll have a fully automated Terraform pipeline running in GitLab CI/CD—and you’ll never want to deploy manually again.
Let’s dive in. 🚀
🚀 Why Automate Terraform with GitLab CI/CD?
Manual Terraform deployments aren’t just frustrating—they’re risky. Every time you type `terraform apply` by hand, you introduce the possibility of:
❌ Typos that break infrastructure
❌ Inconsistent environments (dev ≠ staging ≠ prod)
❌ Forgotten state files leading to drift
❌ Accidental production changes (we've all been there 😅)
🔹 Automation Fixes This. Here’s How:
✅ 🚀 Faster, More Reliable Deployments
CI/CD runs Terraform automatically—no more waiting on engineers to apply changes manually.
Pipelines deploy consistently across all environments, reducing "it worked on my machine" issues.
✅ 🔄 Fewer Human Errors, More Predictability
GitLab CI/CD validates your Terraform code before applying changes.
Manual approvals for production ensure you never apply mistakes to prod by accident.
✅ 🔐 Security & Auditability
No more exposing credentials—secrets are managed securely in GitLab variables.
Every Terraform change is logged, tracked, and reviewed in merge requests.
🔹 Bottom line? GitLab CI/CD turns Terraform from a fragile, manual process into a fast, reliable, and secure workflow.
Now, let’s break down how the pipeline actually works. 🔧
🔍 Breaking Down the Terraform GitLab CI/CD Pipeline
Terraform automation is all about breaking deployments into logical stages and ensuring everything runs in the right order, under the right conditions.
🚀 1. Terraform Pipeline Stages
A good Terraform pipeline follows a structured flow to validate, preview, apply, and (optionally) destroy infrastructure. Here’s how each stage works:
🔹 Stage 1: Validate (Catch Errors Early)
📌 Purpose: Ensures Terraform code is formatted correctly and syntactically valid before running a plan or apply.
✅ Runs:
terraform fmt -check
terraform validate
✅ Why It Matters:
Prevents bad code from reaching production.
Saves time by catching formatting and syntax issues upfront.
Ensures Terraform follows best practices before running a plan.
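Expressed as a GitLab CI/CD job, the validate stage can be as small as this (a minimal sketch; the full job definition used in this post's pipeline appears later):

```yaml
validate:
  stage: validate
  script:
    - terraform fmt -check # fails if any file is not canonically formatted
    - terraform validate   # checks syntax and internal consistency
```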
🔹 Stage 2: Plan (Preview Changes Before Applying)
📌 Purpose: Generates an execution plan showing what Terraform will modify without actually changing anything.
✅ Runs:
terraform plan -out=tfplan
✅ Why It Matters:
Prevents accidental destruction of resources.
Gives engineers a chance to review changes before they happen.
Stores the execution plan (`tfplan`) as an artifact for the next stage.
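As a minimal sketch of how the plan file gets handed to the next stage via `artifacts:` (the complete job for this post's pipeline comes later):

```yaml
plan:
  stage: plan
  script:
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan # picked up by the apply job in the next stage
```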
🔹 Stage 3: Apply (Deploy Infrastructure)
📌 Purpose: Actually applies Terraform changes from the previously generated plan.
✅ Runs:
terraform apply -auto-approve tfplan
✅ Why It Matters:
Ensures only the planned changes are applied, reducing surprises.
Uses GitLab’s manual approval feature to prevent accidental prod deployments.
Keeps infrastructure consistent across environments.
🔹 (Optional) Stage 4: Destroy (Tearing Down Resources)
📌 Purpose: Destroys infrastructure when needed (e.g., for temporary environments or cleanup).
✅ Runs:
terraform destroy -auto-approve
✅ Pros:
Useful for dev/test environments that don’t need to persist.
Prevents orphaned resources that cost money.
❌ Cons:
Accidental execution can be catastrophic if applied in prod.
Should only be available under controlled conditions (manual trigger).
💡 Best Practice:
Enable `destroy` only for non-prod environments.
Require manual approval before running.
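A hedged sketch of that guardrail (the `WORKSPACE` variable here is illustrative, not part of the pipeline built later in this post): a destroy job that is never offered for production and always waits for a human:

```yaml
destroy:
  stage: destroy
  script:
    - terraform destroy -auto-approve
  rules:
    # Never offer destroy for the production workspace
    - if: '$WORKSPACE == "prod"'
      when: never
    # Everywhere else, require a human to click the button
    - when: manual
```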
🛠️ 2. Pipeline Variables & before_script
GitLab CI/CD allows you to store Terraform configurations using variables.
🔹 Variables (Why They Matter)
Define commonly used values once so they don’t have to be repeated in every job.
variables:
TF_VERSION: "1.10.5"
TF_ROOT: "./terraform"
✅ Why Use Default Variables?
Ensures consistency across all jobs.
Makes it easier to upgrade Terraform versions in one place.
🔹 `before_script` (Why It’s Critical)
📌 Purpose: Installs required dependencies before Terraform commands run.
✅ Example:
before_script:
- apt-get update && apt-get install -y unzip python3-pip
- curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
- unzip awscliv2.zip && ./aws/install
- curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
- chmod +x kubectl && mv kubectl /usr/local/bin/
- curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
- pip3 install jinja2-cli
✅ Why It Matters:
Ensures the AWS CLI, kubectl, Helm, and other supporting tools are available before the job starts.
Prevents failed pipelines due to missing dependencies.
🛠️ 3. Workflow & Stage Rules (Controlling When Jobs Run)
By default, GitLab CI/CD runs pipelines on every commit—but that’s not always what we want.
🔹 Workflow Rules: Controlling When the Pipeline Runs
These rules define when GitLab should trigger a pipeline.
✅ Example:
workflow:
rules:
- if: '$CI_COMMIT_TAG' # Never run on commit tags
when: never
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"' # Run on Merge Requests
when: always
- if: '$CI_COMMIT_BRANCH' # Run on any branch
when: always
✅ Why Use Workflow Rules?
Prevents unnecessary pipeline runs on tags.
Ensures the pipeline always runs on MRs and branches.
🔹 Stage Rules: Controlling When Specific Jobs Run
Stage rules determine when individual jobs execute.
✅ Example:
apply:
stage: apply
script:
- terraform apply -auto-approve tfplan
when: manual # Requires manual approval before applying changes
only:
- main # Only allow apply in the main branch
needs:
- plan # Ensure apply only runs if plan succeeds
✅ Why Use Stage Rules?
Prevents production changes without approval.
Ensures `apply` only runs if `plan` succeeds (avoids broken deployments).
Allows `apply` only on the `main` branch to enforce best practices.
🎯 Key Takeaways
✅ Terraform pipelines should follow a structured flow → `validate` → `plan` → `apply` (optional `destroy`).
✅ Use variables & `before_script` to keep pipelines clean and maintainable.
✅ Workflow & stage rules prevent unnecessary runs and protect production.
Right now, we have a solid Terraform pipeline that validates, plans, and applies changes. But there’s a big missing piece—how do we handle multiple environments?
Dev, Staging, and Production all need their own deployments, but we don’t want to copy-paste Terraform configurations.
We need a structured way to deploy to different environments without manual intervention.
Terraform Workspaces, GitLab’s Parallel Matrix, and Stage Dependencies can solve this problem.
🔥 It’s about to get even more powerful!
🔄 Handling Multiple Environments in GitLab CI/CD
When managing infrastructure, we often need to deploy the same Terraform code to multiple environments—like dev, staging, and production.
A bad approach would be to copy and paste the same Terraform code across different repositories or branches. 😵💫
A better approach is to use:
✅ Terraform Workspaces → To separate environments within the same code base
✅ GitLab’s Parallel Matrix → To deploy multiple environments in parallel
Let’s break this down. ⬇️
🔹 Terraform Workspaces: One Codebase, Multiple Environments
📌 What are Terraform Workspaces?
Terraform Workspaces allow you to manage multiple instances of your infrastructure without duplicating code. Each workspace has:
A separate Terraform state, while sharing the same `.tf` files
Isolated infrastructure configurations (e.g., different instance sizes, regions, etc.)
No need to maintain multiple repositories or directories for each environment
💡 Example:
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod
When running Terraform commands, you select the appropriate workspace:
terraform workspace select dev
terraform apply -auto-approve
Terraform automatically stores separate state files per workspace, ensuring that changes in `dev` don’t affect `staging` or `prod`.
✅ Why Use Workspaces?
Each workspace has its own isolated state. No need for duplicated code or multiple Terraform directories
Keeps infrastructure consistent across environments
Reduces duplication and human error
🔹 Automating Workspaces with GitLab’s Parallel Matrix
📌 How do we deploy multiple environments in GitLab CI/CD?
Instead of writing separate jobs for each environment, we can dynamically deploy them all using GitLab’s Parallel Matrix feature.
🔹 Defining a Parallel Matrix in .gitlab-ci.yml
GitLab allows us to define a matrix of Terraform jobs, each targeting a different workspace.
💡 Example:
stages:
- validate
- plan
- apply
deploy:
stage: apply
parallel:
matrix:
- WORKSPACE: ["dev", "staging", "prod"]
script:
- terraform workspace select $WORKSPACE || terraform workspace new $WORKSPACE
- terraform apply -auto-approve tfplan
needs:
- plan
when: manual # Requires manual approval for production
✅ How This Works:
1️⃣ GitLab runs a separate job for each environment (`dev`, `staging`, `prod`)
2️⃣ The job selects the correct Terraform Workspace
3️⃣ Each job applies Terraform in parallel
🔹 Enforcing Environment Dependencies in GitLab CI/CD
To ensure we don’t deploy to production unless staging succeeds, we use `needs:` in our `.gitlab-ci.yml` file.
💡 Example: Ensuring `prod` Only Deploys After `staging`
stages:
- validate
- plan
- apply
(Note: entries in a `parallel:matrix:` cannot carry their own `needs:` or `when:`, so we promote environments with three explicit jobs chained by `needs:`.)
deploy:dev:
  stage: apply
  variables:
    WORKSPACE: "dev"
  script:
    - terraform workspace select $WORKSPACE || terraform workspace new $WORKSPACE
    - terraform apply -auto-approve tfplan
  needs:
    - plan # Ensures plan runs before apply
deploy:staging:
  stage: apply
  variables:
    WORKSPACE: "staging"
  script:
    - terraform workspace select $WORKSPACE || terraform workspace new $WORKSPACE
    - terraform apply -auto-approve tfplan
  needs:
    - "deploy:dev" # Only runs if dev succeeds
deploy:prod:
  stage: apply
  variables:
    WORKSPACE: "prod"
  script:
    - terraform workspace select $WORKSPACE || terraform workspace new $WORKSPACE
    - terraform apply -auto-approve tfplan
  needs:
    - "deploy:staging" # Only runs if staging succeeds
  when: manual # Requires approval before running
✅ Why This Matters:
Prevents accidental production deployments
Ensures a structured promotion path (Dev → Staging → Prod)
🎯 Key Takeaways
✅ Terraform Workspaces let us manage multiple environments without duplicating code
✅ GitLab’s Parallel Matrix runs Terraform for multiple environments at the same time
✅ GitLab `needs:` ensures prod only deploys after staging succeeds
🚀 Now, our Terraform deployments are fully automated across all environments!
The next challenge is security.
🔥 This is where things get serious!
🔐 Running Terraform Securely in GitLab CI/CD
When automating Terraform, security should be a top priority.
Secrets should never be hardcoded in `.gitlab-ci.yml` or Terraform code.
Production deployments should never be automatic.
Let’s see how this works. ⬇️
🔹 Hiding Sensitive Variables in GitLab CI/CD
Instead of hardcoding credentials in `.gitlab-ci.yml`, we use GitLab’s masked CI/CD variables:
1️⃣ Go to your GitLab project → Settings > CI/CD > Variables
2️⃣ Create variables for sensitive values, such as:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
TF_VAR_database_password
3️⃣ Mark them as:
✅ Masked (prevents exposure in logs)
✅ Protected (prevents access in unprotected branches)
💡 Using Secure Variables in Your Pipeline
Once stored in GitLab, these variables are automatically injected into every job as environment variables; re-mapping them under `variables:` is optional, but it documents exactly what the pipeline consumes:
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
TF_VAR_database_password: $TF_VAR_database_password
✅ Why This Matters:
Prevents secrets from leaking into logs.
Ensures only authorized branches can access sensitive data.
Keeps Terraform configurations clean and maintainable.
🚨 Never store secrets in `terraform.tfvars` or hardcoded in `.tf` files!
🔹 Approvals Before Production Deployments
📌 Why? Mistakes happen. Without safeguards, a misconfigured Terraform change could wipe out production.
💡 Solution: Require manual approvals before applying changes to production.
🔹 Configuring Manual Approvals in .gitlab-ci.yml
In our GitLab pipeline, we’ll enforce manual approval before running `terraform apply` in production.
apply:
stage: apply
script:
- terraform apply -auto-approve tfplan
when: manual # Requires manual approval before running <--- ******
only:
- main # Only allow apply in the main branch
environment:
name: production
url: https://prod.example.com
✅ How This Works:
Terraform runs automatically for dev/staging, but requires manual approval for production.
Only changes merged into `main` can be deployed to prod.
GitLab provides an approval button so an engineer can review and confirm before applying changes.
🔹 Additional Best Practices for Security
🔐 Use IAM Roles Instead of Static AWS Keys
Instead of storing `AWS_ACCESS_KEY_ID`, use IAM roles with GitLab’s runner for temporary credentials.
Prevents long-lived secrets from being compromised.
🔐 Limit Who Can Approve Production Deployments
Use GitLab protected environments to restrict approvals to specific users or groups.
Prevents unauthorized changes to prod.
🔐 Enable Logging & Audit Trails
Enable Terraform Cloud’s state locking or use S3 + DynamoDB for tracking changes.
Ensure all Terraform runs are logged to track who deployed what, when, and why.
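One way to act on the IAM-role recommendation is GitLab's `id_tokens` keyword, which issues a short-lived JWT the job can trade for temporary AWS credentials. This is a sketch with placeholder names: `AWS_ROLE_ARN` and the audience value depend on how your AWS IAM OIDC identity provider is configured, and this job is not part of the pipeline built below.

```yaml
aws_oidc_example:
  # GitLab issues a short-lived JWT into $AWS_OIDC_TOKEN for this job
  id_tokens:
    AWS_OIDC_TOKEN:
      aud: sts.amazonaws.com # must match the audience on your IAM OIDC provider
  script:
    # Exchange the JWT for temporary credentials; no static keys stored in GitLab
    - >
      aws sts assume-role-with-web-identity
      --role-arn "$AWS_ROLE_ARN"
      --role-session-name "gitlab-${CI_PIPELINE_ID}"
      --web-identity-token "$AWS_OIDC_TOKEN"
```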
🎯 Key Takeaways
✅ Use GitLab CI/CD variables to store secrets securely (never hardcode credentials).
✅ Require manual approvals before applying Terraform changes to production.
✅ Restrict production deployments using GitLab’s protected environments.
✅ Use IAM roles instead of long-lived AWS keys whenever possible.
Now that we’ve secured our pipeline, it’s time to put everything together.
Let’s get to it! 🚀
🔮 Inside the Code: How the Magic Happens
Now that we’ve covered Terraform automation, multiple environments, and security best practices, let's put it all together in a full `.gitlab-ci.yml` pipeline. This pipeline will:
Set our global variables
Run our before script to install the necessary tools and authenticate to AWS
Define our stages and workflow rules
Validate Terraform code before deployment
Generate a Terraform plan so we can review changes
Deploy our infrastructure automatically to `dev` and `staging`
Require manual approval before production deployments
Use GitLab’s parallel matrix to handle multiple environments
🟢 Let’s break it down step by step.
⚠️ Quick Note:
🛠️ GitLab offers a powerful and flexible pipeline configuration, with countless ways to design your workflows. The approach I’m sharing is just one example—designed to highlight key principles I find essential. Your own pipeline will depend on your organization’s business needs and technical requirements, so adapt accordingly! 📊
For a full list of configuration options, see the GitLab CI/CD syntax reference.
1️⃣ Global Variables 🌍
This code section is used to define essential variables for consistency and flexibility through the pipeline.
variables:
TF_ROOT: $CI_PROJECT_DIR
TF_PLAN: plan.tfplan
TF_VERSION: "1.10.5"
ACTION:
description: "The action to perform for the pipeline."
value: "apply"
options:
- "apply"
- "destroy"
`TF_ROOT` 🗂️ is set to `$CI_PROJECT_DIR`, the root directory of the GitLab project.
`TF_PLAN` 📜 is set to `plan.tfplan`, the filename for storing Terraform execution plans.
`TF_VERSION` 🚀 pins the Terraform release used throughout the pipeline, ensuring every job runs the same version (it could also be supplied as a project-level CI/CD variable).
`ACTION` ⚙️ is a configurable variable that determines whether the pipeline runs an apply (default) or destroy operation, keeping deployments flexible.
⚠️ Quick Note: `$VARIABLE` and `${VARIABLE}` are equivalent in GitLab CI/CD; the curly braces simply delimit the variable name, which helps when embedding it in a longer string.
2️⃣ Default Configuration ⚙️
This next bit of code sets the stage with `image`, `before_script`, and `cache`.
default:
# use base lightweight terraform image
image:
name: hashicorp/terraform:$TF_VERSION
entrypoint: [""] # Override the default Terraform entrypoint to allow shell commands
# perform these actions prior to running pipeline stages
before_script:
# Create a cache directory to store tool binaries
- mkdir -p /cache/bin
- export PATH="/cache/bin:$PATH"
# Restore cached binaries if available: aws, kubectl, helm, jq, jinja2
- if [ -f "/cache/bin/aws" ]; then echo "Using cached AWS CLI"; else curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install --bin-dir /cache/bin; fi
- if [ -f "/cache/bin/kubectl" ]; then echo "Using cached kubectl"; else curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl && mv kubectl /cache/bin/; fi
- if [ -f "/cache/bin/helm" ]; then echo "Using cached Helm"; else curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash && mv /usr/local/bin/helm /cache/bin/; fi
- if [ -f "/cache/bin/jq" ]; then echo "Using cached jq"; else apk add --no-cache jq && cp $(which jq) /cache/bin/; fi # hashicorp/terraform is Alpine-based, so use apk, not apt-get
- if [ -f "/cache/bin/jinja2" ]; then echo "Using cached Jinja2"; else apk add --no-cache python3 py3-pip && pip3 install jinja2-cli && cp $(which jinja2) /cache/bin/; fi
# Verify tools are installed
- aws --version && kubectl version --client && helm version && jq --version
# Authenticate to AWS using masked and protected environment variables from GitLab CI/CD
- export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
- aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
- aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
- aws configure set region "$AWS_DEFAULT_REGION"
# Validate authentication
- aws sts get-caller-identity
# Initialize terraform
- terraform init -input=false
cache:
key: tools-cache
paths:
- /cache/bin # Store installed binaries for reuse
The `default:` section in GitLab CI/CD defines settings that apply to all jobs in the pipeline, reducing redundancy and ensuring consistency. Placing `image:`, `before_script:`, and `cache:` under `default:` means:
`image:` 🖥️ specifies the base Docker image for all jobs unless overridden, keeping dependencies aligned.
I chose the `hashicorp/terraform:$TF_VERSION` image because it's lightweight, comes with Terraform preinstalled, and is easily configurable, allowing me to add any additional tools as needed.
This makes it ideal for POCs and quick iterations, but for enterprise-level implementations, it's recommended to use a custom image tailored to organizational security and requirements. 🚀
`before_script:` 🏁 runs setup commands before each job, ensuring a consistent environment.
I optimize the pipeline by caching tool binaries to avoid redundant downloads, restoring or installing missing binaries, authenticating to AWS 🔑, and initializing Terraform 🏗️—ensuring each job starts in a ready-to-go state. 🚀
`cache:` 📦 speeds up builds by reusing files across jobs, improving efficiency.
This approach keeps the pipeline clean, DRY (Don’t Repeat Yourself), and easier to maintain. 🚀
3️⃣ Pipeline Stages 🏗️
Now I lay out the sequence of execution.
stages:
- validate
- plan
- apply
- destroy
4️⃣ Workflow Rules 🛂
I add workflow rules to control when and how the pipeline runs.
workflow:
  rules:
    # Never run on commit tags
    - if: '$CI_COMMIT_TAG'
      when: never
    # Skip pipeline if [skip ci] is in commit message
    # (must come before the branch rule, since rules match in order)
    - if: '$CI_COMMIT_MESSAGE =~ /.*\[skip ci\].*/i'
      when: never
    # Always run on MRs
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: always
    # Run on any branch
    - if: '$CI_COMMIT_BRANCH'
      when: always
My workflow rules ensure the pipeline runs only when it's needed:
🚫 Skip commit tags – A commit tag is a reference to a specific point in the repo’s history (often used for versioning). Since tags usually represent finalized versions, skipping them avoids unnecessary pipeline runs.
🔄 Run on Merge Requests (MRs) – The pipeline automatically triggers when an MR is submitted, ensuring changes are validated before merging.
🌿 Run on all branches – This allows new feature branches to be tested early, ensuring a valid Terraform plan before submitting an MR.
⏭️ Allow skipping for non-code changes – Small adjustments to non-code files like changelogs and READMEs shouldn't trigger the pipeline, keeping runs efficient.
This setup balances efficiency and flexibility, making sure critical changes are tested without wasting resources. 🚀
5️⃣ Deployment Matrix 🕵️♂️
This next section of code dynamically configures GitLab’s deployment matrix, enabling multiple environments and workspaces to run in parallel. 🚀
.deploy_matrix:
parallel:
matrix:
- ENV:
- ""
REGION:
- ""
I define the default deployment matrix as a hidden job to provide a flexible, customizable structure for different projects and application teams. 🛠️
While some teams may only need dev and prod, others might require dev, test, staging, and prod. By using a hidden job, each team can override the default matrix and tailor their environment setup, ensuring the pipeline adapts to their specific deployment needs without modifying the core pipeline logic. 🚀
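For example, a team deploying three environments across two regions could override the hidden job with concrete values (the environment and region names are illustrative):

```yaml
.deploy_matrix:
  parallel:
    matrix:
      - ENV: ["dev", "staging", "prod"]
        REGION: ["us-east-1", "eu-west-1"]
        # 3 envs x 2 regions = 6 parallel jobs, one per workspace
```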
6️⃣ Reusable Configurations ♻️
This next bit of code also makes use of GitLab’s hidden jobs to create reusable templates for the `.plan`, `.apply`, and `.destroy` stages.
# Reusable plan stage configurations
.plan:
needs:
- validate
stage: plan
variables:
WORKSPACE_NAME: ${ENV}-${REGION}
script:
- cd ${TF_ROOT}
- terraform workspace select $WORKSPACE_NAME || terraform workspace new $WORKSPACE_NAME
- terraform plan -out=${WORKSPACE_NAME}-${TF_PLAN} -var-file=tfvars/${WORKSPACE_NAME}.tfvars -input=false
interruptible: false
artifacts:
name: plan-${WORKSPACE_NAME}
paths:
- ${WORKSPACE_NAME}-${TF_PLAN}
# Reusable apply stage configurations
.apply:
needs:
- plan
stage: apply
variables:
WORKSPACE_NAME: ${ENV}-${REGION}
script:
- cd ${TF_ROOT}
- terraform apply -input=false ${WORKSPACE_NAME}-${TF_PLAN}
interruptible: false
environment:
name: ${WORKSPACE_NAME}
when: manual
# Reusable destroy stage configurations
.destroy:
needs:
- validate
stage: destroy
variables:
WORKSPACE_NAME: ${ENV}-${REGION}
script:
- cd ${TF_ROOT}
- terraform destroy -var-file=tfvars/${WORKSPACE_NAME}.tfvars --auto-approve
interruptible: false
environment:
name: ${WORKSPACE_NAME}-destroy
when: manual
In GitLab CI/CD, hidden jobs (prefixed with a dot, like `.plan`, `.apply`, and `.destroy`) are not executed directly but serve as reusable templates for other jobs. I like to use them to centralize shared configurations, ensuring consistency across multiple stages while keeping the pipeline clean, modular, and easy to maintain. 🚀
Each hidden job is structured to ensure a controlled and reliable pipeline flow:
Dependencies Managed ⏳ – The `needs:` keyword ensures jobs only run after required stages succeed.
Stage Assignment 🎬 – Each job is assigned to the appropriate stage using the `stage:` keyword.
Local Variables 🔧 – Jobs define their own local variables for flexibility.
Execution Steps 📜 – Each job specifies its own commands in the `script:` section.
Manual Execution for Apply & Destroy ⏯️ – `when: manual` requires human approval before applying or destroying infrastructure.
Protected Job Approvals 🔐 – Repo-level approval blocks add an extra security layer.
Statefile Protection 🛑 – `interruptible: false` prevents job cancellations from corrupting the Terraform statefile.
This structure keeps the pipeline modular, secure, and resilient while ensuring proper execution flow. 🚀
7️⃣ Stage Definitions 🎬
Finally, we define the key execution phases (validate, plan, apply, and destroy), including any job-level rules.
validate:
stage: validate
script:
- cd ${TF_ROOT}
- terraform fmt -check
- terraform validate
plan:
extends:
- .plan
- .deploy_matrix
rules:
# Do not run if action is destroy
- if: $ACTION == "destroy"
when: never
apply:
extends:
- .apply
- .deploy_matrix
rules:
# Do not run if action is destroy
- if: $ACTION != "destroy"
when: never
# Do not run on MRs
- if: $CI_MERGE_REQUEST_IID
when: never
# Only run on Main branch and ACTION == apply
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $ACTION == "apply"
destroy:
extends:
- .destroy
- .deploy_matrix
rules:
# Do not run on MRs
- if: $CI_MERGE_REQUEST_IID
when: never
# Only run on Main branch and ACTION == destroy
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $ACTION == "destroy"
# Only run if environment == ${WORKSPACE_NAME}-destroy
- if: $CI_ENVIRONMENT_NAME == "${WORKSPACE_NAME}-destroy"
Validate ✅ – Runs Terraform code checks using `terraform fmt` and `terraform validate` to ensure formatting and syntax compliance.
Plan 📜 – Extends the hidden `.plan` job, inheriting its configurations while dynamically applying the deployment matrix for all passed-in workspaces. It runs for all pipeline executions except when `ACTION` is set to `destroy`.
Apply 🚀 – Extends the hidden `.apply` job, inheriting its configurations and applying the deployment matrix for all passed-in workspaces. The rules ensure it:
❌ Does not run if `ACTION` is `destroy`
❌ Does not run on MRs (since changes aren’t approved yet)
✅ Runs only from the `main` branch and when `ACTION` is `apply`
Destroy 💥 – Extends the hidden `.destroy` job, inheriting its configurations and the deployment matrix for passed-in workspaces. The rules ensure it:
❌ Does not run on MRs (since destruction must be explicitly approved)
✅ Runs only from the `main` branch
✅ Only executes when `ACTION` is `destroy`
✅ Only runs when `CI_ENVIRONMENT_NAME` matches `${WORKSPACE_NAME}-destroy`
This structured approach keeps the pipeline flexible, secure, and automated, ensuring Terraform actions only run when intended. 🚀
Bringing It All Together 🏗️
We’ve broken down each section of the pipeline, covered the logic behind its design, and explored the key configurations that make it efficient, flexible, and secure. Now, it’s time to see everything in action.
Below is the full GitLab CI/CD pipeline file—a blueprint that ties together the stages, workflow rules, hidden jobs, and deployment matrix into a powerful, automated Terraform workflow. 🚀
Whether you’re running a simple POC or scaling up for enterprise deployments, this pipeline serves as a strong foundation that you can adapt to your organization’s needs.
💡 Pro Tip: Use this as a starting point, but don't be afraid to tweak, refine, and optimize based on your specific requirements!
Now, let’s dive into the complete file:
variables:
TF_ROOT: $CI_PROJECT_DIR
TF_PLAN: plan.tfplan
TF_VERSION: "1.10.5"
ACTION:
description: "The action to perform for the pipeline."
value: "apply"
options:
- "apply"
- "destroy"
default:
# use base lightweight terraform image
image:
name: hashicorp/terraform:$TF_VERSION
entrypoint: [""] # Override the default Terraform entrypoint to allow shell commands
# perform before script actions prior to running pipeline stages
before_script:
# Create a cache directory to store tool binaries
- mkdir -p /cache/bin
- export PATH="/cache/bin:$PATH"
# Restore cached binaries if available: aws, kubectl, helm, jq, jinja2
- if [ -f "/cache/bin/aws" ]; then echo "Using cached AWS CLI"; else curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install --bin-dir /cache/bin; fi
- if [ -f "/cache/bin/kubectl" ]; then echo "Using cached kubectl"; else curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl && mv kubectl /cache/bin/; fi
- if [ -f "/cache/bin/helm" ]; then echo "Using cached Helm"; else curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash && mv /usr/local/bin/helm /cache/bin/; fi
- if [ -f "/cache/bin/jq" ]; then echo "Using cached jq"; else apk add --no-cache jq && cp $(which jq) /cache/bin/; fi # hashicorp/terraform is Alpine-based, so use apk, not apt-get
- if [ -f "/cache/bin/jinja2" ]; then echo "Using cached Jinja2"; else apk add --no-cache python3 py3-pip && pip3 install jinja2-cli && cp $(which jinja2) /cache/bin/; fi
# Verify tools are installed
- aws --version && kubectl version --client && helm version && jq --version
# Authenticate to AWS using environment variables from GitLab CI/CD
- export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
- aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
- aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
- aws configure set region "$AWS_DEFAULT_REGION"
# Validate authentication
- aws sts get-caller-identity
# Initialize terraform
- terraform init -input=false
cache:
key: tools-cache
paths:
- /cache/bin # Store installed binaries for reuse
stages:
- validate
- plan
- apply
- destroy
workflow:
  rules:
    # Never run on commit tags
    - if: '$CI_COMMIT_TAG'
      when: never
    # Skip pipeline if [skip ci] is in commit message
    # (must come before the branch rule, since rules match in order)
    - if: '$CI_COMMIT_MESSAGE =~ /.*\[skip ci\].*/i'
      when: never
    # Always run on MRs
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: always
    # Run on any branch
    - if: '$CI_COMMIT_BRANCH'
      when: always
########################
### Reusable Configs ###
########################
# Set up default deployment matrix for use in parallel workflows
# This allows different deployments with custom matrices
.deploy_matrix:
parallel:
matrix:
- ENV:
- ""
REGION:
- ""
# Reusable plan job configurations
.plan:
needs:
- validate
stage: plan
variables:
WORKSPACE_NAME: ${ENV}-${REGION}
script:
- cd ${TF_ROOT}
- terraform workspace select $WORKSPACE_NAME || terraform workspace new $WORKSPACE_NAME
- terraform plan -out=${WORKSPACE_NAME}-${TF_PLAN} -var-file=tfvars/${WORKSPACE_NAME}.tfvars -input=false
interruptible: false
artifacts:
name: plan-${WORKSPACE_NAME}
paths:
- ${WORKSPACE_NAME}-${TF_PLAN}
# Reusable apply job configurations
.apply:
needs:
- plan
stage: apply
variables:
WORKSPACE_NAME: ${ENV}-${REGION}
script:
- cd ${TF_ROOT}
- terraform apply -input=false ${WORKSPACE_NAME}-${TF_PLAN}
interruptible: false
environment:
name: ${WORKSPACE_NAME}
when: manual
# Reusable destroy job configurations
.destroy:
needs:
- validate
stage: destroy
variables:
WORKSPACE_NAME: ${ENV}-${REGION}
script:
- cd ${TF_ROOT}
- terraform destroy -var-file=tfvars/${WORKSPACE_NAME}.tfvars --auto-approve
interruptible: false
environment:
name: ${WORKSPACE_NAME}-destroy
when: manual
#######################
### Job Definitions ###
#######################
validate:
stage: validate
script:
- cd ${TF_ROOT}
- terraform fmt -check
- terraform validate
plan:
extends:
- .plan
- .deploy_matrix
rules:
# Do not run if action is destroy
- if: $ACTION == "destroy"
when: never
apply:
extends:
- .apply
- .deploy_matrix
rules:
# Do not run if action is destroy
- if: $ACTION != "destroy"
when: never
# Do not run on MRs
- if: $CI_MERGE_REQUEST_IID
when: never
# Only run on Main branch and ACTION == apply
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $ACTION == "apply"
destroy:
extends:
- .destroy
- .deploy_matrix
rules:
# Do not run on MRs
- if: $CI_MERGE_REQUEST_IID
when: never
# Only run on Main branch and ACTION == destroy
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $ACTION == "destroy"
# Only run if environment == ${WORKSPACE_NAME}-destroy
- if: $CI_ENVIRONMENT_NAME == "${WORKSPACE_NAME}-destroy"
Wrapping It Up 🎬
This pipeline is powerful, flexible, and ready to streamline your Terraform deployments—but it won’t run without a properly configured GitLab Runner. 🏃♂️ In my next post, I’ll walk you through setting up a runner and the infrastructure needed to support it, so you can take this pipeline from theory to execution.
🎥 Prefer a walkthrough of my blog content? Watch my YouTube breakdown, where I walk through everything step by step—code, pipeline elements, and best practices. Check it out! 🚀
📂 You can also view the full code repository here!
What’s Next?
This is just the beginning of building an optimized GitLab CI/CD workflow. Coming up next, we’ll cover:
✅ GitLab Runner setup – Getting your pipeline ready to execute.
✅ Infrastructure for supporting the runner – Ensuring smooth execution at scale.
✅ Configuring the Terraform backend for statefile management – Keeping your infrastructure state secure and consistent.
✅ More DevOps best practices – Making your pipeline even smarter and more efficient.
Let’s Keep the Conversation Going! 🔥
Did this pipeline breakdown help? Got questions or tweaks you're thinking about? Drop a comment below or connect with me on LinkedIn—I’d love to hear how you’re applying these concepts!
If you’re enjoying this content, share it with your team, give it a like, and stay tuned for more deep dives into cloud automation and DevOps best practices. 🚀
Until next time—build boldly, automate relentlessly, and let DevOps do the heavy lifting! 🔥🤖