Terraform & Azure — GitLab CI

Using Terraform and GitLab CI to create a simple infrastructure-as-code (IaC) pipeline.

C.J. Shields
6 min read · Aug 31, 2020

In this lab I’ll be using GitLab to create a Terraform pipeline. GitLab is a web-based DevOps lifecycle tool that provides a Git repository manager with wiki, issue-tracking, and continuous integration/continuous deployment (CI/CD) pipeline features under an open-source license.

Azure Subscription Prep

Log into your Azure portal, search for App Registrations, and select New Registration. Name your new app registration and proceed.

Create New Registration

Once completed, you’ll be redirected to the settings. Navigate to Certificates & Secrets in the left panel and select New Client Secret. Save this secret somewhere; we’ll need it later.
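If you prefer the command line, the same app registration and client secret can be created in one step with the Azure CLI. This is a sketch; the service principal name and subscription ID are placeholders you'd replace with your own:

```shell
# Create a service principal (app registration + client secret in one step).
# "terraform-pipeline" is a placeholder name; substitute your subscription ID.
az ad sp create-for-rbac --name "terraform-pipeline" \
  --role Contributor \
  --scopes "/subscriptions/<your-subscription-id>"
# The JSON output includes appId (client ID), password (client secret),
# and tenant (tenant ID) -- save these for the pipeline variables later.
```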

Create Azure Storage

In your Azure portal, navigate to Storage Accounts and create a basic storage account.

After your storage account has been created, navigate to Containers and create a blob container in it as listed below. Take note of your access keys; you’ll need them later. Terraform will not create the container for you; the backend configuration must reference an existing container.
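The same storage setup can be scripted with the Azure CLI. A minimal sketch, assuming placeholder names (your storage account name must be globally unique):

```shell
# Placeholder names throughout; pick your own.
az group create --name yourresourcegroup --location westeurope
az storage account create --name yourstorageaccount \
  --resource-group yourresourcegroup --sku Standard_LRS
az storage container create --name terraform-state \
  --account-name yourstorageaccount
# Retrieve the access key needed for the ARM_ACCESS_KEY variable later
az storage account keys list --account-name yourstorageaccount \
  --resource-group yourresourcegroup --query "[0].value" -o tsv
```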

Create a project in GitLab

I signed up for a free trial of GitLab to complete this next portion and created a new repository with the following files:

  • main.tf: The resources we wish to create.
  • variables.tf: Variables to use in our configuration language.
  • outputs.tf: Information you wish to output after the run.
  • providers.tf: Define which providers (e.g. Azure, AWS) you need.
  • .gitlab-ci.yml: Configuration that dictates your pipeline’s workings.

The repository will contain our Terraform files and the definition of our pipeline. You can configure your layout however you want, but for now we’ll use this structure.

Creating the VMs

For these next steps I used Visual Studio Code as my editor.

Variables

I created a few variables to make the process a bit easier, along with a public/private key pair to reference in my variables.tf file:

variable "VM_NAME" {
  default = "TestVM"
}

variable "VM_ADMIN" {
  default = "azure-admin"
}

variable "LOCATION" {
  default = "West Europe"
}

variable "DEFAULT_SSHKEY" {
  default = "yourpublicsshkey"
}
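The key pair referenced by DEFAULT_SSHKEY can be generated with ssh-keygen; the file path and comment here are just examples:

```shell
# Generate an RSA key pair for the VM admin user; path and comment are examples.
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -C "azure-admin" -f ~/.ssh/terraform_azure -N "" -q
# The public half is what goes into the DEFAULT_SSHKEY variable:
cat ~/.ssh/terraform_azure.pub
```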

Providers.tf file

Next, I created my providers.tf file, using Azure as my provider:

# Azure Provider
provider "azurerm" {
  version = "=2.0.0"
  features {}
}

# State Backend
terraform {
  backend "azurerm" {
    resource_group_name  = "yourresourcegroup"
    storage_account_name = "yourstorageaccount"
    container_name       = "terraform-state"
    key                  = "test.terraform.tfstate"
  }
}

Main.tf file

A Virtual Machine on Azure has several prerequisites that need to exist:

  • Resource Group
  • Virtual Network
  • Subnet inside a Virtual Network
  • Network Interface
  • Public IP (optional)

Since the VM name is used several times, the VM_NAME variable fit perfectly and made things easier for this setup.

# Create a Resource Group
resource "azurerm_resource_group" "main" {
  name     = "${var.VM_NAME}-ResourceGroup"
  location = var.LOCATION
}

# Create a Virtual Network
resource "azurerm_virtual_network" "main" {
  name                = "${var.VM_NAME}-network"
  address_space       = ["10.0.1.0/24"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
}

# Create a Subnet
resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefix       = "10.0.1.0/28"
}

# Create a Virtual Machine
resource "azurerm_virtual_machine" "main" {
  name                  = var.VM_NAME
  location              = azurerm_resource_group.main.location
  resource_group_name   = azurerm_resource_group.main.name
  network_interface_ids = [azurerm_network_interface.main.id]
  vm_size               = "Standard_B1ms"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "${var.VM_NAME}-OS"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "StandardSSD_LRS"
    disk_size_gb      = 32
  }

  os_profile {
    computer_name  = var.VM_NAME
    admin_username = var.VM_ADMIN
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      key_data = var.DEFAULT_SSHKEY
      path     = "/home/azure-admin/.ssh/authorized_keys"
    }
  }

  tags = {
    environment = "test"
    deployment  = "terraform"
  }
}

# Create a Network Interface
resource "azurerm_network_interface" "main" {
  name                = "${var.VM_NAME}-nic01"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                          = "ipconfiguration01"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.main.id
  }
}

# Create a Public IP
resource "azurerm_public_ip" "main" {
  name                = "${var.VM_NAME}-publicip01"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  allocation_method   = "Static"
}
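The file list earlier includes outputs.tf, which isn’t shown. A minimal sketch that surfaces the VM’s public IP after a run, using the resource names from main.tf above, might look like this:

```hcl
# outputs.tf -- print the VM's public IP at the end of terraform apply
output "public_ip_address" {
  value = azurerm_public_ip.main.ip_address
}
```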

Credentials

With everything in place, our GitLab repository still needs to communicate with our Azure subscription. Navigate to Settings > CI/CD and expand Variables. I retrieved the following items from my Azure subscription and entered them as variables:

  • ARM_ACCESS_KEY: The Access Key we got from the Storage Account.
  • ARM_CLIENT_ID: The Client ID we got from the App Registration.
  • ARM_CLIENT_SECRET: The Secret we’ve created in the App Registration.
  • ARM_SUBSCRIPTION_ID: Your Subscription’s ID.
  • ARM_TENANT_ID: The Tenant ID we got from the App Registration.
  • TF_VAR_DEFAULT_SSHKEY: Your public SSH key.
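The TF_VAR_ prefix is how the last variable reaches Terraform: Terraform maps any environment variable named TF_VAR_&lt;name&gt; to the input variable &lt;name&gt;, so the CI/CD variable TF_VAR_DEFAULT_SSHKEY populates var.DEFAULT_SSHKEY without hard-coding the key in Git. The same mechanism works for a local run (the key value here is a placeholder):

```shell
# Terraform reads TF_VAR_<name> as input variable <name>, so this populates
# var.DEFAULT_SSHKEY; any subsequent `terraform plan`/`apply` picks it up.
export TF_VAR_DEFAULT_SSHKEY="ssh-rsa AAAA...yourkey azure-admin"
```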

Creating the configuration

To start rolling out resources automatically, I needed to create a configuration that defined what that looked like. It must be named exactly .gitlab-ci.yml and placed in the root of your project, or GitLab will not pick it up as a valid configuration. Here is the documentation I used to help assist me with CI/CD in GitLab.

image:
  name: hashicorp/terraform:0.12.29
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

before_script:
  - rm -rf .terraform
  - terraform --version
  - terraform init

stages:
  - validate
  - plan
  - apply

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - ./planfile

apply:
  stage: apply
  script:
    - terraform apply -input=false "planfile"
  only:
    - master
  dependencies:
    - plan

This does a few things:

  1. image: pulls the Terraform Docker image that all jobs run in.
  2. before_script: runs a few preparatory commands and outputs the Terraform version to the logs.
  3. validate: runs terraform validate, which does a quick check on the syntax of your Terraform HCL. If it contains any errors, the pipeline stops and shows you the error.
  4. plan: runs terraform plan, which compares your configuration against the current state and environment. artifacts: makes the plan file available for download/debugging.
  5. apply: runs terraform apply, which prompts Terraform to reconcile the difference between the state and the current infrastructure. This can create, change, replace, or destroy resources. This step is only performed when the master branch changes.

Deployment

Once my configuration was completed, I ran my first job. After validate finished, plan and then apply executed my Terraform plan.

Once finished, I could check my Azure portal for my test VM. I saw all of my resources created from my Terraform files. Once this project was complete, I destroyed all of my resources to reduce cost. Thanks for following!
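Teardown is a single command once you’re done with the lab, run from the same directory (or a pipeline job) with the same backend and credentials configured:

```shell
# Destroy everything tracked in the state file; review the plan before confirming.
terraform destroy
# Or skip the interactive confirmation (use with care):
terraform destroy -auto-approve
```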

Written by C.J. Shields

Systems/Network Administrator | DevOps Enthusiast
