First look into… Terraform Cloud

In an upcoming webinar with Grant Orchard from HashiCorp, we will be building and managing VMware infrastructure through Terraform Cloud. Preparing for it gave me an opportunity to test the platform and see how it works with the VMware portfolio in particular.

What is Terraform Cloud?

It’s in essence Terraform-as-a-Service (and, frankly, a way for HashiCorp to monetize some of the cool open-source tools they’ve built over the past few years). It’s free to try – Terraform Cloud offers multiple tiers and most of what I tested is available on the Free Tier. Some features, like Sentinel, are only available on the premium tiers; I will cover Sentinel in an upcoming post.

If you’ve read any of my previous blog posts on Terraform, you will know that Terraform:

  • Deploys and manages cloud resources while keeping a record of the deployed resources (the “state“, usually in a file called terraform.tfstate) and stores that state in a “back-end”. The Terraform back-end can be local on the client running Terraform or it can be in the Cloud (commonly, in an AWS S3 bucket – a minimal example follows this list).
  • Builds infrastructure based on code (written in HCL, a language that looks very similar to JSON), described in a configuration file.
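
To make the back-end idea concrete, here is a minimal sketch of what an S3 back-end definition looks like in a configuration (the bucket name, key and region below are purely illustrative):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"       # hypothetical bucket holding the state file
    key    = "vmc/terraform.tfstate"    # illustrative path of the state object within the bucket
    region = "eu-west-1"                # illustrative AWS region
  }
}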

While that works fine for a single user, Terraform can get messy with multiple users unless they all share the same state and configuration files (otherwise, you could end up with race conditions, conflicts or worse – it’s well explained here).

The benefits of using a platform like Terraform Cloud are that:

  • It provides an easy way to share Terraform state (to avoid the issues described earlier) and also secrets and other variables (more on that in a bit).
  • It links Terraform pretty seamlessly with a Version Control System like GitHub or GitLab. This means that, when the infrastructure needs updating, the code describing it is changed (through a “git commit” and push), which triggers a “terraform apply”. The resources are then updated according to the configuration and the existing state.
  • It means infrastructure (and its desired state) can be treated as code and be versioned accordingly.

Terraform Cloud also provides support for policy-as-code using Sentinel (watch for an upcoming blog post on this), which is one of the features that require a premium account.

Let’s have a quick look through Terraform Cloud.

Organizations and Workspaces

Within Terraform Cloud, you would start by joining or creating an organization.

Then you would manage resources and infrastructure within a workspace. A workspace is linked to a code repo and would represent an infrastructure/environment.

For example, I created a RUNVMC organization and within that org, I have my workspace called “terraform-vmc-sentinel”.

Workspace and organization
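
As a side note: the VCS-driven workflow described below doesn’t require any back-end configuration in the repo, but if you wanted to run Terraform against this workspace from your own machine (the CLI-driven workflow), the remote back-end block would point at the organization and workspace, roughly like this:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"    # Terraform Cloud
    organization = "RUNVMC"

    workspaces {
      name = "terraform-vmc-sentinel"
    }
  }
}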

Version Control System

When you create a workspace, you would typically link it up to a VCS (Version Control System), most likely a GitLab or a GitHub organization.

As you can see below, I connected it to my GitHub account and selected the repository that I want to link to.

Linking TF workspace to GitHub repo

The repo will need an infrastructure file (a main.tf file) that describes the resources Terraform will create and manage. In my case, I am going to use it for NSX-T (as described in a previous post) and VMware Cloud on AWS (see here to learn more).

To deploy a VMware Cloud on AWS SDDC, I will need the following in my main.tf:

provider "vmc" {
  refresh_token = var.vmc_token
  org_id        = var.org_id
}

# Empty data source defined in order to store the org display name and name in terraform state
data "vmc_org" "my_org" {
}

data "vmc_connected_accounts" "my_accounts" {
  account_number = var.aws_account_number
}

data "vmc_customer_subnets" "my_subnets" {
  connected_account_id = data.vmc_connected_accounts.my_accounts.id
  region               = var.sddc_region
}

resource "vmc_sddc" "sddc_1" {
  sddc_name           = "my_SDDC_1"
  vpc_cidr            = var.sddc_mgmt_subnet
  num_host            = 3
  provider_type       = "ZEROCLOUD"
  # ZEROCLOUD is an API simulator we use internally - we can deploy fake SDDCs using
  # the actual APIs instead of deploying on actual AWS hardware. Customers would use
  # "AWS" as the provider_type instead.
  region              = data.vmc_customer_subnets.my_subnets.region
  vxlan_subnet        = var.sddc_default
  delay_account_link  = true
  skip_creating_vxlan = true
  sso_domain          = "vmc.local"
  host_instance_type  = "I3_METAL"
  sddc_type           = ""
  # sddc_template_id  = ""
  deployment_type     = "SingleAZ"
  timeouts {
    create = "300m"
    update = "300m"
    delete = "180m"
  }
}
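
One thing worth mentioning: the VMC provider is not one of the built-in HashiCorp providers, so depending on the Terraform version running your code, you may also need to tell Terraform where to find it. On Terraform 0.13+, that would look roughly like this (the registry source is my assumption – check the provider documentation for the exact value):

terraform {
  required_providers {
    vmc = {
      source = "vmware/vmc"    # assumed registry address of the VMC provider
    }
  }
}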

I also defined the variables in a variables.tf file:

variable "vmc_token" {
  description = "API token used to authenticate when calling the VMware Cloud Services API."
}
variable "org_id" {
  description = "Organization Identifier."
}
variable "aws_account_number" {
  description = "The AWS account number."
}
variable "sddc_name" {
  description = "Name of SDDC."
  default     = "Terraform-SDDC"
}
variable "sddc_region" {
  description = "The AWS region."
  default     = "EU_NORTH_1"
}
variable "sddc_mgmt_subnet" {
  default = "10.2.0.0/16"
}

variable "vpc_cidr" {
  description = "AWS VPC IP range. Only prefix of 16 or 20 is currently supported."
  default     = "172.31.0.0/20"
}
variable "sddc_default" {
  description = "VXLAN IP subnet in CIDR for compute gateway."
  default     = "10.10.10.0/23"
}
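
If you want Terraform Cloud to surface some of this information at the end of a run, you can also add an outputs.tf. A small hypothetical example, using the vmc_org data source defined earlier (which, per the comment in main.tf, stores the org display name in the state):

output "org_display_name" {
  value = data.vmc_org.my_org.display_name    # shown in the Terraform Cloud run output
}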

Understandably, we are not going to put the values of the variables (especially our secrets) in our GitHub repo… but fortunately, Terraform Cloud has a good place for them:

TF Cloud Variables

Some of the values (like passwords and API tokens) can be marked as sensitive – once marked, even other users within the same workspace and organization cannot access them.

And that’s all that’s needed! Terraform Cloud now monitors the GitHub repo, and when changes are committed and pushed (or a pull request is merged), it will trigger a run and execute “terraform apply”.

Terraform Cloud in action

Let’s walk through this in a quick video.

Let me explain:

  • I have my git files on my Mac synchronized with my GitHub repo. Once I update the main.tf (with a simple SDDC name change), commit the change and push it upstream…
  • It triggers a “terraform apply”, executed from Terraform Cloud. In this case, Terraform first runs a “terraform plan”, realizes that I don’t have an SDDC deployed with that name yet and asks whether I want to deploy it or not (if I had selected the “Auto apply” option below, it would have deployed it automatically).
  • The SDDC is automatically deployed in VMware Cloud on AWS.

Another example below is with NSX-T. There, I update the Terraform NSX-T config directly from GitHub, fixing a minor typo in the description of a security group (from “Nicco” to “Nico”). As in the previous example, when I commit the change, Terraform Cloud automatically runs a “terraform apply”. Once I approve it, Terraform’s idempotency means only the typo is corrected and nothing else is touched. A rough sketch of the relevant resource follows.
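
For context, here is what such a group looks like with the NSX-T Policy provider (the group name and description below are illustrative, not my actual config):

resource "nsxt_policy_group" "web_servers" {
  display_name = "Web-Servers"                  # illustrative group name
  description  = "Nico's group of web servers"  # the description field where the typo was corrected
}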

There are a few other things worth noting:

Terraform Version

My experience so far with TF Cloud is that it uses the latest available Terraform version by default, and that might not work for everyone. Terraform 0.13.0 came out recently, but you might not want to use it yet, or perhaps your configurations are not compatible with this version. In “Settings”, you can select which version of Terraform is used to execute your code.
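
You can also express that constraint in the configuration itself, so a run fails loudly instead of silently picking up a newer version. A minimal example, assuming you want to stay on the 0.12.x line for now:

terraform {
  required_version = "~> 0.12.0"    # stay on 0.12.x until the configuration is validated against 0.13
}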

State History and Versioning

This might be one of the best features of the platform – viewing the changes between state versions. Here you can see what happened when I corrected the typo in one of the earlier videos.

It tells me the exact change that was applied to the infrastructure. As a network engineer by trade who executed hundreds of network changes, this type of information would have made my life much easier when I was implementing changes – especially from an auditing perspective, but also to roll back changes to a known-good state when I ran into issues.

But to be fair, even if Terraform had been available at the time, network infrastructure wasn’t ready to be codified (that only became possible when API-driven cloud infrastructure and software-defined networking became common). Anyway – that’s a story for another day!

What’s Next?

If you want to hear more, come and attend the following webinars:

Tuesday Sept 22nd, 9:00am PDT

Wednesday, Sept 23rd, 9:00am BST/11:00am CEST/ 1:00am PDT

Looking forward to seeing you there!
