In recent customer conversations, I’ve noticed that Terraform is being used not only more often but also by more teams within an organization. These teams could include:
- Network
- Compute
- Cloud
- Internet Security
- Firewall
- IP Address and DNS Management
- etc…
Obviously, in smaller organizations, all these responsibilities might sit within one or two teams, but in some of the larger enterprises I have worked with, each area was handled by a separate team.
As these teams start adopting Terraform, what you might find is that they want to interact with each other and share outputs or configuration. One way to achieve this with Terraform Cloud (or Terraform Enterprise, if you really need to self-host) is by using multiple Terraform workspaces.

A workspace is aligned to a VCS repo (GitHub, GitLab, etc…) – there is a 1:1 relationship between them – so the workspace represents the resources defined by the Terraform files in that repo.
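I linked mine through the Terraform Cloud UI, but the same workspace-to-repo link can also be managed as code with the tfe provider. Here is a minimal sketch – the repo identifier and the OAuth token variable are placeholders for illustration, not values from my setup:

variable "oauth_token_id" {
  description = "OAuth token ID of the VCS connection in Terraform Cloud (placeholder)"
  type        = string
}

# Declare the workspace and link it to its VCS repo
resource "tfe_workspace" "cloud_team" {
  name         = "terraform-aws-sentinel"
  organization = "nvibert-organization"

  vcs_repo {
    identifier     = "nvibert/terraform-aws-sentinel" # placeholder repo identifier
    oauth_token_id = var.oauth_token_id
  }
}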
Let’s walk through a simple but concrete example of how that would work.
Imagine you’ve got a cloud team responsible for deploying an app in AWS. Their Terraform file might be something simple like this:
# Configure the AWS Provider
provider "aws" {
  region = "eu-west-2"
}

resource "aws_vpc" "my_vpc" {
  cidr_block = "172.16.0.0/16"
}

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.my_vpc.id
}

resource "aws_subnet" "my_subnet" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "172.16.10.0/24"
  availability_zone = "eu-west-2a"
}

resource "aws_network_interface" "foo" {
  subnet_id   = aws_subnet.my_subnet.id
  private_ips = ["172.16.10.100"]
}

resource "aws_instance" "foo" {
  ami           = "ami-089539692cca55c6c" # eu-west-2
  instance_type = "c4.4xlarge"

  network_interface {
    network_interface_id = aws_network_interface.foo.id
    device_index         = 0
  }
}

output "instance_id" {
  value = aws_instance.foo.id
}

output "instance_ami_arn" {
  value = aws_instance.foo.arn
}
Essentially: a VPC, a subnet, a network interface, an Internet Gateway and an EC2 instance using these artefacts. This file lives in my GitHub repo, which is the one linked to my Terraform Cloud workspace.

Once you commit a change to your GitHub repo, or queue a plan manually, Terraform Cloud will build the infrastructure below.

You can see the full run details (Plan / Cost Estimation / Policy Check / Apply) in the UI.

Brief interlude
What you might also see above is that I failed an advisory policy check. I used Sentinel (see my previous posts on Sentinel) to check whether the instance type meets my cost profile (it didn’t – I shouldn’t be using c4.4xlarge but a much cheaper t2 instance type instead).
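The policy itself isn’t shown here, but to give you an idea of what such a check involves, a cost-profile policy could look roughly like the Sentinel sketch below – the list of allowed instance types is purely illustrative:

import "tfplan/v2" as tfplan

# Illustrative cost profile: only cheap t2 instance types are allowed
allowed_types = ["t2.micro", "t2.small", "t2.medium"]

# All EC2 instances being created or updated in this plan
ec2_instances = filter tfplan.resource_changes as _, rc {
  rc.type is "aws_instance" and
  rc.mode is "managed" and
  (rc.change.actions contains "create" or rc.change.actions contains "update")
}

# Every instance must use one of the allowed (cheap) instance types
instance_type_allowed = rule {
  all ec2_instances as _, instance {
    instance.change.after.instance_type in allowed_types
  }
}

main = rule {
  instance_type_allowed
}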

Once Terraform has completed its run, you will see in the UI that a successful run triggered another workspace.
Imagine that your compliance/Internet security team might not want to expose these resources to the Internet – in particular, they might not want this EC2 instance to have an Elastic (public) IP address attached to it. The Internet Security team wants to make the decision on whether or not to allow it.
I have actually configured another workspace (terraform-aws-eip) that leverages the first one (terraform-aws-sentinel) as a source workspace.
A successful apply on the source workspace will trigger a run of the destination workspace.
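I set the trigger up in the destination workspace’s settings, but run triggers can also be declared as code with the tfe provider. A rough sketch, assuming the two workspace names used in this post:

# Look up the two workspaces used in this post
data "tfe_workspace" "source" {
  name         = "terraform-aws-sentinel"
  organization = "nvibert-organization"
}

data "tfe_workspace" "destination" {
  name         = "terraform-aws-eip"
  organization = "nvibert-organization"
}

# A successful apply in the source workspace queues a run in the destination workspace
resource "tfe_run_trigger" "eip_after_app" {
  workspace_id  = data.tfe_workspace.destination.id
  sourceable_id = data.tfe_workspace.source.id
}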

This second (destination) workspace will create an Elastic public IP – pending approval – and will refer back to the first workspace.
The first block below reads the source workspace’s state; we use it to reference the ID of the EC2 instance created by the other workspace.
data "terraform_remote_state" "vpc" {
  backend = "remote"

  config = {
    organization = "nvibert-organization"
    workspaces = {
      name = "terraform-aws-sentinel"
    }
  }
}

provider "aws" {
  region = "eu-west-2"
}

resource "aws_eip" "eip" {
  instance = data.terraform_remote_state.vpc.outputs.instance_id
  vpc      = true
}

output "eip" {
  value = aws_eip.eip.public_ip
}
The “Internet Security” team will decide whether or not to approve the Public IP address creation.

What you might also have noticed is that the two workspaces use different Terraform versions, which is also something that can easily happen – a customer I talked to recently still used 0.11 to manage their VMware resources while another team was using a more recent version to deploy their AWS resources. Terraform Cloud gives you the ability to specify which version of Terraform you want to run, per workspace.
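If you manage your workspaces with the tfe provider, that version pin is just another argument on the workspace resource – a quick sketch with a hypothetical workspace name and an illustrative version:

# Pin the Terraform version used for runs in this workspace
resource "tfe_workspace" "vmware_team" {
  name              = "terraform-vmware-legacy" # hypothetical workspace name
  organization      = "nvibert-organization"
  terraform_version = "0.11.14"                 # illustrative version pin
}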
Overall, it’s interesting to work through the challenges customers run into as they move from using Terraform locally to using Terraform at scale.
Video
Here is the whole thing recorded (no audio this time). In the first video, I make a change to my code, which triggers a Terraform Run in my first workspace.
In the second video, we look at the trigger of the second workspace and the second Terraform run.
Thanks for reading.