Scale Testing with the Terraform count, for_each and dynamic arguments

I recently came across a requirement to run some scale tests on my VMware environment. I had to validate whether a limit was a hard limit (the system literally prevents you from going above it) or a soft limit (you can go above it, but it’s not recommended and you’re no longer supported).

Given that the limit was 10,000, it would have taken me a while to do it manually in the user console… Thankfully, I could just use some automation to run the tests. I picked Terraform but, arguably, I could have done the same with Ansible, Python or PowerShell (just pick whatever tool you’re most comfortable with).

The Terraform count Meta-Argument

The first option for creating a large number of resources with Terraform is the count meta-argument:

variable "counter" { default = 5 }

resource "nsxt_policy_group" "groupscale" {
  count        = var.counter
  display_name = "group-scale.${count.index}"
  description  = "Terraform provisioned Group"
  domain       = "cgw"
  criteria {
    ipaddress_expression {
      ip_addresses = ["192.168.30.${count.index}/32"]
    }
  }
}

This is as easy as it gets: the code above will create 5 nsxt_policy_group resources, and each group’s name and IP address will be derived from count.index as Terraform loops through the creation of each entity:

nvibert-a01:terraform-scale-test nicolasvibert$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # nsxt_policy_group.groupscale[0] will be created
  + resource "nsxt_policy_group" "groupscale" {
      + description  = "Terraform provisioned Group"
      + display_name = "group-scale.0"
      + domain       = "cgw"
      + id           = (known after apply)
      + nsx_id       = (known after apply)
      + path         = (known after apply)
      + revision     = (known after apply)

      + criteria {

          + ipaddress_expression {
              + ip_addresses = [
                  + "192.168.30.0/32",
                ]
            }
        }
    }

  # nsxt_policy_group.groupscale[1] will be created
  + resource "nsxt_policy_group" "groupscale" {
      + description  = "Terraform provisioned Group"
      + display_name = "group-scale.1"
      + domain       = "cgw"
      + id           = (known after apply)
      + nsx_id       = (known after apply)
      + path         = (known after apply)
      + revision     = (known after apply)

      + criteria {

          + ipaddress_expression {
              + ip_addresses = [
                  + "192.168.30.1/32",
                ]
            }
        }
    }

  # nsxt_policy_group.groupscale[2] will be created
  + resource "nsxt_policy_group" "groupscale" {
      + description  = "Terraform provisioned Group"
      + display_name = "group-scale.2"
      + domain       = "cgw"
      + id           = (known after apply)
      + nsx_id       = (known after apply)
      + path         = (known after apply)
      + revision     = (known after apply)

      + criteria {

          + ipaddress_expression {
              + ip_addresses = [
                  + "192.168.30.2/32",
                ]
            }
        }
    }

  # nsxt_policy_group.groupscale[3] will be created
  + resource "nsxt_policy_group" "groupscale" {
      + description  = "Terraform provisioned Group"
      + display_name = "group-scale.3"
      + domain       = "cgw"
      + id           = (known after apply)
      + nsx_id       = (known after apply)
      + path         = (known after apply)
      + revision     = (known after apply)

      + criteria {

          + ipaddress_expression {
              + ip_addresses = [
                  + "192.168.30.3/32",
                ]
            }
        }
    }

  # nsxt_policy_group.groupscale[4] will be created
  + resource "nsxt_policy_group" "groupscale" {
      + description  = "Terraform provisioned Group"
      + display_name = "group-scale.4"
      + domain       = "cgw"
      + id           = (known after apply)
      + nsx_id       = (known after apply)
      + path         = (known after apply)
      + revision     = (known after apply)

      + criteria {

          + ipaddress_expression {
              + ip_addresses = [
                  + "192.168.30.4/32",
                ]
            }
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

nvibert-a01:terraform-scale-test nicolasvibert$ terraform apply -auto-approve
nsxt_policy_group.groupscale[3]: Creating...
nsxt_policy_group.groupscale[4]: Creating...
nsxt_policy_group.groupscale[2]: Creating...
nsxt_policy_group.groupscale[1]: Creating...
nsxt_policy_group.groupscale[0]: Creating...
nsxt_policy_group.groupscale[2]: Creation complete after 1s [id=9a03e72d-2704-4119-aad7-d214c412fffe]
nsxt_policy_group.groupscale[4]: Creation complete after 2s [id=028c193a-a19f-4aeb-9384-556f0d31626f]
nsxt_policy_group.groupscale[3]: Creation complete after 2s [id=78fa5b74-cb5f-4582-a01d-49e4ebb35993]
nsxt_policy_group.groupscale[1]: Creation complete after 2s [id=05639648-34bd-46f6-b2d9-6232d34bddb1]
nsxt_policy_group.groupscale[0]: Creation complete after 2s [id=d2e35922-93fc-418c-a328-28e7afcc6e42]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

That works a treat, and obviously if I want to scale up to 1,000 groups, I can just change the counter value to 1000. I get this from my “terraform plan”…

Plan: 1000 to add, 0 to change, 0 to destroy.

…when I set my configuration file to this:

variable "counters" { default = 1000 }

resource "nsxt_policy_group" "ManyGroup" {
  count        = var.counters
  display_name = "Group_.${count.index}"
  description  = "Terraform provisioned Group"
  domain       = "cgw"
  criteria {
    condition {
      key         = "Tag"
      member_type = "VirtualMachine"
      operator    = "EQUALS"
      value       = "Tag_.${count.index}"
    }
  }
}
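As an aside, count.index starts at 0, so the names run from Group_.0 to Group_.999. If you prefer fixed-width names that sort cleanly in the console, Terraform’s format function can zero-pad the index. A minimal variant of the block above (only the display_name and value lines change; the padded naming scheme is my own choice, not part of the original config):

```hcl
resource "nsxt_policy_group" "ManyGroup" {
  count        = var.counters
  # format zero-pads the index: Group_0000, Group_0001, ..., Group_0999
  display_name = format("Group_%04d", count.index)
  description  = "Terraform provisioned Group"
  domain       = "cgw"
  criteria {
    condition {
      key         = "Tag"
      member_type = "VirtualMachine"
      operator    = "EQUALS"
      value       = format("Tag_%04d", count.index)
    }
  }
}
```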

But what if you have nested blocks? By nested blocks, I mean blocks such as network_interface, disk and clone in a standard vSphere Terraform configuration.

What if you want to check, for example, how many NICs you can add to your VMs?

resource "vsphere_virtual_machine" "vm_terraform_from_cl_2" {
  name             = "vm_terraform_from_cl_2"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  folder           = "Workloads"

  num_cpus = 2
  memory   = 1024
  guest_id = "other3xLinux64Guest"

  network_interface {
    network_id = data.vsphere_network.network13.id
  }
  disk {
    label            = "disk0"
    size             = 20
    thin_provisioned = true
  }
  clone {
    template_uuid = data.vsphere_content_library_item.library_item_photon.id
    customize {
      linux_options {
        host_name = "Photon"
        domain    = "vmc.local"
      }
      network_interface {
        ipv4_address = cidrhost(var.subnet1, 201)
        ipv4_netmask = 24
      }
      ipv4_gateway = cidrhost(var.subnet1, 1)
    }
  }
}

You cannot apply “count” to nested blocks. Instead, we’re going to use dynamic blocks, a feature introduced in Terraform 0.12 that enables you to create multiple nested blocks dynamically.
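Applied to the NIC question above, the same VM can be given several network_interface blocks from a single definition (the dynamic/for_each syntax is explained in the next section). This is a sketch based on the VM config above; nic_count is a hypothetical variable I’ve introduced, and every NIC lands on the same network:

```hcl
variable "nic_count" { default = 4 }

resource "vsphere_virtual_machine" "vm_terraform_from_cl_2" {
  name             = "vm_terraform_from_cl_2"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  folder           = "Workloads"

  num_cpus = 2
  memory   = 1024
  guest_id = "other3xLinux64Guest"

  # One network_interface block per iteration, all attached to the same
  # network; vary network_id per index if each NIC needs its own segment.
  dynamic "network_interface" {
    for_each = range(var.nic_count)
    content {
      network_id = data.vsphere_network.network13.id
    }
  }

  disk {
    label            = "disk0"
    size             = 20
    thin_provisioned = true
  }

  # (clone block unchanged from the original config, omitted for brevity)
}
```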

Dynamic and for_each

The example below is for the NSX-T provider. To create distributed firewall rules, you need to create a policy (also referred to as a ‘section’) and create rules within the policy.

The config below creates 2 security rules (“Blue2Red” and “Red2Blue”) within the “Colors” policy.

resource "nsxt_policy_security_policy" "Colors" {
  display_name = "Colors"
  description = "Terraform provisioned Security Policy"
  category = "Application"
  domain = "cgw"
  locked = false
  stateful = true
  tcp_strict = false

  rule {
    display_name = "Blue2Red"
    source_groups = [
      nsxt_policy_group.Blue_VMs.path]
    destination_groups = [
      nsxt_policy_group.Red_VMs.path]
    action = "DROP"
    services = ["/infra/services/ICMP-ALL"]
    logged = true
  }
  rule {
    display_name = "Red2Blue"
    source_groups = [
      nsxt_policy_group.Red_VMs.path]
    destination_groups = [
      nsxt_policy_group.Blue_VMs.path]
    action = "DROP"
    services = ["/infra/services/ICMP-ALL"]
    logged = true
  }
}

You can see that each rule is built around the same content, with the format being:

  rule {
    display_name = "name"
    source_groups = [group_path]
    destination_groups = [group_path]
    action = "action"
    services = [service_path]
    logged = true
  }

An elegant way of creating many rules is to use a dynamic block with the for_each argument:

resource "nsxt_policy_security_policy" "nvibert-scale" {
  display_name = "Scale-Security-Policy"
  description  = "Terraform provisioned Security Policy"
  category     = "Application"
  domain       = "cgw"
  locked       = false
  stateful     = true
  tcp_strict   = false

  dynamic "rule" {
    for_each = range(1, 20)

    content {
      display_name       = "rule_name_${rule.value}"
      source_groups      = []
      destination_groups = [nsxt_policy_group.Red_VMsNico.path]
      action             = "DROP"
      services           = ["/infra/services/ICMP-ALL"]
      logged             = true
      sequence_number    = rule.value
    }
  }
}

As you can see above, the rule block is now dynamic, and every rule I create is based on what’s defined in – well, the content block.

We’re using the range function to specify how many rules we are going to create within the policy. Be aware that the end value is exclusive: range(1,20) starts at 1 and finishes at 19, so it creates 19 rules, not 20.
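If the intent is 20 rules numbered 1 through 20, extend the exclusive end value by one; only the for_each line changes from the block above:

```hcl
  dynamic "rule" {
    # range(1, 21) yields 1, 2, ..., 20; the second argument is never included
    for_each = range(1, 21)
    content {
      display_name    = "rule_name_${rule.value}"
      sequence_number = rule.value
      # ... remaining arguments as in the block above ...
    }
  }
```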

The display_name and the sequence_number are automatically updated as we iterate through the loop. Note how I refer to “rule.value”: when you use for_each inside a dynamic block, you refer to the block’s label (here, it’s called rule) followed by .value to get the current element. Here is the resulting policy as recorded in the Terraform state:

# nsxt_policy_security_policy.nvibert-scale:
resource "nsxt_policy_security_policy" "nvibert-scale" {
    category        = "Application"
    description     = "Terraform provisioned Security Policy"
    display_name    = "Scale-Security-Policy"
    domain          = "cgw"
    id              = "d081f7be-99bf-4c27-8004-9d656ae9a9d4"
    locked          = false
    nsx_id          = "d081f7be-99bf-4c27-8004-9d656ae9a9d4"
    path            = "/infra/domains/cgw/security-policies/d081f7be-99bf-4c27-8004-9d656ae9a9d4"
    revision        = 0
    scope           = []
    sequence_number = 0
    stateful        = true
    tcp_strict      = false

    rule {
        action                = "DROP"
        destination_groups    = [
            "/infra/domains/cgw/groups/ed921a6b-8a3b-434a-b55c-aafaeb8b782b",
        ]
        destinations_excluded = false
        direction             = "IN_OUT"
        disabled              = false
        display_name          = "rule_name_1"
        ip_version            = "IPV4_IPV6"
        logged                = true
        nsx_id                = "fc0dd4d2-5239-4860-9842-9a5a50be2b06"
        profiles              = []
        revision              = 0
        rule_id               = 0
        scope                 = []
        sequence_number       = 0
        services              = [
            "/infra/services/ICMP-ALL",
        ]
        source_groups         = []
        sources_excluded      = false
    }
    

///////////////// cut for brevity ////////////////////
    rule {
        action                = "DROP"
        destination_groups    = [
            "/infra/domains/cgw/groups/ed921a6b-8a3b-434a-b55c-aafaeb8b782b",
        ]
        destinations_excluded = false
        direction             = "IN_OUT"
        disabled              = false
        display_name          = "rule_name_19"
        ip_version            = "IPV4_IPV6"
        logged                = true
        nsx_id                = "18defb8b-f3d5-4f3f-aae9-5abf32e306ef"
        profiles              = []
        revision              = 0
        rule_id               = 0
        scope                 = []
        sequence_number       = 18
        services              = [
            "/infra/services/ICMP-ALL",
        ]
        source_groups         = []
        sources_excluded      = false
    }
}

Dynamic blocks and for_each can be used for more than just scaling. You might have a specific list of values you want to apply; in that case, you can point for_each at a variable. In this instance, my list is manually set to [100, 200, 300, 400, 500], but you can imagine generating it by other means.

variable "groups" {
  type        = list(number)
  description = "list of rule sequence numbers"
  default     = [100, 200, 300, 400, 500]
}


resource "nsxt_policy_security_policy" "nvibert-scale" {
  display_name = "Scale-Security-Policy"
  description  = "Terraform provisioned Security Policy"
  category     = "Application"
  domain       = "cgw"

  dynamic "rule" {
    for_each = var.groups
    content {
      display_name       = "rule_name_${rule.value}"
      source_groups      = []
      destination_groups = [nsxt_policy_group.Red_VMsNico.path]
      action             = "DROP"
      services           = ["/infra/services/ICMP-ALL"]
      sequence_number    = rule.value
    }
  }
}

This configuration would create the following firewall rules:

  # nsxt_policy_security_policy.nvibert-scale will be created
  + resource "nsxt_policy_security_policy" "nvibert-scale" {
      + category        = "Application"
      + description     = "Terraform provisioned Security Policy"
      + display_name    = "Scale-Security-Policy"
      + domain          = "cgw"
      + id              = (known after apply)
      + locked          = false
      + nsx_id          = (known after apply)
      + path            = (known after apply)
      + revision        = (known after apply)
      + sequence_number = 0
      + stateful        = true
      + tcp_strict      = (known after apply)

      + rule {
          + action                = "DROP"
          + destination_groups    = (known after apply)
          + destinations_excluded = false
          + direction             = "IN_OUT"
          + disabled              = false
          + display_name          = "rule_name_100"
          + ip_version            = "IPV4_IPV6"
          + logged                = false
          + nsx_id                = (known after apply)
          + revision              = (known after apply)
          + rule_id               = (known after apply)
          + sequence_number       = 100
          + services              = [
              + "/infra/services/ICMP-ALL",
            ]
          + sources_excluded      = false
        }
      + rule {
          + action                = "DROP"
          + destination_groups    = (known after apply)
          + destinations_excluded = false
          + direction             = "IN_OUT"
          + disabled              = false
          + display_name          = "rule_name_200"
          + ip_version            = "IPV4_IPV6"
          + logged                = false
          + nsx_id                = (known after apply)
          + revision              = (known after apply)
          + rule_id               = (known after apply)
          + sequence_number       = 200
          + services              = [
              + "/infra/services/ICMP-ALL",
            ]
          + sources_excluded      = false
        }
      + rule {
          + action                = "DROP"
          + destination_groups    = (known after apply)
          + destinations_excluded = false
          + direction             = "IN_OUT"
          + disabled              = false
          + display_name          = "rule_name_300"
          + ip_version            = "IPV4_IPV6"
          + logged                = false
          + nsx_id                = (known after apply)
          + revision              = (known after apply)
          + rule_id               = (known after apply)
          + sequence_number       = 300
          + services              = [
              + "/infra/services/ICMP-ALL",
            ]
          + sources_excluded      = false
        }
      + rule {
          + action                = "DROP"
          + destination_groups    = (known after apply)
          + destinations_excluded = false
          + direction             = "IN_OUT"
          + disabled              = false
          + display_name          = "rule_name_400"
          + ip_version            = "IPV4_IPV6"
          + logged                = false
          + nsx_id                = (known after apply)
          + revision              = (known after apply)
          + rule_id               = (known after apply)
          + sequence_number       = 400
          + services              = [
              + "/infra/services/ICMP-ALL",
            ]
          + sources_excluded      = false
        }
      + rule {
          + action                = "DROP"
          + destination_groups    = (known after apply)
          + destinations_excluded = false
          + direction             = "IN_OUT"
          + disabled              = false
          + display_name          = "rule_name_500"
          + ip_version            = "IPV4_IPV6"
          + logged                = false
          + nsx_id                = (known after apply)
          + revision              = (known after apply)
          + rule_id               = (known after apply)
          + sequence_number       = 500
          + services              = [
              + "/infra/services/ICMP-ALL",
            ]
          + sources_excluded      = false
        }
    }
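for_each also accepts maps, which is handy when each rule needs both a name and a number: inside the dynamic block, rule.key and rule.value expose the current map entry. An illustrative sketch (the tier names and sequence numbers are hypothetical, not from my test environment):

```hcl
variable "rules" {
  type        = map(number)
  description = "rule name => sequence number"
  default = {
    web = 100
    app = 200
    db  = 300
  }
}

resource "nsxt_policy_security_policy" "nvibert-scale" {
  display_name = "Scale-Security-Policy"
  description  = "Terraform provisioned Security Policy"
  category     = "Application"
  domain       = "cgw"

  dynamic "rule" {
    for_each = var.rules
    content {
      # rule.key is the map key ("web"), rule.value its number (100)
      display_name       = "rule_${rule.key}"
      source_groups      = []
      destination_groups = [nsxt_policy_group.Red_VMsNico.path]
      action             = "DROP"
      services           = ["/infra/services/ICMP-ALL"]
      sequence_number    = rule.value
    }
  }
}
```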

While these might seem like simple features, I didn’t find that many explanations of them. To recap: they are not just for creating a lot of resources and testing scale, they also let you create multiple resources based on a list of variables.

Thanks for reading.

PS: Thanks to Gilles for his help with this post!

