Introducing HashiCorp Terraform Provider for NSX-T Policy Manager and VMware Cloud on AWS

Last update on 12th January 2021: Added a note about a new resource, “nsxt_policy_predefined_gateway_policy”, that removes the requirement to run “terraform import”.

Previous update on 2nd September 2020: The Terraform NSX-T Policy provider is now automatically downloaded when running “terraform init” (no need to compile it – read further below for more details).

This blog post will cover how to use the new Terraform provider with the VMware NSX-T Policy Manager.

Like everything on my personal blog, bear in mind that all my tests are run on a lab platform. Use any of the tools covered in this post with care, as I won’t be responsible for any issues you may run into. Thank you for your understanding.

In previous blog posts, you might recall I covered how you can use the Infrastructure-As-Code tool Terraform to provision VMware resources (vSphere and VMware Cloud on AWS).

In VMware Cloud on AWS, we used to have a limitation around networking and security – the NSX-T Terraform provider was not compatible with VMware Cloud on AWS.

The reason it wasn’t compatible is because VMware Cloud on AWS uses the NSX-T Policy APIs and not the ‘traditional’ NSX-T APIs (Luca did a great job on his blog explaining the differences between the standard NSX-T APIs and the NSX-T Policy APIs).

The NSX-T Policy APIs will also be the preferred APIs for consuming NSX-T on-premises, not just in the cloud.

The good news is that VMware has now released a new version of the provider that supports the NSX-T Policy APIs! The provider has been extended with resources for the Policy model, and all resources that target the Policy APIs carry a “policy” prefix.

For example, instead of using the resource “nsxt_ns_group” to create a network security group, we’ll be using “nsxt_policy_group”.

Requirements:

Update 2nd September 2020: The NSX-T Policy provider has been public for a while now, so the steps described in “Installation” are no longer required. Go straight to the “Configuration” section further below. There is no need for Git or Go, or to specify which version of the NSX-T provider you want to use. I am leaving the section in the blog for anyone who wants to understand how to build a provider from scratch.

What this means is that, when you run “terraform init”, the provider is now downloaded automatically.
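With releases now published on the Terraform Registry under the “vmware” namespace, a minimal configuration along these lines is enough for “terraform init” to fetch the provider (the version constraint shown is just an example – pin whichever release you need):

```hcl
# Terraform 0.13+ syntax; the provider lives in the "vmware" registry namespace.
terraform {
  required_providers {
    nsxt = {
      source  = "vmware/nsxt"
      version = ">= 3.1.0" # example constraint - adjust to your needs
    }
  }
}
```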

The provider is not yet ‘official’ – that is, it is not the one automatically downloaded by Terraform when you run “terraform init”. So, as we did for Terraform for VMware Cloud on AWS, you will need to download and build the provider manually.

That means we need Git and Go installed. This will no longer be required once the provider is officially published by HashiCorp.

Installation

First, we create a folder and change into it.

# Setting my GOPATH - this is optional:
bash$ export GOPATH=$HOME/go
# I create a folder for my terraform provider.
bash$ sudo mkdir -p $GOPATH/src/github.com/terraform-providers
# I move into this folder:
bash$ cd $GOPATH/src/github.com/terraform-providers

Then we manually pull the public code from GitHub with “git clone”, and download and install the Go packages and dependencies with “go get”.

bash$ git clone https://github.com/terraform-providers/terraform-provider-nsxt/
Cloning into 'terraform-provider-nsxt'...
remote: Enumerating objects: 19, done.
remote: Counting objects: 100% (19/19), done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 14405 (delta 5), reused 9 (delta 3), pack-reused 14386
Receiving objects: 100% (14405/14405), 15.94 MiB | 2.30 MiB/s, done.
Resolving deltas: 100% (7738/7738), done.
bash$ cd $GOPATH/src/github.com/terraform-providers/terraform-provider-nsxt
bash$ go get
go: downloading golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0
go: extracting golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0
bash$ go build -o terraform-provider-nsxt

bash$ ls -l
total 88632
-rw-r--r--@ 1 nicolasvibert  staff      2419 30 Jan 13:36 main.tf
-rwxr-xr-x  1 nicolasvibert  staff  45363924 30 Jan 13:34 terraform-provider-nsxt
-rw-r--r--@ 1 nicolasvibert  staff       246 23 Jan 19:10 terraform.tfvars
-rw-r--r--@ 1 nicolasvibert  staff       119 23 Jan 19:10 vars.tf

bash$ terraform providers 
.
└── provider.nsxt

A couple of things to pay attention to: if you run “terraform init” with the “nsxt” provider specified in your main.tf, it will by default use the public one published by HashiCorp (version 1.1.2). As explained above, version 1.1.2 is a few months old and doesn’t support NSX-T Policy:

bash$ terraform init

Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration, so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking changes, it is recommended to add version = "..." constraints to the corresponding provider blocks in configuration, with the constraint strings suggested below.

* provider.nsxt: version = "~> 1.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
bash$ terraform version
Terraform v0.12.16
+ provider.nsxt v1.1.2

And that’s why I had to add the version constraint below, to force Terraform to skip the public release and use the one I built with ‘go’ earlier. Note this version requirement can also be specified in a separate ‘versions.tf’ file.

provider "nsxt" {
  version              = "!= 1.1.2"
}

After re-initializing Terraform with ‘terraform init’, I’m on the ‘unversioned’ provider – the one I built from GitHub.

nvibert-a01:Terraform-NSX-T-Live nicolasvibert$ terraform version
Terraform v0.12.16
+ provider.nsxt (unversioned)

As it’s a third-party provider, it should ideally be installed in the dedicated plugins folder (typically ~/.terraform.d/plugins on macOS and Linux – see instructions here). In my tests, I kept the ‘terraform-provider-nsxt’ binary alongside my main.tf and it worked fine.

Configuration

And here is a working template you can use. Copy it and save it as main.tf. This template creates a security group, a service, and a Distributed Firewall security policy with a rule that uses the group and service.

provider "nsxt" {
  host                 = "${var.host}"
  vmc_token            = "${var.vmc_token}"
  allow_unverified_ssl = true
  enforcement_point    = "vmc-enforcementpoint"
}

resource "nsxt_policy_security_policy" "policy2" {
  domain       = "cgw"
  display_name = "policy2"
  description  = "Terraform provisioned Security Policy"
  category     = "Application"

  rule {
    display_name       = "rule name"
    source_groups      = ["${nsxt_policy_group.mygroup2.path}"]
    action             = "DROP"
    services           = ["${nsxt_policy_service.nico-service_l4port2.path}"]
    logged             = true
  }
}

resource "nsxt_policy_group" "mygroup2" {
  display_name = "my-policy-group2"
  description  = "Created from Terraform"
  domain       = "cgw"

  criteria {
    ipaddress_expression {
      ip_addresses = ["211.1.1.1", "212.1.1.2", "192.168.1.1-192.168.1.100"]
    }
  }
}

resource "nsxt_policy_service" "nico-service_l4port2" {
  description  = "L4 ports service provisioned by Terraform"
  display_name = "service-s2"

  l4_port_set_entry {
    display_name      = "TCP82"
    description       = "TCP port 82 entry"
    protocol          = "TCP"
    destination_ports = ["82"]
  }
}

You will also need a file called vars.tf to define your variables:

variable "host" {
  description = "VMC NSX-T REVERSE PROXY URL"
}

variable "vmc_token" {
  description = "VMC Token"
}

When using VMware Cloud on AWS, you need to specify the reverse proxy URL as the ‘host’ and the VMC token to be able to authenticate. For on-prem NSX-T, you need to specify your NSX-T manager, username and password to consume the resources.
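For on-prem, the provider block would look something like the sketch below – the hostname and credentials are placeholders, not values from my lab:

```hcl
# On-prem NSX-T: authenticate with the NSX Manager credentials
# instead of a VMC token. Host and username below are placeholders.
provider "nsxt" {
  host                 = "nsx-manager.example.local"
  username             = "admin"
  password             = var.nsx_password
  allow_unverified_ssl = true
}
```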

Finally, you will also need a file called terraform.tfvars in the same folder with your secrets:

host = "nsx-A-B-C-D.rp.vmwarevmc.com/vmc/reverse-proxy/api/orgs/AAAAAAAAAAAAAA/sddcs/BBBBBBBBB/sks-nsxt-manager"
vmc_token = "XXXXXXXXXXXXXXXXXXXXXXXXX"

Walkthrough

In the video (with audio) below, I walk through the whole process:

Terraform NSX-T Policy Provider

Alternatively, here are the commands I ran in the video:

# to initialize a Terraform working directory
terraform init
# to check the installed Terraform version
terraform version
# to validate the syntax of the HCL or JSON TF file
terraform validate
# to generate and show an execution plan
terraform plan
# to build or change the infrastructure
terraform apply

Just to make sure it actually worked, I put together a quick walkthrough video of the actual user interface (again, with audio):

Terraform-created resources on UI

VMware Cloud on AWS Tips

There are a few other things you need to know when you consume NSX-T Policy resources on VMware Cloud on AWS.

First, as I mentioned earlier, to authenticate against the VMC NSX-T instance, we need to use the API Token (and not a username/password). We also need to specify the Reverse Proxy URL (William Lam described the whole process in this excellent post).

For VMC, you also need to specify the NSX ‘enforcement point’; its value is always “vmc-enforcementpoint”.

provider "nsxt" {
  host                 = "nsx-A-B-C-D.rp.vmwarevmc.com/vmc/reverse-proxy/api/orgs/org-id/sddcs/sddc-id/sks-nsxt-manager"
  vmc_token            = "XXXXXXXXXXXXXXXXXXXX"
  allow_unverified_ssl = true
  enforcement_point    = "vmc-enforcementpoint"
}

With the settings above, you can now create network segments, groups, services and distributed firewall rules.
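As a sketch, a routed network segment attached to the Compute Gateway could look like the following (the display name and subnet are made up for illustration; on VMC, segments attach to the cgw Tier-1 gateway):

```hcl
# Hypothetical example: a routed segment attached to the VMC Compute Gateway.
resource "nsxt_policy_segment" "terraform_segment" {
  display_name      = "terraform-segment"
  connectivity_path = "/infra/tier-1s/cgw"

  subnet {
    cidr = "192.168.100.1/24" # segment gateway address, in CIDR notation
  }
}
```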

For the VMC edge firewalls, there is a bit more work to do.

Let’s say you want to create a new security rule on your edge firewalls, either the Compute Gateway or the Management Gateway.

Edge firewall rules are modelled in the provider like this:

resource "nsxt_policy_gateway_policy" "cgw_policy" {
  display_name = "default"
  description  = "Terraform provisioned Gateway Policy"
  category     = "LocalGatewayRules"
  domain       = "cgw"

  rule {
    display_name = "test-rule"
    scope        = ["/infra/labels/cgw-all"]
  }
}

An “nsxt_policy_gateway_policy” is essentially a firewall section on an NSX-T Edge Tier-1 gateway. Within the policy (a.k.a. section), the rules are nested as sub-resources of the policy configuration.

On VMware Cloud on AWS it is not possible to have multiple gateway policies: you get exactly one per gateway, and it is created by default.

Up until October 2020 and the Terraform provider 3.1.0, there were some limitations. I will explain what’s changed.

Before Terraform Provider 3.1.0

To update the rules, we needed to import the existing gateway policy and update it accordingly with the right security rules.

Terraform Import

The terraform import command, run from the terminal, brings an already-created resource under Terraform control.

Import the configuration with:

terraform import nsxt_policy_gateway_policy.policy1 cgw/default

Remember to declare the resource in your main.tf first, otherwise you will get an error message:

$ terraform import nsxt_policy_gateway_policy.policy1 cgw/default 
Error: resource address "nsxt_policy_gateway_policy.policy1" does not exist in the configuration.

Before importing this resource, please create its configuration in the root module. For example:

resource "nsxt_policy_gateway_policy" "policy1" {
  # (resource arguments)
}

I added the nsxt_policy_gateway_policy called cgw_policy in my main.tf and re-imported the resource, this time successfully.

resource "nsxt_policy_gateway_policy" "mgw_policy" {
  category     = "LocalGatewayRules"
  display_name = "default"
  domain       = "mgw"
}

resource "nsxt_policy_gateway_policy" "cgw_policy" {
  category        = "LocalGatewayRules"
  description     = "Terraform provisioned Gateway Policy"
  display_name    = "default"
  domain          = "cgw"
}

I do it for both the cgw and the mgw:

$ terraform import nsxt_policy_gateway_policy.cgw_policy cgw/default
nsxt_policy_gateway_policy.cgw_policy: Importing from ID "cgw/default"...
nsxt_policy_gateway_policy.cgw_policy: Import prepared!
  Prepared nsxt_policy_gateway_policy for import
nsxt_policy_gateway_policy.cgw_policy: Refreshing state... [id=default]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

nvibert-a01:Terraform-NSX-T-Live nicolasvibert$ terraform import nsxt_policy_gateway_policy.mgw_policy mgw/default
nsxt_policy_gateway_policy.mgw_policy: Importing from ID "mgw/default"...
nsxt_policy_gateway_policy.mgw_policy: Import prepared!
  Prepared nsxt_policy_gateway_policy for import
nsxt_policy_gateway_policy.mgw_policy: Refreshing state... [id=default]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Great – the MGW and CGW are now under Terraform control and we can go ahead and update the rules using Terraform.

When I then display the existing configuration with “terraform show”, I can see which gateway firewall rules were created by default on the MGW and CGW:

$ terraform show
# nsxt_policy_gateway_policy.cgw_policy:
resource "nsxt_policy_gateway_policy" "cgw_policy" {
    category        = "LocalGatewayRules"
    description     = "Terraform provisioned Gateway Policy"
    display_name    = "default"
    domain          = "cgw"
    id              = "default"
    locked          = false
    nsx_id          = "default"
    path            = "/infra/domains/cgw/gateway-policies/default"
    revision        = 1
    sequence_number = 0
    stateful        = true
    tcp_strict      = false

    rule {
        action                = "ALLOW"
        destination_groups    = []
        destinations_excluded = false
        direction             = "IN_OUT"
        disabled              = false
        display_name          = "Any to Any Allow"
        ip_version            = "IPV4_IPV6"
        logged                = false
        profiles              = []
        revision              = 0
        rule_id               = 0
        scope                 = [
            "/infra/labels/cgw-all",
        ]
        sequence_number       = 0
        services              = []
        source_groups         = []
        sources_excluded      = false
    }
}


# nsxt_policy_gateway_policy.mgw_policy:
resource "nsxt_policy_gateway_policy" "mgw_policy" {
    category        = "LocalGatewayRules"
    display_name    = "default"
    domain          = "mgw"
    id              = "default"
    locked          = false
    nsx_id          = "default"
    path            = "/infra/domains/mgw/gateway-policies/default"
    revision        = 1
    sequence_number = 0
    stateful        = true
    tcp_strict      = false
    rule {
        action                = "ALLOW"
        destination_groups    = []
        destinations_excluded = false
        direction             = "IN_OUT"
        disabled              = false
        display_name          = "ESXi Outbound Rule"
        ip_version            = "IPV4_IPV6"
        logged                = false
        profiles              = []
        revision              = 0
        rule_id               = 0
        scope                 = [
            "/infra/labels/mgw",
        ]
        sequence_number       = 1
        services              = []
        source_groups         = [
            "/infra/domains/mgw/groups/ESXI",
        ]
        sources_excluded      = false
    }
    rule {
        action                = "ALLOW"
        destination_groups    = []
        destinations_excluded = false
        direction             = "IN_OUT"
        disabled              = false
        display_name          = "vCenter Outbound Rule"
        ip_version            = "IPV4_IPV6"
        logged                = false
        profiles              = []
        revision              = 0
        rule_id               = 0
        scope                 = [
            "/infra/labels/mgw",
        ]
        sequence_number       = 2
        services              = []
        source_groups         = [
            "/infra/domains/mgw/groups/VCENTER",
        ]
        sources_excluded      = false
    }
}

In my Terraform template, I’m going to modify the configuration of the mgw_policy slightly and add a rule to allow access to my vCenter:

resource "nsxt_policy_gateway_policy" "mgw_policy" {
  category     = "LocalGatewayRules"
  display_name = "default"
  domain       = "mgw"
  rule {
    action = "ALLOW"
    destination_groups = [
      "/infra/domains/mgw/groups/VCENTER",
    ]
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "vCenter Inbound set up by Terraform"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope = [
      "/infra/labels/mgw",
    ]
    services = [
      "/infra/services/HTTPS",
      "/infra/services/ICMP-ALL",
    ]
    source_groups    = []
    sources_excluded = false
  }
  rule {
    action                = "ALLOW"
    destination_groups    = []
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "ESXi Outbound Rule"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope = [
      "/infra/labels/mgw",
    ]
    services = []
    source_groups = [
      "/infra/domains/mgw/groups/ESXI",
    ]
    sources_excluded = false
  }
  rule {
    action                = "ALLOW"
    destination_groups    = []
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "vCenter Outbound Rule"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope = [
      "/infra/labels/mgw",
    ]
    services = []
    source_groups = [
      "/infra/domains/mgw/groups/VCENTER",
    ]
    sources_excluded = false
  }
}

When I do a “terraform apply”, the rule is applied as expected.

Terraform MGW Policy update

From 3.1.0 onwards

A new option was added for VMC customers. Instead of using “import” – admittedly not the most elegant workflow, since it has to be run as a separate command rather than expressed in the Terraform configuration – we can use the following (doc here).

The configuration below creates the same result as above but saves the hassle of importing, making the setup much cleaner. It simply creates the rules inside the existing predefined policy. Make sure the default rules are included if you want to keep them (a configuration like the one below would otherwise overwrite them).

resource "nsxt_policy_predefined_gateway_policy" "test" {
  path = "/infra/domains/mgw/gateway-policies/default"
  rule {
    action = "ALLOW"
    destination_groups = [
      "/infra/domains/mgw/groups/VCENTER",
    ]
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "vCenter Inbound set up by Terraform"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope = [
      "/infra/labels/mgw",
    ]
    services = [
      "/infra/services/HTTPS",
      "/infra/services/ICMP-ALL",
    ]
    source_groups    = []
    sources_excluded = false
  }
  rule {
    action                = "ALLOW"
    destination_groups    = []
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "ESXi Outbound Rule"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope = [
      "/infra/labels/mgw",
    ]
    services = []
    source_groups = [
      "/infra/domains/mgw/groups/ESXI",
    ]
    sources_excluded = false
  }
  rule {
    action                = "ALLOW"
    destination_groups    = []
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "vCenter Outbound Rule"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope = [
      "/infra/labels/mgw",
    ]
    services = []
    source_groups = [
      "/infra/domains/mgw/groups/VCENTER",
    ]
    sources_excluded = false
  }
}

This has been the third post in my HashiCorp Terraform series.

The next post in this series will focus on getting all these Terraform providers to work together…

Thanks for reading.
