Networking on VMC on AWS – Internal Networking

This short post describes the internal networking configuration within the VMware Cloud on AWS SDDC and how NSX is leveraged and configured.

In VMware Cloud on AWS, we have two logical domains – one for “Management Resources” (where the ESXi hosts, vCenter, NSX Manager and NSX Controllers are deployed) and one for “Compute Resources”, where the customer workload VMs are deployed.

Internal Networking on VMC on AWS – High Level Logical View

NSX in VMware Cloud on AWS

The whole VMware Cloud on AWS Networking setup is automated as part of infrastructure provisioning.

All the NSX components, such as the NSX Manager and Controllers, are deployed in the Management Pool.

NSX Components in Management Pool

Below is a more detailed view of how NSX is deployed within VMware Cloud on AWS (diagram courtesy of Gilles Chekroun).

Detailed View of Networking Within VMware Cloud on AWS

As you can see above, we have a Tier 0 router (running on the Edge Appliance) – it’s our core router, with an uplink to the Internet via an AWS Internet Gateway, and it’s where VPNs and Direct Connect terminate. It is also where we interconnect with native AWS services over the Elastic Network Interface (ENI).

The Tier 0 router connects to two Tier 1 routers – the MGW router and the CGW router (read more on these here). All the internal routing between the Tier 0 router and everything else is configured automatically and hidden from customers.

Logical Segments

VMware Cloud on AWS administrators can decide which subnets the compute VMs will be placed on.

They can use the Cloud Console or the APIs to create Network Segments (also sometimes referred to as Logical Switches or Logical Networks).

Just go to ‘Cloud Console / Networking & Security / Network / Segments’ to create new network segments to connect your VMs to.

You can enable DHCP on each network segment (note that the DHCP server provided is locally connected to the CGW).
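For those who prefer automation, the following is a minimal sketch of creating such a segment programmatically with Python against the NSX-T Policy API exposed through the SDDC’s NSX reverse proxy URL. The proxy URL, token, segment name, addresses and payload fields are illustrative assumptions rather than authoritative values; check the VMware Cloud on AWS API reference for the exact request format.

```python
import requests

# Placeholders (assumptions): the SDDC's NSX reverse proxy URL from the Cloud Console
# and a CSP access token obtained from a VMware Cloud Services API token.
NSX_PROXY_URL = "https://<nsx-reverse-proxy-url>"
HEADERS = {"csp-auth-token": "<csp-access-token>", "Content-Type": "application/json"}

# Hypothetical routed segment attached to the Compute Gateway (CGW), with DHCP
# enabled by supplying a DHCP range on the subnet. Field names follow the NSX-T
# Policy API segment schema but should be treated as a sketch.
segment = {
    "display_name": "app-segment-01",
    "subnets": [
        {
            "gateway_address": "10.74.1.1/24",         # default gateway + prefix length
            "dhcp_ranges": ["10.74.1.2-10.74.1.254"],  # optional: enables DHCP on the segment
        }
    ],
}

resp = requests.put(
    f"{NSX_PROXY_URL}/policy/api/v1/infra/tier-1s/cgw/segments/app-segment-01",
    headers=HEADERS,
    json=segment,
)
resp.raise_for_status()
print(resp.json())
```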

The Compute Gateway did not support DHCP Relay until May 2019. Read more about DHCP Relay in the following blog post.

The “Type” can be either “Routed”, “Extended” or “Disconnected”.

Routed networks are the default type. These networks use the SDDC Compute Gateway as the default gateway. Routed networks have connectivity to other logical networks in the same SDDC and to external network services such as the SDDC firewall and NAT.

Extended networks require a layer 2 Virtual Private Network (L2VPN), which provides a secure communications tunnel between an on-premises network and one in your cloud SDDC.

Disconnected networks are not connected to the CGW. A disconnected network is an isolated segment: workloads on other network segments cannot reach workloads on it. It is used by HCX but can also be used for other purposes (such as application heartbeats).

When using a “Routed” network, you must specify the default gateway and the prefix length. By defining “10.74.1.1/24” as the Gateway/Prefix Length, we automatically derive the CIDR 10.74.1.0/24 and, if DHCP is required, start allocating IP addresses from 10.74.1.2 upwards.
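To make that derivation concrete, here is a small Python sketch using only the standard library’s ipaddress module; it reproduces the logic described above (the variable names are purely illustrative).

```python
import ipaddress

# Gateway/Prefix Length as entered in the Cloud Console, e.g. "10.74.1.1/24".
gateway_with_prefix = "10.74.1.1/24"

iface = ipaddress.ip_interface(gateway_with_prefix)
network = iface.network   # 10.74.1.0/24 -> the segment CIDR
gateway = iface.ip        # 10.74.1.1    -> default gateway for the VMs

# The first usable host after the gateway is where DHCP allocation would start
# (10.74.1.2 in this example), per the behaviour described above.
first_dhcp_address = gateway + 1

print(f"CIDR:            {network}")
print(f"Default gateway: {gateway}")
print(f"DHCP starts at:  {first_dhcp_address}")
```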

Network Segments

These networks can connect to the Internet using NAT through the CGW when the appropriate firewall rules are defined on it. Note that any bandwidth charges incurred by traffic leaving the CGW will be billed back through your VMware Cloud on AWS account.
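As an illustration of such a rule, below is a hedged Python sketch that would add an outbound “allow” rule on the Compute Gateway firewall via the NSX-T Policy API. The policy path, group paths, field names and the pre-existing ‘app-segment-group’ are all assumptions made for illustration; verify them against the VMware Cloud on AWS API reference before relying on them.

```python
import requests

# Placeholders (assumptions): same NSX reverse proxy URL and CSP token as in the
# earlier segment-creation sketch.
NSX_PROXY_URL = "https://<nsx-reverse-proxy-url>"
HEADERS = {"csp-auth-token": "<csp-access-token>", "Content-Type": "application/json"}

# Hypothetical CGW firewall rule allowing a workload segment out to the Internet.
# The group path and rule fields below follow the NSX-T Policy API model but are
# assumptions, not a definitive payload.
rule = {
    "display_name": "app-segment-outbound",
    "action": "ALLOW",
    "source_groups": ["/infra/domains/cgw/groups/app-segment-group"],  # hypothetical group
    "destination_groups": ["ANY"],
    "services": ["ANY"],
    "scope": ["/infra/labels/cgw-all"],  # apply on all CGW interfaces (assumption)
    "sequence_number": 10,
}

resp = requests.put(
    f"{NSX_PROXY_URL}/policy/api/v1/infra/domains/cgw/gateway-policies/default"
    f"/rules/app-segment-outbound",
    headers=HEADERS,
    json=rule,
)
resp.raise_for_status()
```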

Connectivity between routed networks within the SDDC is routed locally, so traffic stays within the SDDC. For extended networks, however, the default gateway must remain on the on-premises side, so traffic has to go back to the on-premises network to be routed.

Default Gateway remains on-prem for extended network

Virtual Distributed Switch – NSX-V and NSX-T

You might come across various documents and blog posts about VMware Cloud on AWS and its internal networking that contain conflicting information. This is explained by the fact that early iterations of VMware Cloud on AWS used ‘NSX for vSphere’ (NSX-V), whereas any new customer on VMC is now based on ‘NSX-T’, as it gives us access to features we didn’t have with ‘NSX-V’.

As a VMC customer, this should be transparent to you, but it explains why some of the documentation you come across might be inconsistent.

Some of the information shared below may be more detail than you actually need, but it might satisfy the more curious readers.

On ‘NSX-V’ deployments of VMware Cloud on AWS, a vSphere Distributed Switch (VDS) is deployed and logical switches/segments are created on top of it. You cannot create additional port groups on the distributed switch; instead, you create new logical networks.

NSX-V-Based VMware Cloud on AWS leverages the VDS

On ‘NSX-T’ deployments of VMware Cloud on AWS (which are, as of 2019, the default configuration), you will not see a VDS in the Cloud vCenter because we use the NSX-T Virtual Distributed Switch (N-VDS). Remember that NSX-T is not tied to vCenter the same way NSX-V was, as NSX-T can be deployed across various hypervisors, bare-metal servers, native cloud instances, etc.

NSX-T-Based VMware Cloud on AWS leverages the N-VDS

Instead, we can see the logical networks (created on the Cloud Console) in the ‘Networks’ section.

NSX-T-Based VMware Cloud on AWS Networks