As if I needed more ideas for my blog, William Lam challenged me to test a nice little tool he had just blogged about.
The blog was about leveraging a cool tool – vcsim – that records an existing vCenter inventory, restores a fake version of it and lets you interact with this simulated vCenter through APIs as if you were making real API calls.
This is pretty awesome – read the blog and have a go!
As William knows my fondness for Terraform, he gave me a challenge I couldn’t refuse.
The instructions on his blog were perfectly clear and I had saved my VMware Cloud on AWS vCenter inventory in minutes:
nvibert-a01$ export GOPATH=/Users/nicolasvibert/gocode
nvibert-a01$ echo $GOPATH
/Users/nicolasvibert/gocode
nvibert-a01$ go get github.com/vmware/govmomi/govc
nvibert-a01$ go get github.com/vmware/govmomi/vcsim
nvibert-a01$
nvibert-a01$ export GOVC_PASSWORD=password
nvibert-a01$ export GOVC_INSECURE=1
nvibert-a01$ export GOVC_USERNAME=cloudadmin@vmc.local
nvibert-a01$ export GOVC_URL=https://vcenter.sddc-A-B-C-D.vmwarevmc.com/sdk
nvibert-a01$ $GOPATH/bin/govc object.save
Saved 103 total objects to "vcsim-vcenter.sddc-A-B-C-D.vmwarevmc.com", including:
Datastore: 2
Folder: 10
HostDatastoreBrowser: 2
OpaqueNetwork: 19
ResourcePool: 3
VirtualMachine: 13
Now that I have saved my vCenter’s inventory, I can simulate it by running the following command:
nvibert-a01$ $GOPATH/bin/vcsim -load vcsim-vcenter.sddc-A-B-C-D.vmwarevmc.com/
export GOVC_URL=https://user:pass@127.0.0.1:8989/sdk GOVC_SIM_PID=65075
My fake vCenter is now running in the background. Open a new terminal and start getting your Terraform ready.
Next, I grabbed my previous Terraform configs – check any of my previous Terraform posts for an explanation of what these files are:
variables.tf
variable "data_center" { default = "SDDC-Datacenter" } variable "cluster" { default = "Cluster-1" } variable "workload_datastore" { default = "WorkloadDatastore" } variable "compute_pool" { default = "Compute-ResourcePool" } variable "vsphere_user" {} variable "vsphere_password" {} variable "vsphere_server" {} variable "Subnet13_name" { default = "seg13" } variable "subnet13" { default = "13.13.13.0/24" }
terraform.tfvars
vsphere_user = "foo" vsphere_password = "bar" vsphere_server = "localhost:8989"
You can specify any value in the user/password as vcsim accepts anything.
Make sure you specify the right port (vcsim uses 8989 by default).
main.tf
provider "vsphere" { user = var.vsphere_user password = var.vsphere_password vsphere_server = var.vsphere_server allow_unverified_ssl = true } data "vsphere_datacenter" "dc" { name = var.data_center } data "vsphere_compute_cluster" "cluster" { name = var.cluster datacenter_id = data.vsphere_datacenter.dc.id } data "vsphere_datastore" "datastore" { name = var.workload_datastore datacenter_id = data.vsphere_datacenter.dc.id } data "vsphere_resource_pool" "pool" { name = var.compute_pool datacenter_id = data.vsphere_datacenter.dc.id } data "vsphere_network" "network" { name = "sddc-cgw-network-1" datacenter_id = data.vsphere_datacenter.dc.id } data "vsphere_network" "network13" { name = var.Subnet13_name datacenter_id = data.vsphere_datacenter.dc.id } data "vsphere_virtual_machine" "template" { name = "Blue-VM-1" datacenter_id = data.vsphere_datacenter.dc.id } resource "vsphere_folder" "folder" { path = "terraform-test-folder" type = "vm" datacenter_id = data.vsphere_datacenter.dc.id } resource "vsphere_tag_category" "environment" { name = "environment" cardinality = "SINGLE" associable_types = [ "VirtualMachine" ] } resource "vsphere_tag_category" "region" { name = "region" cardinality = "SINGLE" associable_types = [ "VirtualMachine" ] } resource "vsphere_tag" "environment" { name = "test-dev" category_id = vsphere_tag_category.environment.id } resource "vsphere_tag" "region" { name = "UK" category_id = vsphere_tag_category.region.id } resource "vsphere_virtual_machine" "vm" { name = "terraform-test" folder = "Workloads" resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id datastore_id = data.vsphere_datastore.datastore.id firmware = data.vsphere_virtual_machine.template.firmware wait_for_guest_net_timeout = 0 wait_for_guest_ip_timeout = 0 num_cpus = 2 memory = 4096 guest_id = data.vsphere_virtual_machine.template.guest_id annotation = data.vsphere_virtual_machine.template.disks.0.size network_interface { network_id = data.vsphere_network.network.id adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0] } disk { label = "disk0" size = data.vsphere_virtual_machine.template.disks.0.size eagerly_scrub = data.vsphere_virtual_machine.template.disks.0.eagerly_scrub thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned } scsi_type = data.vsphere_virtual_machine.template.scsi_type clone { template_uuid = data.vsphere_virtual_machine.template.id } tags = [ vsphere_tag.environment.id, vsphere_tag.region.id, ] }
The Terraform configuration executes some read-only lookups (that’s what the data blocks are for) to fetch the IDs of the (fake) resources already running, and it also creates a new folder, tags and a new VM:
bash-3.2$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # vsphere_folder.folder will be created
  + resource "vsphere_folder" "folder" {
      + datacenter_id = "datacenter-3"
      + id            = (known after apply)
      + path          = "terraform-test-folder"
      + type          = "vm"
    }

  # vsphere_tag.environment will be created
  + resource "vsphere_tag" "environment" {
      + category_id = (known after apply)
      + id          = (known after apply)
      + name        = "test-dev"
    }

  # vsphere_tag.region will be created
  + resource "vsphere_tag" "region" {
      + category_id = (known after apply)
      + id          = (known after apply)
      + name        = "UK"
    }

  # vsphere_tag_category.environment will be created
  + resource "vsphere_tag_category" "environment" {
      + associable_types = [
          + "VirtualMachine",
        ]
      + cardinality      = "SINGLE"
      + id               = (known after apply)
      + name             = "environment"
    }

  # vsphere_tag_category.region will be created
  + resource "vsphere_tag_category" "region" {
      + associable_types = [
          + "VirtualMachine",
        ]
      + cardinality      = "SINGLE"
      + id               = (known after apply)
      + name             = "region"
    }

  # vsphere_virtual_machine.vm will be created
  + resource "vsphere_virtual_machine" "vm" {
      + annotation                              = "20"
      + boot_retry_delay                        = 10000
      + change_version                          = (known after apply)
      + cpu_limit                               = -1
      + cpu_share_count                         = (known after apply)
      + cpu_share_level                         = "normal"
      + datastore_id                            = "datastore-48"
      + default_ip_address                      = (known after apply)
      + ept_rvi_mode                            = "automatic"
      + firmware                                = "bios"
      + folder                                  = "Workloads"
      + force_power_off                         = true
      + guest_id                                = "other3xLinux64Guest"
      + guest_ip_addresses                      = (known after apply)
      + hardware_version                        = (known after apply)
      + host_system_id                          = (known after apply)
      + hv_mode                                 = "hvAuto"
      + id                                      = (known after apply)
      + ide_controller_count                    = 2
      + imported                                = (known after apply)
      + latency_sensitivity                     = "normal"
      + memory                                  = 4096
      + memory_limit                            = -1
      + memory_share_count                      = (known after apply)
      + memory_share_level                      = "normal"
      + migrate_wait_timeout                    = 30
      + moid                                    = (known after apply)
      + name                                    = "terraform-test"
      + num_cores_per_socket                    = 1
      + num_cpus                                = 2
      + poweron_timeout                         = 300
      + reboot_required                         = (known after apply)
      + resource_pool_id                        = "resgroup-9"
      + run_tools_scripts_after_power_on        = true
      + run_tools_scripts_after_resume          = true
      + run_tools_scripts_before_guest_shutdown = true
      + run_tools_scripts_before_guest_standby  = true
      + sata_controller_count                   = 0
      + scsi_bus_sharing                        = "noSharing"
      + scsi_controller_count                   = 1
      + scsi_type                               = "pvscsi"
      + shutdown_wait_timeout                   = 3
      + storage_policy_id                       = (known after apply)
      + swap_placement_policy                   = "inherit"
      + tags                                    = (known after apply)
      + uuid                                    = (known after apply)
      + vapp_transport                          = (known after apply)
      + vmware_tools_status                     = (known after apply)
      + vmx_path                                = (known after apply)
      + wait_for_guest_ip_timeout               = 0
      + wait_for_guest_net_routable             = true
      + wait_for_guest_net_timeout              = 0

      + clone {
          + template_uuid = "4205bc4d-6845-d929-9840-4a46b0b8358c"
          + timeout       = 30
        }

      + disk {
          + attach            = false
          + controller_type   = "scsi"
          + datastore_id      = "<computed>"
          + device_address    = (known after apply)
          + disk_mode         = "persistent"
          + disk_sharing      = "sharingNone"
          + eagerly_scrub     = false
          + io_limit          = -1
          + io_reservation    = 0
          + io_share_count    = 0
          + io_share_level    = "normal"
          + keep_on_remove    = false
          + key               = 0
          + label             = "disk0"
          + path              = (known after apply)
          + size              = 20
          + storage_policy_id = (known after apply)
          + thin_provisioned  = true
          + unit_number       = 0
          + uuid              = (known after apply)
          + write_through     = false
        }

      + network_interface {
          + adapter_type          = "vmxnet3"
          + bandwidth_limit       = -1
          + bandwidth_reservation = 0
          + bandwidth_share_count = (known after apply)
          + bandwidth_share_level = "normal"
          + device_address        = (known after apply)
          + key                   = (known after apply)
          + mac_address           = (known after apply)
          + network_id            = "network-o26"
        }
    }

Plan: 6 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
It worked seamlessly and I can create a fake VM in my fake vCenter. Nice!
bash-3.2$ terraform apply -auto-approve
vsphere_tag_category.environment: Creating...
vsphere_folder.folder: Creating...
vsphere_tag_category.region: Creating...
vsphere_tag_category.environment: Creation complete after 0s [id=urn:vmomi:InventoryServiceCategory:5279eb8e-9da6-4b7c-9b7f-d3136b1a601b:GLOBAL]
vsphere_tag_category.region: Creation complete after 0s [id=urn:vmomi:InventoryServiceCategory:a26fef4c-2a36-42f7-8f7e-ecdd4f253967:GLOBAL]
vsphere_folder.folder: Creation complete after 0s [id=folder-2]
vsphere_tag.environment: Creating...
vsphere_tag.region: Creating...
vsphere_tag.region: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:1185d19e-6538-4b14-8c0c-2ba1983f8726:GLOBAL]
vsphere_tag.environment: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:5be715a9-19fa-4866-815e-ff4edc5d66a9:GLOBAL]
vsphere_virtual_machine.vm: Creating...
vsphere_virtual_machine.vm: Creation complete after 1s [id=5c258e7a-a9b3-597a-9705-c2750ff84318]

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

What’s even cooler is that you can point other automation tools at this simulated environment. For example, with PowerCLI I can see the fake resources I just created with Terraform 🤯
PS /Users/nicolasvibert> Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
PS /Users/nicolasvibert> Connect-VIServer -Server localhost -Port 8989 -User tt -Password Bar

Name                           Port  User
----                           ----  ----
localhost                      8989  tt

PS /Users/nicolasvibert> Get-VM -Name terraform-test

Name                 PowerState Num CPUs MemoryGB
----                 ---------- -------- --------
terraform-test       PoweredOn  2        4.000
If you want to try this working Terraform configuration against my SDDC, head over to my GitHub repo here. Install vcsim using William’s instructions, clone my repo and you can simulate what I’ve just done above.
What about Python?
There is a Python binding for the vSphere Management SDK called pyVmomi, along with a number of sample scripts located here. These scripts are quite simple to use with any vCenter. Let’s see if they work with a fake one too!
Instead of running the scripts locally, I thought I’d check whether a recorded vCenter can really be shared across environments.
So I fired up an AWS EC2 instance, uploaded my vcsim files with scp, and installed Python, pip, git, Go and finally pyVmomi:
[ec2-user@ip-172-201-10-100 ~]$ pip install pyvmomi
Then I downloaded the samples project:
[ec2-user@ip-172-201-10-100 ~]$ git clone https://github.com/vmware/pyvmomi-community-samples.git
I started the fake vCenter, like I did before:
[ec2-user@ip-172-201-10-100 samples]$ $GOPATH/bin/vcsim -load vcsim-vcenter.sddc-52-39-251-191.vmwarevmc.com
export GOVC_URL=https://user:pass@127.0.0.1:8989/sdk GOVC_SIM_PID=17458
Then I ran one of the sample Python scripts, which simply collects the list of VMs from vCenter. As you can see, running the script is very easy – you just specify the vCenter server (here, the simulator runs locally on the EC2 instance, so 127.0.0.1), the port 8989, and any username and password.
[ec2-user@ip-172-201-10-100 samples]$ python3 getallvms.py -s 127.0.0.1 -u user -p password --port 8989
Name          : Easy_AVI_Appliance_1.0.0
Template      : False
Path          : [WorkloadDatastore] 14bae15f-2d37-d93a-4995-06a0a848f92f/Easy_AVI_Appliance_1.0.0.vmx
Guest         : Other (32-bit)
Instance UUID : 500546e9-7133-1d53-2233-05c368cb5317
Bios UUID     : 42051220-061a-c630-6815-4a839d99c97d
Annotation    : Version: 1.0.0
State         : poweredOff
VMware-tools  : toolsNotRunning
IP            : None

Name          : vcenter
Template      : False
Path          : [vsanDatastore] a1e4dc5f-7b69-f688-bd2a-06a0a848f92f/vcenter.vmx
Guest         : Other 3.x or later Linux (64-bit)
Instance UUID : 52dc1e51-e83a-f46f-ef35-5e4487a289a7
Bios UUID     : 564d9b67-0eb3-8687-a723-cc6ecbbae3a6
Annotation    : VMware vCenter Server Appliance
State         : poweredOn
VMware-tools  : toolsOk
IP            : 10.10.1.196
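Under the hood, the connection logic these samples rely on boils down to just a few lines of pyVmomi. Here is a minimal sketch of the same idea – the script name is mine and the credentials are arbitrary (vcsim accepts anything), and it assumes the simulator is listening on 127.0.0.1:8989 as above:

# list_vms_sketch.py – minimal pyVmomi sketch (hypothetical name),
# assuming vcsim is listening on 127.0.0.1:8989
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# vcsim presents a self-signed certificate, so skip verification
context = ssl._create_unverified_context()

# vcsim accepts any username/password combination
si = SmartConnect(host="127.0.0.1", port=8989,
                  user="user", pwd="pass", sslContext=context)
try:
    content = si.RetrieveContent()
    # Walk the full inventory and collect every VirtualMachine object
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)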
That’s it!
Use Cases
I can see huge potential for this, across a number of use cases:
Homelab
This could be awesome for anyone who… wants to get rid of their homelab 😁😁
I’m joking but then… if you want to run some API tests against a vCenter environment… why not download a copy of one of William’s labs from his GitHub and simply run your API commands against it?
Learning
Anyone with vSphere experience who wants to learn how to use APIs and automation can leverage this without having to worry about breaking a real environment.
As I have many upcoming plans to teach and encourage others to learn about APIs and automation (including an upcoming session with Patrick Kremer), I can see myself using this to teach folks how to use PowerCLI, Python and Terraform without having to provide a dedicated live vCenter. It would also provide much more predictable results.
Testing Changes
Customers who want to validate changes to their live vCenter environment can simply record it, then execute their API scripts against the simulated vcsim version before running them against the live environment.
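In practice, that can be as simple as parameterizing the target in your scripts, so the dry run against vcsim and the real run differ by nothing but a flag. A hypothetical sketch – the --simulator flag, the hostname and the credentials below are placeholders of mine, not part of any VMware tooling:

# dry_run_sketch.py – run the same change script against vcsim first,
# then against the live vCenter (names and flag are hypothetical)
import argparse
import ssl

from pyVim.connect import SmartConnect, Disconnect

parser = argparse.ArgumentParser()
parser.add_argument("--host", default="vcenter.example.com")  # placeholder hostname
parser.add_argument("--simulator", action="store_true",
                    help="target a local vcsim instance instead of the live vCenter")
args = parser.parse_args()

# vcsim listens on 8989 by default and accepts any credentials
host, port = ("127.0.0.1", 8989) if args.simulator else (args.host, 443)
si = SmartConnect(host=host, port=port, user="user", pwd="pass",
                  sslContext=ssl._create_unverified_context())

# ... the actual create/update API calls would go here, identical for both targets ...
print("Connected to", host, "server time:", si.CurrentTime())
Disconnect(si)

Run it once with --simulator against the recorded inventory; if the script behaves as expected, run it again without the flag against the real thing.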
Obviously the simulated environment will have restrictions and won’t be a perfect replacement for a real vCenter… but this is pretty good, right? What I’d love would be to see a similar tool for NSX where we could simulate networking and security changes 😁
Thanks for reading.
Great article(s) Nico (and William).
“What I’d love would be to see a similar tool for NSX where we could simulate networking and security changes”
I couldn’t agree more! As an SDN specialist, the ability to:
– Showcase production changes without risk.
– Show the power of automation.
– Audit environments with little access.
– etc.
… would be fantastic! It could help paint the NSX picture to so many customers.
I really look forward to the NSX version of vcsim.