When customers start using cloud infrastructure, there are five key networking components they want to replicate within their virtual cloud environment:
Out of the box, VMware Cloud on AWS gives you the first four components, but load-balancing was somewhat lacking. Yes, we could use virtual appliances from vendors like F5. Yes, we could deploy Avi load-balancers on VMC, as I blogged about a couple of years ago.
Yes, we could even automate most of it like I did with Nicolas Bayle, as we blogged together late last year.
But while automation was great, Nicolas and I thought a slick user interface would help those who may not be familiar with the automation tools we were using (Terraform and Ansible).
The result is a new Fling: EasyAvi (a.k.a. Easy Deploy for NSX Load Balancing).
Before we talk about how the product was built, let’s talk about why we decided to build it and for whom.
As mentioned above, on the VMware Cloud console, we can easily build VPNs, networks and firewall security policies. But we lacked the ability to natively install a load-balancer.
And anyone who wants to build highly reliable applications will need a load-balancer.
So we decided to build something for the dozens of VMware Cloud on AWS customers who have been asking for a load-balancing solution.
While customers could deploy Avi manually, we could see reluctance from some customers – possibly because they didn’t know the Avi brand or because they were intimidated by some of the Avi terminology and architecture.
We think packaging it in a Fling and making it extremely simple to install and deploy (25 minutes from start to finish) could turn this tool into the preferred method of deploying a load-balancer in VMware Cloud on AWS – and potentially any vSphere environment.
Hence the nickname “EasyAvi”.
Once the Fling has been downloaded, install it like you would any other OVA and access it over HTTPS.
The first step is to input your API token. That’s pretty much the only requirement.
Using the API token, we retrieve the list of organizations you belong to. Select your org.
Select the SDDC you want to deploy the load-balancer in.
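Under the hood, these first screens map to the public VMware Cloud Services APIs: the API token is exchanged for an access token, which is then used to list your orgs and SDDCs. Here is a minimal sketch of that flow – the endpoint paths and header name are based on the public VMC API documentation and are assumptions on our part, not necessarily the exact calls the Fling makes:

```python
# Hedged sketch of the token -> orgs -> SDDCs lookup the Fling's first
# screens imply. Endpoints assumed from the public VMC API docs.
import json
import urllib.parse
import urllib.request

CSP = "https://console.cloud.vmware.com"
VMC = "https://vmc.vmware.com"

def auth_request(api_token: str) -> urllib.request.Request:
    """Build the CSP call that trades an API (refresh) token for an access token."""
    data = urllib.parse.urlencode({"refresh_token": api_token}).encode()
    return urllib.request.Request(
        f"{CSP}/csp/gateway/am/api/auth/api-tokens/authorize",
        data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

def orgs_request(access_token: str) -> urllib.request.Request:
    """Build the call that lists the orgs the token belongs to."""
    return urllib.request.Request(
        f"{VMC}/vmc/api/orgs",
        headers={"csp-auth-token": access_token},
    )

def sddcs_request(access_token: str, org_id: str) -> urllib.request.Request:
    """Build the call that lists the SDDCs in one org."""
    return urllib.request.Request(
        f"{VMC}/vmc/api/orgs/{org_id}/sddcs",
        headers={"csp-auth-token": access_token},
    )

def fetch(req: urllib.request.Request):
    """Send a prepared request and decode the JSON response."""
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The requests are built separately from being sent, so you can inspect (or log) exactly what would go over the wire before calling `fetch`.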
You can create a public IP for the controller (brief reminder – the Avi controller is essentially the management/control plane while the service engines provide the data plane) and automatically add Distributed Firewall Exclusion lists for the Service Engines. If needed, change the controller settings.
Once you press Next, you can create a Service Engine. You can use the default settings or change them as needed. You can add multiple SE groups if you like.
Then, you can even deploy a test application (Internet-facing or not) to check how Avi load-balances its traffic.
Select the networks for the management of the controller and for the VIP IP addresses (and, assuming you are deploying the sample app, for the application).
Finally, you need to input your MyVMware account to download the appliance. Press Submit and then Apply.
In about 20 minutes, the appliance will create networks and security rules, request public IPs, deploy multiple virtual machines (including a temporary jump host), and stand up the controller and service engines.
Once the deployment is finished, the IP addresses (private and/or public) of the controller will be displayed so that you can log into your fully deployed load-balancer.
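Waiting for the controller to come up before displaying its address is a classic poll-until-ready loop. A minimal sketch (the probe function is hypothetical and injected so the loop is easy to test; a real probe would open an HTTPS connection to the controller IP):

```python
# Poll-until-ready sketch: call an injected probe() until it succeeds
# or a timeout expires. A real probe would try HTTPS to the controller.
import time

def wait_until_ready(probe, timeout_s=1800, interval_s=15) -> bool:
    """Call probe() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False
```

Injecting the probe keeps the timing logic separate from the network details, which also makes the loop trivial to unit-test with a stub.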
And now you can enjoy your load-balancer! Play around with the sample app if you decided to deploy it, or configure a VIP and start doing some load-balancing.
If you decide to deploy the sample app, with its public IP address, you can go directly to:
- https://my-vs-ip/hello will redirect to the basic application
- https://my-vs-ip/avi will redirect to the advanced application
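If you want to script a quick smoke test of the two paths above, a tiny helper like this works (`my-vs-ip` stays a placeholder for your virtual service’s public IP; the path-to-app mapping is exactly the one listed above):

```python
# Build the two sample-app URLs from the walkthrough above.
# vs_ip is a placeholder for the virtual service's public IP.
def sample_app_urls(vs_ip: str) -> dict:
    """Map each demo app to the URL it is served on."""
    base = f"https://{vs_ip}"
    return {"basic": f"{base}/hello", "advanced": f"{base}/avi"}
```

You could then fetch each URL (e.g. with `curl -k`, since the sample app may use a self-signed certificate) and check for an HTTP 200.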
And you can start seeing the stats and load-balancing performance on the Avi portal.
It takes roughly 20 minutes to deploy something that could easily take a couple of days if you were doing it manually. Easy, right?
We hope you enjoy the Fling – feel free to share your feedback or experience on Twitter or on the Flings portal.
A lesson in Product Management:
The best outcome for a Fling is that it eventually becomes integrated within a product or becomes its own product. But until then, it’s your product to manage, research, market, develop, improve and support.
User research was done not only by talking to existing VMware Cloud customers but also by listening to our colleagues’ questions: we realized there was a gap to fill.
Before diving straight into the code, we actually thought long and hard about the user experience and what the overall workflow would be. We drew it out using the Miro tool.
The architecture of the tool ended up leveraging multiple programming languages and tools – because of personal preferences and previous projects. We ended up using:
As the Fling Product Managers and as perfectionists, it also dawned on us that we might never release the tool.
Each internal and customer demo would provide great feedback but also a lot of feature requests. The three of us would come up with even more ideas (“what if we provide support on Azure VMware Solution or Google Cloud VMware Engine?”) but at some stage, we decided to focus on the features required for a Minimum Viable Product (MVP).
Let’s build something robust and simple that addresses 80% of the requirements.
Once the Fling is out, we will prioritize features accordingly, while improving documentation, tracking user adoption and success stories, supporting the product and addressing bug fixes… before, hopefully, seeing our efforts directly integrated into the product.
Thanks for reading.