Hi LucD and Co. I'm about to take on an interesting pet project surrounding automation of vSphere "things" based on custom attribute values found on VMs and I'm posting an open question for ideas as to how to go about doing this. To be very clear, I'm not asking for people to hand over code to me--only a methodology for those who have worked with custom attributes in PowerCLI before. I know it's not difficult, per se, but since there are always multiple ways to skin a cat I'm wondering what the shortest, cleanest, most efficient path may be here.
Background
VMware PKS deploys K8s clusters as virtual machines. These clusters can be of various sizes and contain one or more masters and one or more workers. Each node is represented by a single VM. Every VM belonging to the same cluster has a custom attribute written into a field called "deployment". An example value of this "deployment" field is "service-instance_9c854520-8e43-4566-845e-111e6a1e8425". Additionally, a second field called "job" specifies the role type for each VM; its values are either "master" or "worker". All VMs in the same deployment are also connected to the same NSX-T logical switch, which appears in vSphere's inventory as a custom switch type. For example, all VMs in the same deployment that have "service-instance_9c854520-8e43-4566-845e-111e6a1e8425" written into that custom attribute would exist on a logical switch named "pks-9c854520-8e43-4566-845e-111e6a1e8425". There are other logical switches that also begin with this name but end with different suffixes.
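For anyone following along, reading those two fields from a VM in PowerCLI can look something like this (just a sketch; the attribute names come from the setup above, and the VM name is hypothetical):

```powershell
# Read the PKS-related custom attributes from a single VM.
# Assumes an active Connect-VIServer session.
$vm = Get-VM -Name 'vm-9c854520'   # hypothetical VM name

$deployment = (Get-Annotation -Entity $vm -CustomAttribute 'deployment').Value
$job        = (Get-Annotation -Entity $vm -CustomAttribute 'job').Value

"{0} belongs to {1} as a {2}" -f $vm.Name, $deployment, $job
```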
Problem Statement
While this system is all well and good, there are some problems created and therefore gaps I wish to close:
- When a cluster is created containing multiple masters, if those masters land on the same vSphere cluster, no anti-affinity DRS rules are automatically created. This obviously creates the possibility that a single ESXi host failure takes out more than half of the masters, effectively bringing the K8s cluster down.
- All VMs from a deployment/K8s cluster have names created in the format of "vm-{GUID}" and are dumped into the same vSphere inventory location. This makes it difficult for administrators to quickly pinpoint VMs associated with the same deployment/K8s cluster.
Goals
Based on the above problems/gaps, I wish to:
- Automate the creation of anti-affinity DRS rules. In order to do this, a few different pieces of logic have to be implemented.
- VMs have to be scanned for the "deployment" custom attribute value. Alternatively, the logical switch can be scanned, but this object is not reported by the "Get-VDportgroup" or "Get-VirtualPortgroup" cmdlets.
- VMs need to be matched up based on this "deployment" value. Alternatively, the logical switch can again serve as the grouping key.
- The "job" value needs to be read on those sharing the same "deployment" value and, if there are more than 1 master found, the names of all masters need to be stored in memory.
- If all masters reside in the same vSphere cluster, an anti-affinity rule should be created with all masters as members forcing them to run on separate hosts.
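To make the question concrete, a minimal, unoptimized sketch of that flow might look like the following (assumes an active connection and the exact attribute names above; the per-VM Get-Annotation calls are exactly the kind of API chatter I'd like to avoid at scale):

```powershell
# Group all VMs by their "deployment" custom attribute, then create an
# anti-affinity rule for any deployment with 2+ masters in one cluster.
$groups = Get-VM |
    Group-Object { (Get-Annotation -Entity $_ -CustomAttribute 'deployment').Value }

foreach ($group in ($groups | Where-Object Name)) {
    # Keep only the master nodes of this deployment.
    $masters = @($group.Group | Where-Object {
        (Get-Annotation -Entity $_ -CustomAttribute 'job').Value -eq 'master'
    })
    if ($masters.Count -gt 1) {
        $clusters = @(Get-Cluster -VM $masters | Sort-Object Name -Unique)
        if ($clusters.Count -eq 1) {
            # All masters share one vSphere cluster: separate them across hosts.
            New-DrsRule -Cluster $clusters[0] `
                -Name ("anti-affinity_" + $group.Name) `
                -KeepTogether $false -VM $masters -Enabled $true
        }
    }
}
```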
- Automate the creation and organization of all VMs in a given deployment into a vSphere folder. Some slightly separate logic is needed here.
- VMs have to be scanned for the "deployment" custom attribute value.
- VMs need to be matched up based on this "deployment" value.
- A new folder should be created in a given inventory location with a name matching the "deployment" value.
- All VMs sharing the same "deployment" value should be moved into the newly-created vSphere folder.
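The folder half could be sketched roughly like this (again assuming the same attribute names; the "PKS" parent folder is my own placeholder, not anything PKS creates):

```powershell
# Create one folder per deployment under a chosen parent folder and
# move that deployment's VMs into it.
$parent = Get-Folder -Name 'PKS' -Type VM   # hypothetical parent folder

$groups = Get-VM |
    Group-Object { (Get-Annotation -Entity $_ -CustomAttribute 'deployment').Value }

foreach ($group in ($groups | Where-Object Name)) {
    # Reuse the folder if an earlier run already created it.
    $folder = Get-Folder -Name $group.Name -Location $parent -ErrorAction SilentlyContinue
    if (-not $folder) {
        $folder = New-Folder -Name $group.Name -Location $parent
    }
    $group.Group | Move-VM -Destination $folder
}
```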
Now that you have all the information, how would you tackle this from a high-level perspective? I think the emphasis needs to be on speed and minimal impact on the vCenter API, because these PKS VMs could exist in a sea of thousands, and recursing through the entire vCenter inventory would probably not go well. Interested in ideas, commentary, and questions.
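On the speed point, my current thinking is to skip per-VM cmdlet calls entirely and pull only the needed properties with Get-View, resolving the custom field keys once via the CustomFieldsManager. A rough sketch of what I mean (not tuned, and the record shape is my own invention):

```powershell
# Resolve custom field keys to names once, up front.
$si  = Get-View ServiceInstance
$cfm = Get-View $si.Content.CustomFieldsManager
$fieldNames = @{}
foreach ($f in $cfm.Field) { $fieldNames[$f.Key] = $f.Name }

# One server call retrieving only Name + CustomValue for every VM.
$vms = Get-View -ViewType VirtualMachine -Property Name, CustomValue

# Build Name/Deployment/Job records without touching the API again.
$records = foreach ($vm in $vms) {
    $attrs = @{}
    foreach ($cv in $vm.CustomValue) { $attrs[$fieldNames[$cv.Key]] = $cv.Value }
    if ($attrs['deployment']) {
        [pscustomobject]@{
            Name       = $vm.Name
            Deployment = $attrs['deployment']
            Job        = $attrs['job']
        }
    }
}
$records | Group-Object Deployment
```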
Message was edited by: Chip Zoller Added a note that these NSX-T logical switches are not returned by the cmdlets available in the VMware.VimAutomation.Core or .Vds modules.