At the end of last year, I asked on Twitter if anyone used an empty VMware cluster (without HA, DRS or DPM enabled) to aid in their scripting/building of a vSphere environment. As ever I was asked “why”, and it was then I realised that 140 characters wasn’t going to be enough to explain my question – which, incidentally, I don’t really feel I have an answer to. I’m a bit like that. I ask questions I don’t know the answer to – as opposed to asking questions where I do know the answer [something a former colleague of mine in the ’90s used to do all the time!]
So here’s the thinking. From about ESX 2.x to ESX 4.1 (and a bit of 5.0), the main engine I used for configuring VMware ESXi hosts was post-scripts executed at the end of the installation. Generally, I would use the Ultimate Deployment Appliance (UDA) to accelerate that process. For many people this remains a popular method of rolling out VMware ESX. However, I’ve always had issues with this method, because installing VMware ESXi is just one small task amongst the many I would have to carry out – say, if I was building my lab to learn the next version of VMware View or VMware SRM. For instance, I can’t use this sort of scripting to configure a VMware ESXi host’s membership of a cluster or Distributed vSwitch. So for some years now I’ve been using VMware PowerCLI for the vast majority of this work, because it has such a rich set of cmdlets that allow me to automate my entire build – both of the VMware ESXi host AND the vCenter inventory objects. It feels neater to me to use one very rich method of carrying out automation tasks, rather than using two different methods together (UDA/Anaconda-style scripts with PowerCLI handling the rest).
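To give a flavour of what that first step looks like, here’s a minimal PowerCLI sketch of connecting to vCenter and bulk-adding freshly installed hosts. The server names, credentials and datacenter name are placeholders, not a real environment:

```powershell
# Connect to vCenter (hostname and credentials are placeholders)
Connect-VIServer -Server vcenter.corp.local -User administrator@vsphere.local -Password 'Passw0rd!'

# Add freshly kickstarted ESXi hosts into a datacenter, ready for configuration
$dc = Get-Datacenter -Name "NYC"
1..4 | ForEach-Object {
    $hostname = "esx{0:D2}nyc.corp.local" -f $_
    Add-VMHost -Name $hostname -Location $dc -User root -Password 'Passw0rd!' -Force
}
```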
So far so good. For me it’s almost made sense to add all the hosts into vCenter, and then begin the process of configuring them. That’s because you can use ForEach loops to carry out bulk-administration tasks on every single host – rather than connecting to each host individually (a small example follows below). After all, one VMware ESXi host doesn’t differ from another host – in fact, the very reason for these scripting tasks is to get consistency, so the VMware ESXi hosts can be treated like cattle rather than cats (to use a metaphor that is currently in vogue). So here’s the quandary.
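For example, a bulk task like configuring NTP becomes one loop across every host once they’re all in vCenter – a hedged sketch, with the NTP server name assumed:

```powershell
# Once all hosts are in vCenter, a single loop configures every one of them
foreach ($vmhost in Get-VMHost) {
    Add-VMHostNtpServer -VMHost $vmhost -NtpServer "0.pool.ntp.org" -ErrorAction SilentlyContinue
    Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "ntpd" } |
        Start-VMHostService -Confirm:$false
}
```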
Whilst my many VMware ESXi hosts are all very similar, they have unique attributes (such as hostname, IP address and so on), but they also share some common attributes (such as access to the same storage LUNs/Volumes/NFS exports, Distributed vSwitches and VLANs). It’s generally the case in most environments that the “cluster” acts as a virtual silo, with one cluster generally not having access to another cluster’s resources. The assumption is that if some rogue admin monkeys with the configuration of a cluster, the impact is felt within one cluster – not all of them. Imagine, for instance, a storage admin changing the masking of LUNs/Volumes, which results in all the storage “disappearing” from a cluster. The trouble, I feel, is how best to differentiate one bunch of servers from another. An example might help illustrate:
Example:
I have added 96 VMware ESXi hosts into vCenter. I now want to apply a unique vSwitch and storage configuration to hosts 1-32, 33-64 and 65-96. These will ultimately end up in ClusterA (1-32), ClusterB (33-64) and ClusterC (65-96). It’s important that the VLANs and storage are only available to the hosts in each cluster.
So how best to identify or group these hosts when using my ForEach loop? I could create three datacenters, and that would allow me to use Get-Datacenter to make sure that configuration only goes to the right VMware ESXi hosts in those datacenters. That seems a bit ugly to me. I could use PowerShell ranges (1..32) in a ForEach loop so that it would only be applied to esx01nyc to esx32nyc. Again, that seems a bit clunky. Or I could use a big ole .CSV file, with references within it to differentiate one collection of servers from another (which I actually think is quite a good approach…). A couple of these options are sketched below.
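For what it’s worth, here’s roughly how the range and .CSV options might look in PowerCLI – the host naming convention and the CSV columns are my own assumptions:

```powershell
# Option 1: PowerShell ranges - works, but the grouping lives in the script itself
$clusterAHosts = 1..32 | ForEach-Object { Get-VMHost -Name ("esx{0:D2}nyc*" -f $_) }

# Option 2: a .CSV that records which group each host belongs to, e.g.
#   Name,Cluster
#   esx01nyc.corp.local,ClusterA
#   esx33nyc.corp.local,ClusterB
$map = Import-Csv -Path .\hosts.csv
$clusterBHosts = $map | Where-Object { $_.Cluster -eq "ClusterB" } |
                 ForEach-Object { Get-VMHost -Name $_.Name }
```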
One idea I had was to create both the datacenter and the clusters (A, B, C) but not enable the DRS/HA features – and then add the VMware ESXi hosts. The idea here is that the cluster acts as an attribute I can reference using Get-Cluster. Once the hosts within a cluster have got access to the networks and storage they need, the HA/DRS properties could be enabled. After all, there are dependencies to be met – HA will want heartbeat datastores & redundancy on the management network, and DRS will need at least one VMkernel port enabled for vMotion. Once HA/DRS had been enabled, I could set about using my script to define resource pools… something like the sketch below.
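A rough sketch of that idea, assuming the naming from the example above – the storage and IP details are made up purely for illustration:

```powershell
# Create the datacenter/cluster shell first - HA and DRS deliberately left off
$dc       = Get-Datacenter -Name "NYC"
$clusterA = New-Cluster -Name "ClusterA" -Location $dc

# Add hosts 1-32 straight into the (still empty) cluster
1..32 | ForEach-Object {
    Add-VMHost -Name ("esx{0:D2}nyc.corp.local" -f $_) -Location $clusterA `
        -User root -Password 'Passw0rd!' -Force
}

# The cluster now acts as the grouping attribute for the configuration pass
$i = 1
foreach ($vmhost in (Get-Cluster -Name "ClusterA" | Get-VMHost)) {
    # Cluster-specific storage (the NFS export is a placeholder)
    New-Datastore -Nfs -VMHost $vmhost -Name "nfs-clusterA" `
        -NfsHost 10.10.10.10 -Path "/vol/clusterA"

    # A vMotion-enabled VMkernel port, so DRS has what it needs later
    $vss = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vss -PortGroup "vMotion" `
        -IP ("10.20.30.{0}" -f $i) -SubnetMask 255.255.255.0 -VMotionEnabled:$true
    $i++
}

# Only once those dependencies are met, switch HA/DRS on and carve out resource pools
Set-Cluster -Cluster $clusterA -HAEnabled:$true -DrsEnabled:$true -Confirm:$false
New-ResourcePool -Location $clusterA -Name "Production" -CpuSharesLevel High
```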
I’d be interested to know if anyone else does this, and whether they think it’s a bad idea or not. I’m quite happy to quit the field if people think what I’m proposing is a bad idea. I’m just curious to know people’s opinions – and whether they think there’s a better way of doing it…