The Virtual Machine. Remember it? Yeah, I know – the VM is hardly a new-fangled thing now. What I mean is improvements to the VM…

Sometimes we get so focused on hosts, tools, clients, networks – we forget about the humble VM, the reason for all that engineering effort. It makes me think of my colleague Hans Berdhardt. I met him for the first time last year at a VMUG. He works in the “Center of Excellence” and is a long-standing VMware employee. Number 68 apparently. I wonder what employee number I am?

Hans and I hit it off immediately. He’s as crazy-enthusiastic as I am. If you think I can never shut up about virtualization and generally running around like a mad thing – I must introduce you to Hans sometime. I’m positively sedate and languid in comparison! Hans has a really great presentation that puts the VM at the centre of the world – he likens it to a dartboard. I’ve likened his idea to a kind of anti-7-layer-OSI model for networking that we have ALL endured for decades. Strictly speaking it’s not architecturally correct – since you can’t have a VM without a piece of hardware and some kind of hypervisor. It’s more of an analogy or manifesto for saying the focus of our world should be supporting the VM. I tried to draw a diagram. Sadly, I wasn’t blessed with the genius of Michelangelo….

part5-image1.png

At some stage I’m going to pin Hans down and get him to do a whiteboard session with me – so he can walk us through his own unique view of the world.

vCPU – up to 64 vCPUs per VM

Anyway, back to the VM. What’s new here? You know how we had the “Monster VM”. Well, that’s sooooooooo last year (imagine me doing my best “Valley Girl” impression – yes, I know that’s really difficult, and mentally disturbing all at the same time). I like to think we have created the “MegaBeast VM”. Joking apart, the real change here is in the CPU count. We now support 64 vCPUs per VM as opposed to 32. The RAM allocation remains the same at 1TB.
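
Just to make the new ceiling concrete, here’s roughly how those maximums would appear in the .VMX file itself. The values below are purely illustrative – memSize is expressed in MB, and the sockets-versus-cores split is entirely up to you – and you’d normally set all of this through the client rather than hand-editing the file:

  numvcpus = "64"
  cpuid.coresPerSocket = "8"
  memSize = "1048576"

That’s 64 vCPUs presented as 8 sockets of 8 cores each, and 1TB of RAM (1,048,576 MB).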

New 3D Virtual GPU

There’s an improved virtual GPU that allows for better 3D graphics support if the ESX host has an NVIDIA (Quadro/Tesla) GPU. It’s not enabled by default in the VM, and right now it’s a dormant feature until the new version of VMware View ships. This is about the platform having the pieces in place so it can be used by the next generation of virtual desktops. The use cases are pretty obvious – enhanced responsiveness for applications that particularly demand it, such as CAD and medical applications. Right now it works with Windows 7 and requires at least 64MB of video memory allocated to the VM.
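
If you like to peek under the covers, the 3D settings surface as .VMX entries along these lines – treat the key names and values as illustrative rather than gospel, especially while the feature is dormant:

  mks.enable3d = "TRUE"
  svga.vramSize = "67108864"

svga.vramSize is expressed in bytes, so 67108864 works out to the 64MB minimum mentioned above.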

Thin Provision Reclamation

Finally, a feature that has done the rounds of nearly all the top bloggers this year – including Gabrie van Zanten and Jason Boche – Thin Provision Reclamation. In the bad old days the only way to recoup “orphaned” free disk space caused by file deletes was to run a VM through a truly horrible sdelete.exe process, which writes zeros over the deleted blocks (and balloons up a thin virtual disk at the same time), followed by a one-two upper-cut of Storage vMotion to claw back the space. In recent months there have been 3rd-party tools and flings to handle this issue. It is now being addressed directly in the platform.
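
For anyone who never had the “pleasure”, the old routine went roughly like this: run Sysinternals SDelete inside the guest to zero out the free space, then Storage vMotion the VM to another datastore so the bloat gets left behind. The drive letter is just an example, and older SDelete builds used -c rather than -z for the zeroing pass:

  sdelete.exe -z C: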

vSphere 5.1 introduces a new Space-Efficient (SE) Sparse VMDK format. Where this issue has been particularly apparent is in the area of virtual desktops and linked clones. Again, just like the new GPU support, it won’t get leveraged until the next version of VMware View is released. Once the two technologies are aligned, ESX will be able to send a SCSI UNMAP command to the array to free up the blocks that are no longer needed. I will be able to elaborate on this more once the new version of View becomes generally available.
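
As an aside – and if memory serves – dead space at the VMFS level (whole VMDKs that have been deleted or Storage vMotioned away, rather than files deleted inside the guest) can already be handed back to a thin-provisioned array by hand since vSphere 5.0 Update 1, along these lines (datastore name made up):

  cd /vmfs/volumes/mydatastore
  vmkfstools -y 60

The figure is the percentage of free space to reclaim in one pass. The SE Sparse work described above tackles the other half of the problem – the blocks freed up inside the guest.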

I will be writing about the SE Sparse Disk in the “Storage” section – that will be Part 7.

Improved CPU Virtualization

Now this one is really there for the home labbers out there. For some time we have been able to run ESX “nested” inside a VM. In fact we have used “nested” ESX in the VMworld Labs with great success for some years. However, it’s always been a pretty internal thing, with that functionality only accessible by editing the .VMX file and adding sometimes quite tricky entries that would change from one release of ESX to another. These parameters are now being exposed in the web client. It’s being dubbed “Virtualized Hardware Virtualization” (VHV – is that right, the cAPS and CaSe?). It will allow for two things really. It will allow you to run a Windows 7/8 guest in “XP Mode” with better performance, and should allow for a “nested” ESX configuration out-of-the-box. Of course that latter use is still in unsupported territory.

part5-image2.png
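
For the curious, that checkbox in the web client boils down to a single .VMX entry – a lot friendlier than the handful of undocumented entries we used to cobble together. The entry is shown here purely for illustration; set it through the client where you can:

  vhv.enable = "TRUE"

For a “nested” ESX guest you’d typically pair it with the ESXi guest OS type – but as I said, that use remains unsupported territory.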

Host Profiles and Auto Deploy

The “Host Profiles” and “Auto Deploy” features are both recipients of updates and improvements.  As you might know, Host Profiles allow you to “copy” the configuration of an existing “reference” host, to create a generic set of parameters you can apply to any number of ESX hosts. Host Profiles continues to have new features added to it, and now supports the ability to:

  • Configure DirectPath I/O Settings
  • Modify the /etc/hosts file entries (see the example after this list)
  • Password PAM Configuration
  • PAM Authentication Path Policy
  • Virtual Machine Swapfile location
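
So, for instance, the /etc/hosts support means a profile can now push static name-resolution entries like these out to every host (the names and addresses below are made up, obviously):

  192.168.10.11   esx01.corp.local   esx01
  192.168.10.12   esx02.corp.local   esx02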

Host Profiles still require maintenance mode, but they should allow for more customized configurations when used with Auto Deploy.  Remember, Auto Deploy is the process by which you can make an ESXi host stateless/diskless and deliver the ESXi software across a PXE boot. Auto Deploy in vSphere 5.1 now has three different modes:

  • Stateless (Not new, and delivered in vSphere 5.0)
  • Stateless Caching (New)
  • Stateful Installs (New)

One of the concerns surrounding Auto Deploy was its dependencies on the network – and network services such as TFTP and DHCP. Of course these services are extremely robust and there’s much that can be done to protect yourself from an outage. Nonetheless some customers have expressed anxieties about the approach. Stateless Caching essentially offers a Plan B to Auto Deploy, such that the build is “cached” to local storage – either spindles or USB (1GB minimum). This caching process happens on the first boot via Auto Deploy. In the BIOS the server is configured to PXE boot first, and only to use the “cached” version if the server fails to PXE boot for whatever reason.
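
To put that network dependency in context, the DHCP side of an Auto Deploy setup is usually nothing more exotic than a scope entry like this (ISC dhcpd syntax with made-up addresses – the filename is the gPXE bootloader the Auto Deploy server gives you to drop on the TFTP server):

  subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.50 192.168.10.99;
    next-server 192.168.10.5;                 # the TFTP server holding the boot files
    filename "undionly.kpxe.vmw-hardwired";   # gPXE bootloader supplied by Auto Deploy
  }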

part5-image3.png

This caching mode is essentially a troubleshooting mode. It is enabled by a setting in the Auto Deploy settings referred to as “Stateless Image Cache Configuration”.

Screen Shot 2012-08-20 at 16.08.32.png

When you select these options, the USB option will just use the first available USB stick attached to the server; if you choose “Stateless caching on the host”, then you will need to specify which disk will be used and, if a VMFS volume is present, whether it is overwritten or not. The settings are similar to those you would find in weasel (née the kickstart install script). The important point to note here is that the host is not added to vCenter in this state – and will not have a host profile assigned to it. There’s nothing stopping the administrator from adding the ESX host into vCenter and applying a host profile to it if they wish.
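
If the weasel/kickstart comparison helps, the first-disk and overwrite choices map pretty closely to what a scripted install would express like this (illustrative ks.cfg fragment):

  # claim the first local disk and overwrite any existing VMFS volume on it
  install --firstdisk=local --overwritevmfs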

As you might suspect, “Stateful Installs” uses the Auto Deploy infrastructure as an install engine to deploy ESX to conventional media. Once the hosts have been provisioned, the Auto Deploy infrastructure is no longer required. That does mean you lose some of the benefits of Auto Deploy, such as the easy way of replacing one image build with another. You would use tools like VUM to update the installation to a new build.