with many thanks to my colleague, Randy Curry – and special thanks to Stu Fox (@stufox) for their valuable assistance and input…
Before enabling the Hyper-V role in Windows Server 2012 I decided to set up two SMB/CIFS shares on my NetApp 2040s. Sadly, the firmware on these units isn't particularly up to date, and they are running OnTap 8.0.1 in 7-Mode. I've contacted my buddies at NetApp about getting the latest version of OnTap that does support SMB 3.0. I guess I could set up another physical Windows instance and use SMB 3.0 on it. But I'm put off by the idea of using Windows as a storage platform; besides which, between my two NetApp 2040s I have nearly 4TB of storage, and it seems a shame not to use that. After all, I pay colo fees to run these bad boys.
Screen Shot 2013-07-03 at 10.12.24
You can check the firmware on a NetApp system using System Manager and selecting the array in the view. Sadly, after discussions with my pals at NetApp, these controllers cannot be updated to a firmware version that supports SMB 3.0 – they said that would be like running Windows Server 2012 on a 486.

Undaunted by this, I set up two shares called HyperVMs and HyperLibrary. I wanted to have some kind of shared storage available from the get-go – you'll see why in a second. These CIFS/SMB shares were mapped as V: and L: drives on the two Hyper-V instances I have in the lab (V for Virtual Machines, and L for Library. Doh!)

As you probably know, Hyper-V is merely a “role” which you add to an existing Windows installation. At the end of the process you’re required to do a reboot [so there’s nothing new there].
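Incidentally, if you prefer to skip the Server Manager wizard altogether, the role can be enabled from an elevated PowerShell prompt – a minimal sketch (the -Restart switch triggers that same mandatory reboot):

```powershell
# Enable the Hyper-V role plus its management tools, then reboot the host
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```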

 Screen Shot 2013-07-03 at 10.29.58

One of the things I’ve been doing in my journey through Microsoft virtualization is clocking up my reboots. When I was a VMware Certified Instructor (VCI) in the previous decade, one of the things I was really proud of was being able to count on the fingers of one hand the reboots required with VMware ESX. It’s been a while since I’ve done that count, but I think it still holds true. So far I’ve clocked up 5 mandatory reboots:

  • Renaming the server and joining to the domain
  • Installing MPIO
  • Enabling the Hyper-V Role
  • Installing the SCVMM Management Console
  • Installing the SCVMM Management Service

I’ve also done a couple of non-mandatory reboots just for my own peace of mind, such as making sure that on a reboot my storage was correctly mounted and my NIC teaming was working properly. The NIC team test I did before leaving the colo, because I wanted to have confidence I could remotely reboot these hosts and still have remote control. You might recall these boxes don’t have the expensive blue IBM widget that gives me full KVM-style access to the box.

One thing that irks me about the Hyper-V role is how Microsoft claims that Hyper-V is a hypervisor. But the more I use it and play around with it, the more it feels like just another service running in the OS partition. I guess you could argue that by making virtualization a role, Microsoft is letting you leverage your existing “Windows” skills. What I would really love to see is some very technical documentation that explains what precisely is going on at this stage. So far all I’ve found is very vague comments stating that the existing “OS” is “slipped under” the virtualization layer. Mmm, that doesn’t really sound very rigorous to me. Of course, I’m kind of used to this. In a previous life I was a Citrix Certified Instructor (CCI), and one of the pre-reqs for Citrix MetaFrame/Presentation Server/XenApp/call-it-what-you-will-this-year has always been adding Microsoft Terminal Services (now rebadged as Remote Desktop Services – RDS). In the past, enabling Terminal Services replaced the normal “single-win” ntoskrnl.exe with “multi-win” capabilities (which allow for multiple Windows sessions to be established). You might recall Citrix started off rewriting the NT 3.51 kernel in their earlier “WinFrame” days – until Microsoft said they couldn’t with NT4. So I imagine something like this happening here. I’d really love to hear a more techy explanation. For now it feels like Windows is a house that’s been jacked up on stilts with a new foundation being added underneath.

jackeduphouse

[Aside]

After writing this blog post, I did come across a forum post that did a good job of explaining what’s happening in this process. Here’s what it says:

When the Hyper-V role is installed, Hvboot.sys is configured to start automatically.

Hvboot.sys performs the following steps to initialize the hypervisor:

1) Detects whether a hypervisor is already loaded and, if so, aborts launching the hypervisor.

2) Calls a platform detection routine to determine if the processor is an Intel processor or an AMD processor and if it has virtualization extensions.

3) If the processor supports virtualization extensions, Hvboot.sys loads the hypervisor image that understands the architecture and virtualization extensions for the specific processor. The processor-specific hypervisor images are:

AMD-V:  %SystemRoot%\System32\Hvax64.exe
Intel VT: %SystemRoot%\System32\Hvix64.exe

4) Invokes the hypervisor launch code on all processors known to the parent operating system to start the hypervisor.

5) Initializes platform-specific per-processor structures and other hypervisor subsystems by using the processors’ virtualization extensions. When these operations are completed, the hypervisor is fully initialized.

6) A virtual processor is created for each physical processor and the parent operating system is isolated into the parent partition.

7) Control is returned to the parent operating system, and at this point the hypervisor is running at Ring -1.

[/Aside]

As for the reboot itself, it seems that a lot of people are content to accept this sort of thing. I was chatting to folks on Twitter, and some Microsoft folks seemed to take the view of “well, you are installing a hypervisor at this stage – so a reboot is entirely reasonable”. As you might imagine, I’m slightly of the other opinion. I guess I’m used to VMware ESX, where the only thing it does is virtualization. It’s not a general-purpose OS retrofitted with a “role”. I think Microsoft could take a leaf out of Red Hat’s book, where during the installation you get to choose your roles. At least that way the system is ready at first boot for what you want to use it for – rather than roles and reboots being the order of the day. For that reason it would seem to make more sense to use the free “Hyper-V Server”, which installs in a server core mode, with some additional hardening to prevent it being used for some other function than being a hypervisor. Whether Microsoft customers do this is another matter; I have a feeling they install the same distribution to the physical box as they use in a VM.

rhel
RHEL does have two distributions: a generic version of RHEL 6.x as well as a hypervisor-only distribution – one is a DVD .iso of some 3.5GB in size, and the other is an .ISO of about 180MB. I can’t help feeling Microsoft should do the same or similar.

IMPORTANT: With the benefit of hindsight, I think the worst thing you could do is enable Hyper-V at the host level. If you have System Center Virtual Machine Manager (SCVMM) you’re far better off enabling it there, when you first add the Windows servers into the scope of its management. This means you can ignore all the messages and prompts asked in the role management part. If you don’t have SCVMM then the information below may be of interest to you – if you do, then you can skip this part altogether and head off to the section called With the Benefit of Hindsight…

Midway through adding the role you’ll be asked to configure networking, migration and default stores. It’s worth reading these dialog boxes closely to watch out for gotchas. For example, the welcome screen clearly states you have to sort out your networking well in advance of enabling Hyper-V.

Screen Shot 2013-07-03 at 11.00.10

That’s why I put a lot of thought into my networking and NIC teaming from the very beginning. With VMware ESX I think things are a little bit more flexible. The enablement of the networking and storage layer happens after you’ve installed ESX, rather than it being something that needs to be configured upfront.

Personally, I think many of these settings are perhaps better configured after Hyper-V has been enabled, rather than during. For instance, you’re asked to consider setting up virtual switches. Microsoft recommends setting up the virtual switch here – personally I wouldn’t. The naming convention they apply is a bit “generic” and isn’t particularly friendly. Secondly, I find it odd that they recommend reserving an entire NIC for remote access. I guess I am doing that by not using “Management-Team0” with virtual switches. I think that probably speaks to a fundamental difference between Microsoft and VMware. For VMware, the vSwitch is used to handle all networking, whether it is for the ESX host or virtual machines, from day one.
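For what it’s worth, if you skip the wizard’s switch creation, you can build the switch yourself afterwards with a friendly name of your own choosing. A sketch, assuming a NIC team called “VM-Team1” as in my lab:

```powershell
# Create an external virtual switch bound to the NIC team, without
# sharing the adapter with the management operating system
New-VMSwitch -Name "vSwitch1-External" `
             -NetAdapterName "VM-Team1" `
             -AllowManagementOS $false
```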

Screen Shot 2013-07-03 at 11.06.46
SCVMM supports a “logical switch” model that is akin to VMware’s Distributed vSwitch, so I’m not sure how likely it is that folks with SCVMM would use this standard type of switch. Once I get SCVMM up and running, I will probably dispense with this type of virtual switch altogether.

The next part of the wizard concerns enabling “Live Migration”.

Screen Shot 2013-07-03 at 11.26.09

You know, that thing that allows you to move VMs from one physical host to another. You might know it better as “vMotion” from VMware – something I was doing back in 2003/4 and Microsoft finally delivered in Windows Hyper-V 2008. [Just sayin’ that some of the best ideas Microsoft have about virtualization come from you-know-who. Perhaps plagiarism is the sincerest form of flattery!]

So this dialog box seems pretty redundant to me. At some stage I will want a high-availability cluster configured. After all, who wants a hypervisor with zero availability, right? From what the dialog box is saying, this is best done when enabling a cluster. That means configuring the whole shooting match – live migration and cluster – in one big jumbo process, rather than building up and confirming the layers, which is what I like to do with vSphere. It means when I come to create a cluster it’s a pretty trivial affair: I’ve met and verified the pre-requisites from the get-go. For the record, I failed to understand what on earth CredSSP and Kerberos have got to do with moving a VM from one physical box to another. It certainly isn’t a requirement with VMware vSphere. I’m sure all will become clear at some stage! This dialog is confusing if you’re not from a Hyper-V background, but it is basically enabling the host as a “shared nothing” live migration target. As I understand it, the authentication settings are how the standalone source server will authenticate to the standalone target server (as opposed to just letting any old server be able to migrate to you). If you intend on building a cluster you can ignore this, as live migration between nodes of a cluster is handled as part of the clustering service. Personally, I think this would be better handled outside the initial setup wizard.
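And indeed, if you do skip it in the wizard, the same settings can be applied later from PowerShell – a hedged sketch of what the dialog is doing under the covers:

```powershell
# Enable inbound and outbound live migration on a standalone host
Enable-VMMigration

# Choose how a standalone source host authenticates to this target;
# Kerberos avoids having to log on to the source host first
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Cap how many simultaneous live migrations the host will accept
Set-VMHost -MaximumVirtualMachineMigrations 2
```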

Next, you are asked to set the default locations for virtual disks and other Hyper-V settings. The dead giveaway is the default locations – for years, it seems, Microsoft have completed these fields with locations on the C: drive. Of course, what we have to do as tireless SysAdmins is fix these.

Screen Shot 2013-07-03 at 11.45.41

Incidentally, at no stage in VMware ESX are you able to store VMs within the domain of the hypervisor – the danger, as I see it, if you don’t review these settings, is snapshots growing until the C: drive is full. Yes, you can use local storage as well as FC, iSCSI and NFS. But even when you use local storage, this is always a separate partition from the partitions that hold the VMware ESX installation. That assumes you have even done a local installation – with ESX you have the choice of USB/SD-card, FC SAN booting, or even PXE boot using the Auto Deploy feature, where the software is delivered over the network using a combination of PXE/DHCP and TFTP.
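Fixing those C: drive defaults is a one-liner if you’d rather do it outside the wizard. A sketch, assuming the V: drive mapping to my NetApp share described earlier:

```powershell
# Point the default stores at the NetApp-backed V: drive
Set-VMHost -VirtualHardDiskPath "V:\" -VirtualMachinePath "V:\"

# Confirm the change actually stuck
Get-VMHost | Select-Object VirtualHardDiskPath, VirtualMachinePath
```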

Screen Shot 2013-07-03 at 11.50.52

Finally, our old friend confirmation and reboot – your only option here is to defer the inevitable for a more suitable time. Given that you cannot do any further real work, the reboot is pretty much mandatory.

At this stage I don’t have SCVMM spun up (that’s my next task really) – so I’m lumbered with Hyper-V Manager for the moment. It’s a pretty basic “MMC”-style interface that you can add multiple Hyper-V instances into. My main job here is defining the virtual switches. As an experiment, I let the Hyper-V role wizard set the switch up for me on one host, but on my second Hyper-V host I bypassed that process altogether.

If you let the wizard create the virtual switch, you will have one created based on the NIC interface selected during the wizard itself. The interesting thing here is that whilst my NIC team is called “VM-Team1”, the default switch uses a totally different name that doesn’t map to my friendly-name convention. Instead it refers to something called “Microsoft Network Adapter Multiplexor Driver #2”.

The other thing I’m not a fan of is the default behaviour, which allows the “management operating system” to use this switch. I consider such a configuration highly insecure – and not in keeping with good practice. The only time this would be worthwhile is when you have very few NICs available (like one) in some sort of home-lab-style environment. As ever with Microsoft wizards, the “Mr Clippy” approach to trying to help the SysAdmin actually results in a less than desirable outcome.

 mr-clip-it-says

Screen Shot 2013-07-03 at 14.14.00

In the end I renamed this switch to vSwitch1-External, and removed the troublesome default setting. Next, using the Virtual Switch Manager link in Hyper-V Manager, I added a new virtual switch to my “hyperv02nyc” host. On clicking OK, I got this worrying pop-up box:
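That clean-up can equally be done from PowerShell rather than clicking through Virtual Switch Manager. A sketch using my naming convention – the wizard-created switch’s original name is an assumption here, so check yours with Get-VMSwitch first:

```powershell
# See what the wizard actually called the switch
Get-VMSwitch

# Give it a sensible name (assumed original name below)
Rename-VMSwitch -Name "New Virtual Switch" -NewName "vSwitch1-External"

# Stop the management operating system sharing the adapter
Set-VMSwitch -Name "vSwitch1-External" -AllowManagementOS $false
```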

Screen Shot 2013-07-03 at 14.20.38

This appears if you remove the “Allow management operating system to share this network adapter” option. If you re-enable the option, then you get an equally worrying pop-up message:

Screen Shot 2013-07-03 at 14.22.33

That got me a bit worried, as you might expect. In the end I cancelled the dialog box to go hunting around for this “Microsoft Network Adapter Multiplexor Driver #2” and make 100% sure it did map to the “VM-Team1” I’d created earlier. After wading through the layers of the Microsoft UI I was able to find this identifier. There are a lot of object names used in Microsoft’s NIC teaming – the Ethernet name, the team name, and then this device alias. I decided Hyper-V was just trying to frighten me with bogus CYA warnings and it was safe to proceed. They fall into the class of “better to warn the user about a potential problem than not warn at all”. I guess my anxiety levels were higher because I have no ILO/DRAC/BMC access to my servers – something you wouldn’t really experience in a production environment.
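In hindsight, the mapping between that device alias and the friendly team name can be checked far more quickly from PowerShell than by wading through the UI – a sketch:

```powershell
# Match the "Microsoft Network Adapter Multiplexor Driver" alias
# back to the NIC team adapter it represents
Get-NetAdapter | Format-Table Name, InterfaceDescription

# Or go straight to the teams and their member NICs
Get-NetLbfoTeam | Format-Table Name, Members
```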

I took a nosey about the Hyper-V Manager. There’s not much to write home about here; the only thing that caught my eye was that my preference for default stores appeared to have been ignored. These had reverted back to C: drive locations despite my setting V: as the destination. Perhaps this is a bug in R2, or I’d made a mistake.

You would think after more than 15 years of using Windows we would finally have got to the point where reboots for reconfigurations were a thing of the past. In a hypervisor they have no role to play. Every reboot for a reconfiguration could entail a lengthy evacuation of all the VMs from the host – making the maintenance windows required to complete tasks longer and longer…

With the Benefit of Hindsight…

In the end I decided that most SysAdmins would be best advised to avoid enabling the Hyper-V role at the server level. A much better choice would be to go ahead and install SCVMM. A right-click on the “Hosts” node allows you to add Windows Hyper-V hosts by name or by IP address. It’s worth saying that SCVMM does have bare-metal deployment that uses a combination of ILO/DRAC/BMC settings and PXE to install Windows Hyper-V and enrol the server into SCVMM.

Note: Brace yourself. Here comes a compliment to Microsoft – something I think they do better than VMware. I know, who’d a thunk it!

One thing I like about the SCVMM Add Host wizard is the way you can specify multiple servers in the list. It’s a small item really, but important to me personally. I feel ALL management systems should, by design, come with the ability to carry out “bulk administration” tasks without the need for scripting. Where an option, action or setting is typically applied to multiple objects, it should be possible to do that using GUI tools.

Screen Shot 2013-07-22 at 06.46.55
It does help if you can type your hostnames correctly – hyperv02ynd should be hyperv02nyc!!!

For me it’s a principle. If you accept this principle then it cascades through everything the management system can do. I guess in the world of Microsoft this is something that GPOs are designed to handle, and in the world of VMware “Host Profiles” deal with this. The trouble is the Host Profiles feature is Enterprise Plus only. That leaves a significant majority having to script actions where vCenter doesn’t allow for bulk administration. It’s a small point, but one that I feel personally is important. Just sayin’.

Once the Hyper-V hosts have been discovered, you can select them. If the Hyper-V role has not been enabled, you will receive a prompt indicating that it will be, and that this will trigger a reboot. It’s perhaps important to remember garbage in equals garbage out, and woe betide any fat-fingered admin who adds some other Windows instance here, and accidentally enables Hyper-V and triggers a reboot.

Screen Shot 2013-07-22 at 06.53.19

Most of the SCVMM wizards have in their top-right corner a “View Script” option that exposes PowerShell script information.

Screen Shot 2013-07-22 at 06.56.30
I rather like the easy access to script samples in SCVMM – something I remarked on the last time I looked at Windows Hyper-V in 2008.

This is another part of the Microsoft product I rather like. I think it encourages the use of PowerShell. As a big fan of VMware’s PowerShell extensions, PowerCLI, I think anything that encourages automation and scripting is to be commended. I guess any good PowerShell scripter would say these samples aren’t ideal – for example, they would probably say using a .CSV file or some sort of for-each loop would be a better approach to bulk-adding many servers. But laying that aside, what I like is the way this “View Script” button has been integrated into the UI. As I understand it, the development of these interfaces is such that the PowerShell cmdlet is developed first, and then a GUI is wrapped around it – which makes it very easy for Microsoft to provide the “View Script” button, because what is being used under the covers is PowerShell.
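In that spirit, the bulk-add could be sketched with a for-each loop along these lines. The host names, host group and Run As account here are from my lab, so treat them as assumptions:

```powershell
# Bulk-add Hyper-V hosts to SCVMM; this enables the role (and reboots)
# on any host that doesn't already have it
$hostNames = "hyperv01nyc", "hyperv02nyc"          # my lab names - yours will differ
$runAs     = Get-SCRunAsAccount -Name "HostAdmin"  # assumed Run As account
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

foreach ($name in $hostNames) {
    Add-SCVMHost -ComputerName $name -VMHostGroup $hostGroup -Credential $runAs
}
```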

Once you click the Finish button, SCVMM will create a job that enables the role, and reboots the Hyper-V hosts for you. You may have to dig around to see this job (hint: it’s in the Jobs node!), as rather than having a “Tasks” pane which is always present on screen, as in vCenter, you need to switch views to watch the progress bars.

Screen Shot 2013-07-22 at 07.03.05

I have had problems with this process whilst writing this post. From what I can gather there may be an issue with WinRM or DNS name resolution that prevents it from being successful.

HyperV-MicrosoftFailsToAddHost

However, in this case the adding of Hyper-V to SCVMM was “successful with information” – the term used to describe a process that succeeded in SCVMM but may require further investigation. As you might recall from my earlier posts, I’ve rather been battling with getting all the network requirements in place to accommodate redundancy on my management and storage networks, and at the same time get the heartbeat network for Microsoft Failover Clustering to work as well. In the end I abandoned the use of NIC teams for iSCSI and removed the MPIO functionality (it seemed unnecessary, as I didn’t have multiple paths to support it). In my setup this is flagged up as a warning in the history of jobs:

Screen Shot 2013-07-22 at 07.18.40

I can’t help feeling that if I had more than just my 4 physical NICs I might have had a smoother ride…

As you’ve probably gathered, my experience has led to some of these posts being a little out of sync. So here I’ve shown adding Windows Hyper-V to SCVMM before documenting my SCVMM install experiences. That’s coming in the next thrilling episode…

UPDATE:
One thing I noticed about the enablement of the Hyper-V role, if it’s carried out from SCVMM, is that it does not install the Hyper-V management tools – that only happens if you enable Hyper-V from the Windows server directly. If you want to use Hyper-V Manager, you will need to add it in using the “Add Roles and Features Wizard”, under the “Features” part of the wizard: “Remote Server Administration Tools”, then “Role Administration Tools”, then “Hyper-V Management Tools”. You will find it in there amongst other management tools, such as those for Active Directory, WSUS, DHCP and so on.
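Alternatively, the same tools can be pulled in from PowerShell without touching the wizard at all – a sketch:

```powershell
# Install Hyper-V Manager and the Hyper-V PowerShell module
# on a host where the role was enabled remotely by SCVMM
Install-WindowsFeature -Name RSAT-Hyper-V-Tools
```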