Hyper-V

Not the first time I’ve run into this issue and probably won’t be the last! While building a new Windows Server 2016 (Full) Microsoft Deployment Toolkit server, I was getting the following error message when attempting to run the ‘Update Deployment Share’ wizard.

Unable to mount the WIM, so the update process cannot continue.

The solution is simple; if you are running this machine on Hyper-V (and presumably other hypervisors as well) you will need to shut down the VM, disable Secure Boot (on the VM only) and then power it back on. The next time you run the wizard it will complete as normal.
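If you prefer to script the change, a quick PowerShell sketch along these lines should do it from the Hyper-V host – note that Secure Boot only applies to Generation 2 VMs, and the VM name here is just a placeholder:

# Run from an elevated PowerShell prompt on the Hyper-V host
# 'MDT01' is a placeholder - substitute your own VM name
Stop-VM -Name "MDT01"
Set-VMFirmware -VMName "MDT01" -EnableSecureBoot Off
Start-VM -Name "MDT01"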

The error message in full context for reference.

=== Making sure the deployment share has the latest x86 tools ===
=== Making sure the deployment share has the latest x64 tools ===

=== Processing LiteTouchPE (x64) boot image ===

Building requested boot image profile.
Determining if any changes have been made in the boot image configuration.
No existing boot image profile found for platform x64 so a new image will be created.
Calculating hashes for requested content.
Changes have been made, boot image will be updated.
Windows PE WIM C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\amd64\en-us\winpe.wim will be used.
Unable to mount the WIM, so the update process cannot continue.

=== Completed processing platform x64 ===

=== Processing complete ===

For the past few months I’ve been using an in-house script to manage the rebooting of Virtual Machines on Hyper-V hosts following Windows Updates. These Virtual Machines also take part in Hyper-V Replica replication to a DR host. On occasion I’ve spotted that when shutting down (as part of the reboot sequence) the Hyper-V Replica state will go into a Critical error state.

As it transpires, this happens when the machine is shutting down while Hyper-V Replica is attempting to create a reference point to send replica data over to the DR host.

The best fix I have at the moment for this issue is to suspend replication before shutting down the machine (you can use the Suspend-VMReplication PowerShell cmdlet, documented here – https://technet.microsoft.com/en-us/itpro/powershell/windows/hyper-v/suspend-vmreplication – to accomplish this) and then resume replication (Resume-VMReplication – https://technet.microsoft.com/en-us/itpro/powershell/windows/hyper-v/resume-vmreplication) once the shutdown is complete.
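As a rough illustration of how that slots into a reboot sequence, here’s a minimal sketch – the VM name is a placeholder and this assumes the Hyper-V PowerShell module on the primary host:

# 'SRV01' is a placeholder - substitute the replicated VM's name
$vm = "SRV01"
# Pause replication so no reference point is attempted mid-shutdown
Suspend-VMReplication -VMName $vm
# Cleanly restart the guest and wait for it to come back up
Restart-VM -Name $vm -Force -Wait -For Heartbeat
# Resume replication once the reboot is complete
Resume-VMReplication -VMName $vm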

You will also see this issue logged under Hyper-V-VMMS in Event Viewer, with Event IDs along the lines of 19060, 33680, 32546 and 32026.
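If you want to check a host for these quickly, a filtered event log query along these lines should find them (this assumes the events land in the standard Hyper-V-VMMS Admin log):

# Pull recent Hyper-V-VMMS events matching the IDs mentioned above
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Hyper-V-VMMS-Admin'
    Id      = 19060, 33680, 32546, 32026
} -MaxEvents 50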

While working on my most recent Hyper-V Replica PowerShell script, I was getting the following error message when attempting to reverse replication from a source Hyper-V host to a target host using Certificate authentication…

Hyper-V failed to establish a connection with the Replica server ‘<target hostname>’ on port ‘443’. Error: The connection with the server was terminated abnormally (0x00002EFE).

As it turns out I had recently deleted the certificate for the target host and created a new one, and as such there was no certificate listed in Hyper-V Settings > Replication Configuration. The fix was to set the replacement certificate in the box provided. See the screenshots below for a little more…
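For reference, the same change can also be scripted; a minimal sketch, run on the Replica (target) host, with the thumbprint below obviously a placeholder for your replacement certificate’s thumbprint:

# Run on the Replica (target) host - the thumbprint is a placeholder
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Certificate `
    -CertificateAuthenticationPort 443 `
    -CertificateThumbprint "0123456789ABCDEF0123456789ABCDEF01234567"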

The screenshot above shows a portion of the sensor in its default form; in this post I’m going to show how to…

  • Remove the red ‘downtime’ line from the bottom of the chart
  • Set maximum and minimum values of the graph to display 0 to 100%
  • Set the gauge to display its value in GByte instead of MByte

Red Downtime line

Pure aesthetics with this tweak – the memory sensor isn’t something I would ever expect to encounter downtime on (if the server were offline then the PING sensor would pause the memory sensor automatically). The only real situation where the downtime line would come into play is if WMI wasn’t responding.

Max and Min

PRTG will automatically set the scale for your graphs, but I prefer to see the full range of 0 – 100%; this tweak makes that possible.

GByte instead of MByte

When working with servers with small amounts of RAM (let’s say 4GB and less) it typically works out best to view free RAM in MBytes, but when working with Hyper-V hosts (48GB and 96GB in my case) GBytes are a much better unit to work with.

The end result…

I’ve been having a bit of an interesting issue over the past few weeks whereby our Hyper-V hosts (Dell T430 tower servers) would lose network connectivity at seemingly random intervals; the only resolution was to restart the server or to remove and re-seat the network cable.

After much investigation looking at the servers and the associated network switch, we discovered that only the Virtual Switches attached to the on-board Broadcom NetXtreme adapters were having issues; the Intel PCI card NICs were not.

That soon led on to Microsoft KB 2986895, which revealed a known bug in the drivers for the Broadcom adapters that broke the Virtual Machine Queues (VMQ) feature of Hyper-V, causing a loss of network connectivity. The fix is either to update the driver to a version that does not have the issue or to disable VMQ.
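If you’d rather script the VMQ workaround than click through adapter properties, the NetAdapter cmdlets make it straightforward (the adapter name is a placeholder – check yours with Get-NetAdapterVmq first):

# List adapters and their current VMQ state
Get-NetAdapterVmq
# Disable VMQ on the affected Broadcom adapter (name is a placeholder)
Disable-NetAdapterVmq -Name "NIC1"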

More details can be found in this Microsoft KB… https://support.microsoft.com/en-us/kb/2986895

This entry is part 4 of 4 in the series Microsoft Hyper-V Server 2012 R2 end to end deployment

In this final post we’ll cover the configuration of network settings and the setup of remote management for a Hyper-V Server 2012 R2 host, which will be managed from a Windows 10 Enterprise PC.

There are quite a few steps to go through for this part of the Hyper-V deployment; however, a number of these steps can be applied to the servers through Group Policy, removing the need to repeat them for each host.

First up we will configure the management network adapter and domain join the Hyper-V host…
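As a taster, the core of that configuration boils down to something like the following sketch – the adapter name, IP addressing and domain are all placeholders to adjust for your environment:

# Static IP on the management adapter (all values are placeholders)
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.10 `
    -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.5
# Join the domain and reboot to complete the change
Add-Computer -DomainName "corp.example.com" -Credential (Get-Credential) -Restart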

This entry is part 3 of 4 in the series Microsoft Hyper-V Server 2012 R2 end to end deployment

In this post I’ll be going through the installation of Hyper-V Server on our Dell T430 hosts. Remember, you can download and use Hyper-V Server 2012 R2 for free (link); however, you must still license the guest Operating Systems.

I’ll be configuring an 80GB partition for the OS with the remainder of the storage set aside for the virtual machines – remember this is a UEFI-based system, so you can have single partitions over 2TB in size (in this case we will have a 2.7TB data partition) on the same disk as the boot partition.
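If you’d rather carve up the remaining space from PowerShell after setup, a sketch along these lines should do it – the disk number and drive letter are assumptions, so verify with Get-Disk first:

# Create and format the data partition in the remaining space
# (disk number and drive letter are assumptions - check Get-Disk)
New-Partition -DiskNumber 0 -UseMaximumSize -DriveLetter D |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs"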

This entry is part 1 of 4 in the series Microsoft Hyper-V Server 2012 R2 end to end deployment

Dell T430s

Time for a new series of posts! In this series I will be looking at the end to end deployment of a pair of Microsoft Hyper-V Server 2012 R2 hosts along with supporting services including networking and backup. This kind of deployment is an excellent option for anyone who is looking to run virtualisation without the cost of VMware or a SAN (Storage Area Network). In this first post I’ll outline the goals of this project along with the hardware I’ll be using.

Goals

  • To configure iDRAC 8 Express for out of band management
  • To install Microsoft Hyper-V Server 2012 R2
  • To configure network settings and enable remote management

A few points to note…

  • Microsoft Hyper-V Server 2012 R2 is completely free! (allowing you to access the latest Hyper-V technology regardless of your licensing level)
  • You must still have a valid licence for any guest operating systems (in this case I am using two Server 2008 R2 Datacentre licences as there will be no VMs using anything higher than Server 2008 R2)
  • Datacentre licensing allows you to run an unlimited number of VMs on that host at that OS level or lower (subject to extra licensing requirements for additional CPU sockets)
  • Hyper-V Server is effectively a heavily cut-down version of Windows Server Core – the drivers are no different and the management tools are just the same
  • You can find out more about Hyper-V Server on TechNet here – https://technet.microsoft.com/en-us/library/hh833684.aspx

Hardware

Dell T430 Hosts

Purchased specifically for this project, these two hosts have been configured identically with the aim of N+1 redundancy in the environment.

  • 1x Intel Xeon E5-2620 v3 2.4GHz 6-core CPU
  • 6x 8GB DDR4 2133MHz RAM (48GB total)
  • Dell PERC H730 RAID Controller with 1GB cache
  • 6x 600GB 10K SAS drives
  • Dual Hot Plug Power Supplies
  • 3x Dual Port 1Gbit NICs
  • iDRAC8 Express

These Dell servers really have a lot going for them – as well as being UEFI enabled they come with iDRAC (for out of band management and simple OS installs), plenty of RAM slots, pull-out tags on the front with the service tag number, USB 3.0 and hot plug power supplies. Finally, I’m really quite impressed with how quietly they run – although they will be housed in a dedicated air conditioned server room, I could certainly see one of these being OK in a well ventilated cupboard in a branch office type environment.

RPC Server

An interesting quirk of running Virtual Machines for this post… the background is that my ‘main work PC’ is currently running Windows 7; in order to remotely manage a Hyper-V Server 2012 R2 machine I had installed Oracle VirtualBox onto my main PC and inside that had set up a Windows 8.1 VM to remotely manage the Hyper-V Server instance.

However, after setting up remote management I found that I could connect with all of the remote management tools on my Hyper-V machine with the exception of Disk Management and Hyper-V Manager, with the following error message generated in Hyper-V Manager.

RPC Server unavailable. Unable to establish connection between <Hyper-V Host> and <Client PC>.

After much investigation into this issue (and after following a number of dead ends relating to firewall settings, the hosts file and COM security) it transpires that the issue was related to the way I had set up the network adapter within VirtualBox.

In particular, the adapter had been set to NAT mode; given the properties of NAT it seems plausible that some vital information was being mangled in the process – if anyone feels like doing some Wireshark analysis on this to discover the cause then please do!

The resolution was simple – setting the adapter to bridged mode instead, which allowed the traffic to pass through the virtual adapter just fine.
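For reference, the same change can be made with VBoxManage while the VM is powered off – the VM and host adapter names below are placeholders:

VBoxManage modifyvm "Win81-Mgmt" --nic1 bridged --bridgeadapter1 "Intel(R) Ethernet Connection"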

It’s a long shot that many people will run into this issue, but hey, why not!

If you have one of the uber-new AMD FX processors (e.g. the 6-core 6100BE) and have tried to run a Windows Server 2008 R2 virtual machine on it, you may have seen this error message.

An error occurred while attempting to start the selected virtual machine(s).

‘<VM name>’ could not initialize.

The virtual machine could not be started because the hypervisor is not running.

The cause of this issue is well documented in Microsoft KB 2568088 (link); the short version is that the newer AMD CPUs include a feature called AVX that Hyper-V does not like very much.

As such, you can sort this issue out by running the command bcdedit /set xsavedisable 1 at an elevated command prompt.

Just make sure you restart your PC afterwards and boom – you will be able to run your virtual machines again!
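For completeness, should you ever want to undo the workaround later, the value can be removed again from an elevated command prompt (followed by another restart) – to the best of my knowledge the standard bcdedit syntax for that is:

bcdedit /deletevalue xsavedisable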