Azure Site Recovery with Hyper-V Generation 2 VM Support

It cannot be denied that the Azure team at Microsoft runs at a very high pace when it comes to introducing new Azure functionality. Not only do public Azure features show up like clockwork every six weeks; the same is true for feature updates related to Hyper-V and System Center in the private cloud, which are updated at an incredible pace. Just a few weeks ago we had to tell customers that Azure Site Recovery would be a no-go (yet) if they had boot drives larger than 127 GB, more than one network adapter, fixed IP addresses, or had already adopted Hyper-V Generation 2 VMs. But customers of the Azure cloud never have to wait very long, and they get a very large say in which features are most important to them. Just take a look at the User Voice for Azure Site Recovery:

This week I was lucky to get a time slot from the Azure team to actually test the new Hyper-V Generation 2 support in our closest Azure datacenter, West Europe, here in Amsterdam. I had already set up ASR in preparation for several customers who were planning disaster recovery from their on-site datacenters to Azure, but had more or less been ignoring the product since then, as I didn't have any use cases or the resources to test. With interested customers, things can change very quickly. So I quickly configured a small research environment with one Hyper-V host, a few Generation 1 and 2 VMs, SQL Server 2014, and Virtual Machine Manager 2012 R2 with Update Rollup 5.

I will not detail the complete setup and configuration, as this has already been well documented. Be careful with blogs on ASR older than a couple of months, as so much has changed. In this blog I will focus on the new support for Hyper-V Generation 2 VMs. Generation 2 VMs arrived with Windows Server 2012 R2 Hyper-V and made installing VMs a little faster because none of the ancient emulated devices had to be discovered. Also, with the replacement of the VM BIOS by UEFI firmware, several new features became available:

  • Secure boot
  • DVD Drive hot add/removal
  • PXE boot from synthetic network adapter

So let’s go back to the configuration of ASR for Generation 2 VMs. There are now multiple scenarios for ASR, and my configuration is based on “Between an on-premises VMM site and Azure”, but several others are available, including VMM site to VMM site (with or without SAN replication), Hyper-V to Azure (without VMM) and VMware site to VMware site (with or without SAN replication), and we can expect direct VMware to Azure before long. Because I had configured ASR some time ago, I had to download an update for my registration key. Secondly, I had to refresh both the VMM ASR Provider and the Hyper-V ASR Agent on the Hyper-V host. If I had a 7-year-old child, I could have delegated this task.

The major steps for protecting VMs with ASR are:

  • Setup
  • Configure
  • Protect
  • Author Recovery Plan
  • Disaster Recovery Drill


Azure Pack – Recover from a disaster

This post is not about recovering from a failed datacenter or about Azure Site Recovery procedures. No, it’s just “simple” recovery when everything went down for whatever reason. I mean, we are humans and we make mistakes. Even in an environment like Azure, mistakes are made. Remember the storage outage last year? That was a human making a mistake while pushing an update. Sorry… shit happens.

But what if something happens to power, storage or networking, and when the VMs come back up you log in to Azure Pack and an error is on your screen? When we run projects at customers we build from the ground up and “assume” it will run forever. In this post I want to discuss the proper way of bringing an environment back up and running, and the critical points you need to be aware of.

So let’s start the environment :-)

First we need to bring up all the domain controllers for each domain where fabric and Azure Pack components are running. If an ADFS instance is co-located on a domain controller and uses SQL, check after booting SQL that the services started correctly.

If Hyper-V servers went down, start them first.

Which of the first two steps comes first depends on how you have built your environment; you know the chicken-and-egg story. When you have a physical domain controller, start it first. If you only have virtual domain controllers, start a Hyper-V server first. Just be careful when you have virtual domain controllers and your Hyper-V server is joined to the domain hosted by those domain controllers: make sure the first Hyper-V node you boot can actually start the domain controller VM. You would not be the first where the cluster couldn’t start because the domain was not up, and the domain couldn’t be brought up because storage and/or the cluster couldn’t start.
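That boot order can be sketched with the inbox Hyper-V cmdlets. This is a hedged illustration only: the VM name 'DC01' and the port check (9389, AD Web Services) are example values, not part of any specific environment.

```powershell
# Sketch: start the virtual domain controller first, wait for AD to answer,
# then start the remaining VMs. 'DC01' and port 9389 are example values.
$dc = Get-VM -Name 'DC01'
if ($dc.State -ne 'Running') { Start-VM -VM $dc }

# Wait until AD Web Services responds before starting dependent VMs
while (-not (Test-NetConnection -ComputerName 'DC01' -Port 9389 -InformationLevel Quiet)) {
    Start-Sleep -Seconds 15
}

# Domain is reachable; now start everything else that is still off
Get-VM | Where-Object { $_.State -eq 'Off' -and $_.Name -ne 'DC01' } | Start-VM
```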

VM Usage not updated in Azure Pack

One of our customers was building a billing solution for their Azure Pack environment. We just finished the implementation of the fabric and the configuration of Azure Pack. We started testing and all worked fine. We also configured the integration between SCOM and VMM for usage and configured usage in Azure Pack.

After a couple of weeks we started to implement UR5 for System Center and Azure Pack, and all looked good… Until, after a while, a developer came to me and asked: are you using a VM with 0 cores and 0 GB of RAM that is up the whole hour? I said no, that machine was deleted a couple of days ago. Then the panic started. What happened? How could the usage be off from VMM?

I had seen it once before that VMs are not updated correctly in SCOM, so I quickly took a look at the Virtual Machine dashboard in SCOM. And yup, there were still VMs there that had been deleted a while ago. Then I connected back to VMM and did a refresh of the Operations Manager connector. It came back as “Completed w/ Info”. The error message:

When I tried to run that command, the VMM PowerShell module didn’t know the cmdlet. I know that VMM keeps all management pack updates in its installation directory, so I logged on to the VMM server and went to the VMM installation directory. There is a folder called ManagementPacks, and I noticed that its modified date matched the UR5 release date:
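If the management packs in that folder are newer than what SCOM knows about, re-importing them is one way to bring the connector back in sync. A hedged sketch; the path is the default VMM 2012 R2 install location and may differ in your environment:

```powershell
# Import the VMM management packs (refreshed by UR5) into Operations Manager.
# Run from a machine with the OperationsManager module installed.
Import-Module OperationsManager
$mpFolder = 'C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\ManagementPacks'
Get-ChildItem -Path $mpFolder -Filter *.mp* |
    ForEach-Object { Import-SCOMManagementPack $_.FullName }
```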

BYO-DSC with VM Roles (A VM DSC Extension alternative for WAPack)


The Azure Pack IaaS solution is awesome: we can provide our tenants with a lot of pre-baked solutions in the form of VM Roles.
Tenant users can deploy these solutions without needing the knowledge of how to build the solutions themselves, which is a great value add.

But not all customers want pre-baked solutions. Some customers want to bring their own configurations and solutions, and they don’t want your pre-baked stuff for multiple reasons (e.g. it doesn’t comply with their standards or requirements). In Azure these customers can make use of VM extensions, one of the missing pieces in the Azure Pack / SCVMM IaaS solution. It is, at this time, very difficult to empower tenant users to bring their own stuff.

In Azure we have a lot of VM extensions available. Today I’m going to implement functionality in a VM Role which behaves similarly to the Azure DSC extension (as you probably know by now, I like DSC a lot).

Please note! The implementation will serve as a working example of how you could do this. If you have any questions, please ask them, but I will not support the VM Role itself.


A tenant user wants to deploy configurations to their VMs themselves. As a configuration mechanism, the tenant user has chosen Desired State Configuration (DSC). If at all possible, they want the same approach on your Azure Pack service as they have in Azure.

In Azure you zip your PowerShell script containing your DSC configuration together with the DSC resources it requires. This archive is then uploaded to your Azure blob storage. The VM DSC extension picks this archive up, unpacks it and runs the configuration in the script to generate the MOF file. During this procedure the extension takes user-provided configuration data and arguments into account.
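The packaging step on the Azure side is a single cmdlet from the Azure PowerShell module. A sketch, assuming a configuration script named MyNewDomain.ps1 in the current directory:

```powershell
# Package MyNewDomain.ps1 plus the DSC resource modules it imports into a zip.
# With -ConfigurationArchivePath the archive is written locally instead of
# being uploaded to blob storage.
Publish-AzureVMDscConfiguration -ConfigurationPath .\MyNewDomain.ps1 `
    -ConfigurationArchivePath .\MyNewDomain.ps1.zip
```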

Check out the Azure documentation if you are seeking to use DSC in Azure with the VM DSC Extension.

In our Azure Pack VM Role implementation we will try to mimic all this by letting the tenant user zip up their configuration script together with the DSC resources, the same way they are used to with the Azure DSC extension. In fact, we will use the Azure PowerShell module to do this. Then, because we don’t have blob storage in our implementation (yet), we assume the tenant user has a web server in place where they will make this zip file available. At this same location a PSD1 file can be published containing DSC configuration data. The VM Role will take arguments as well.

Prepare a configuration

First let’s create a configuration archive (ZIP file) and a configuration data file (PSD1 file). Then we will stage everything on a web server.

First, create a configuration script for demo purposes.

configuration MyNewDomain
{
    param (
        [Parameter(Mandatory)][string] $DomainName,
        [Parameter(Mandatory)][string] $SafeModePassword
    )
    Import-DscResource -ModuleName xActiveDirectory

    # Convert the plain-text safe mode password into a credential object
    $secpasswd = ConvertTo-SecureString $SafeModePassword -AsPlainText -Force
    $SafemodeAdminCred = New-Object System.Management.Automation.PSCredential ('TempAccount', $secpasswd)

    node localhost
    {
        WindowsFeature ADDS
        {
            Name   = 'AD-Domain-Services'
            Ensure = 'Present'
        }
        xADDomain MyNewDomain
        {
            DomainName = $DomainName
            SafemodeAdministratorPassword = $SafemodeAdminCred
            DomainAdministratorCredential = $SafemodeAdminCred # used to check if the domain already exists; Domain Administrator gets the local administrator's password
            DependsOn = '[WindowsFeature]ADDS'
        }
    }
}

The configuration produces a MOF file which ensures that the AD-Domain-Services role is installed and then promotes the VM to be the first domain controller of a new domain. The domain name and safe mode password are defined as parameters and are thus user-configurable at MOF generation time. The password is taken as a string and then converted to a PowerShell credential object. The configuration needs the xActiveDirectory DSC module, and therefore this module must be packaged up with the script.
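To try the configuration locally before publishing it, you could compile it into a MOF like this. The domain name, password and output path are example values, and PSDscAllowPlainTextPassword is for lab testing only (the VM Role scenario uses certificate encryption instead):

```powershell
# Dot-source the configuration script, then compile it to a MOF.
. .\MyNewDomain.ps1

# Allow the credential to be embedded in plain text for this local test
$cfgData = @{
    AllNodes = @(
        @{ NodeName = 'localhost'; PSDscAllowPlainTextPassword = $true }
    )
}

MyNewDomain -DomainName 'contoso.local' -SafeModePassword 'P@ssw0rd!' `
    -ConfigurationData $cfgData -OutputPath .\MyNewDomain
```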


Presentations Theme Night “Hybrid Cloud”


Last week SCUG.NL co-organized their first Theme Night, focusing on “Hybrid Cloud” and hosted by Ordina. The turnout was great, with about 100 attendees who enjoyed their afternoon and evening of interesting content on several hybrid cloud topics. Ben Gelens, Mark Scholman and Darryl van der Peijl gave in-depth presentations about deploying VM Roles with Desired State Configuration (DSC), Hyper-V Network Virtualization, and site-to-site VPN gateways between on-premises and Azure.

If you’d like to take another look at their presentation slides, you can download them here.



Mark Scholman and Ben Gelens preparing their presentations


Ben Gelens goes into depth about DSC and VM Role


Mark Scholman explaining Hyper-V Network Virtualization


Darryl van der Peijl concentrating on his demo


The three Dutch Hyper-V MVPs Hans Vredevoort, Darryl van der Peijl and Ronald Beekelaar together during Theme Night

Update Emulex VMQ Issue: They finally did it!

Another update from the VMQ front, this time with the formal announcement of a declaration of peace.

After a very long struggle, which started in November 2013 not long after the release of Windows Server 2012 R2, I now dare to confirm that VMQ can be re-enabled on Windows Server 2012 R2 Hyper-V hosts with HP/Emulex 554FLB host bus adapters.


If you have followed along, you will know that we have published several blogs discussing the matter. We initially discovered the problem in our own lab environment, then explained how to avoid the network disconnections by disabling VMQ, followed by how we tested the numerous HP/Emulex firmware and driver updates, and our interactions with Microsoft, Emulex and, to a lesser extent, HP. Many customers using the HP/Emulex adapters, but also many others using OEM versions, were badly hit by the networking problems. The problem with VMQ became well known all over the world, not to say notorious.

Several weeks ago I was able to test a combination of new HP/Emulex firmware and drivers (February 2015) combined with a private Microsoft hotfix (the KB for the final hotfix is KB3031598), provided by the Microsoft networking team in Redmond. We had already seen promising results using older versions, but there were a couple of remaining issues which had to be solved in the Windows NIC Teaming (LBFO) stack. One issue I kept seeing was that when more VMs were running on a particular host in the cluster than there were VMQs available, things started to collapse. Not only did the additional VMs not get a VMQ queue, they were also not assigned to the default VMQ queue. Other VMs that were already assigned to a VMQ queue were disconnected, and it took some time for this problem to settle and for connections to become possible again.

Using the latest firmware/drivers with the Microsoft LBFO hotfix solves that problem (amongst others).
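If you want to verify how queues are being handed out on your own hosts, the inbox NetAdapter cmdlets give a quick view; no parameters here are environment-specific:

```powershell
# Which physical adapters have VMQ enabled, and how many receive queues they expose
Get-NetAdapterVmq | Where-Object Enabled |
    Format-Table Name, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues

# Which VMs actually received a queue (VMs missing here share the default queue)
Get-NetAdapterVmqQueue | Format-Table Name, QueueID, MacAddress, VmFriendlyName
```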

It is time to say that the worst is behind us.


Sold Out First “Theme Night” with eBook Giveaways

The first “Theme Night”, focusing on hybrid cloud and held on March 25th, sold out two weeks ago.

Today we can announce that both Packt Publishing and Apress accepted our request for a couple of free ebooks:


Hyper-V Network Virtualization Cookbook by Ryan Boud


Windows PowerShell Desired State Configuration Revealed by Ravikanth Chaganti


Both books are a perfect match for what this first Theme Night has to offer and we thank both Ravi and Ryan as well as their publishers for making this possible.

Much appreciated guys!

Scale-Out File Server – Symmetric and Asymmetric storage

Scale-Out File Server (SOFS) is a feature that is designed to provide scale-out file shares that are continuously available for file-based server application storage such as Hyper-V. Scale-out file shares provide the ability to share the same folder from multiple nodes of the same cluster.
In this blog we assume you have already played around with SOFS and know the basics.

There are multiple ways to connect storage to your SOFS cluster.
The most common way today is putting your SOFS cluster in front of an iSCSI or FC SAN; the upcoming method is using Storage Spaces in combination with a “just a bunch of disks” (JBOD) enclosure. We will cover both in this blog post.

Symmetric Storage

If the storage is equally accessible from every node in a cluster, it is referred to as symmetric storage.
Each node can take ownership of the storage in case of maintenance or failures, which provides availability.
With symmetric storage, read and write operations can be done by every node in the cluster (also referred to as “Direct IO”); however, metadata operations must be done by the owner node, which orchestrates these operations.


Example of Symmetric storage:
SOFS Symmetric Storage
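On a running cluster you can see per node whether IO to a Cluster Shared Volume goes direct or redirected, which makes the symmetric behavior above visible:

```powershell
# Shows for each cluster node whether access to a CSV is Direct or
# Redirected (block or file system level), and the reason for redirection.
Get-ClusterSharedVolumeState |
    Format-Table Name, Node, StateInfo, FileSystemRedirectedIOReason
```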

Integrating VM Role with Desired State Configuration Part 10 – Closing notes

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes

So here we are at post number 10, a good number to finalize this series with some closing notes.

I must say, creating this series has taken a lot of energy and time, but the reactions I have received thus far have made it more than worth it. Thank you, community, for the great feedback, and thank you for reading! I learned a lot writing the series; I hope you did too while reading it.

DSC Resources

Microsoft has put a lot of effort into producing and releasing DSC resources to the public. These resources are great, but you must know that the x prefix stands for experimental. This means you don’t get any support, and some of them may not be entirely stable or may not work the way you think they should.

I took xComputer as an example in this blog series: a great resource to domain join your computer (and do a lot more), but not suitable in a Pull Server scenario where the computer name is not known up front.

Is this bad? No, I don’t think so. The xComputer resource was probably built with a certain scenario in mind, and in its intended scenario it works just great. If a resource does not work for you, you could still take a look at it and build your own ‘fork’ or you could start from scratch. The modules / resources are written using PowerShell, so if you can read and write PowerShell, you’re covered. Just be creative and you will manage!

Pull Server

Ad-hoc configurations are more dynamic than Pull Server delivered configurations. When using ad-hoc configurations you can dynamically populate the configuration document content by adding configuration data which is gathered or generated on the fly. Even the configuration block itself can contain dynamic elements. The resulting MOF file (configuration document) is created on, and tailored for, its destination. The downside of this approach is that configurations are done on the fly, which can turn into an ‘oops’ situation more quickly.

Pull Server configurations are more difficult to set up because configuration documents are static and created up front. If you create a single configuration for multiple instances (e.g. a web server farm), the configuration should be node-name agnostic. The gain is that configurations are delivered in a more controlled fashion, including the required modules. When a configuration change is made, the change can be pulled and implemented automatically.
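As an illustration of that last point, publishing a node-name-agnostic configuration to a pull server typically looks like the sketch below. The configuration name (WebFarmConfig) and the paths are assumptions for illustration, not part of this series' setup:

```powershell
# Compile a configuration that targets 'localhost' so it is node-name agnostic,
# then publish the MOF under a GUID with a checksum for the pull server.
WebFarmConfig -OutputPath C:\DscStaging          # hypothetical configuration

$guid      = [guid]::NewGuid().Guid
$pullStore = 'C:\Program Files\WindowsPowerShell\DscService\Configuration'

Copy-Item C:\DscStaging\localhost.mof "$pullStore\$guid.mof"
New-DscChecksum -ConfigurationPath "$pullStore\$guid.mof" -Force

# Each target node's LCM is then given $guid as its ConfigurationID.
```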

Beware of using over-privileged accounts

Beware of over-privileged credentials used in configuration documents. Although you have taken all necessary precautions by encrypting sensitive data using certificates, if the box gets owned, the certificate’s private key is exposed and therefore the credentials have fallen into the wrong hands.

For example: credentials which interact with AD to domain join should be able to do just that. In a VM Role scenario I would build an SMA runbook to pre-create a computer account as soon as the VM Role gets deployed. A low-privileged domain account is then delegated control over the object so it is able to perform the domain join. DSC in this case does not have to create a computer account but can just establish the trust relationship.
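A minimal sketch of that runbook step, assuming the ActiveDirectory module is available; the computer name and OU path are example values, and the actual ACL delegation is environment-specific:

```powershell
# Pre-create the computer object for the incoming VM Role instance.
Import-Module ActiveDirectory
New-ADComputer -Name 'TENANTVM01' -Path 'OU=TenantVMs,DC=contoso,DC=local' -Enabled $true

# Next, delegate join rights on just this object to a low-privileged account
# (e.g. via dsacls or Set-Acl on the AD: drive), so the DSC resource can
# establish the trust relationship without Domain Admin credentials.
```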

VM Role challenges

When dealing with the VM Role and external automation / orchestration, some challenges arise.

There is no way (or at least no easy way) of coordinating between the VM Role resource extension and DSC configuration state. DSC could potentially reboot the system and continue configuring after the reboot, then reboot again and again depending on the configuration. The resource extension allows a script or application to restart the system, but treats that as the end of a task. As you don’t know the configuration’s reboot requirements up front, managing this in the resource extension becomes a pain, so you will probably not do it. As a result, the VM Role is reported as provisioned successfully for the tenant user while it is really still undergoing configuration.

So VMs reach a provisioned state and become accessible to the tenant user while the VM is still undergoing its DSC / SMA configuration. A user can potentially log in, shut down, restart, intervene and thereby disrupt the automation tasks. In the case of DSC this is not a big problem, as the consistency engine will just keep going until consistency is reached, but if you use SMA, for example, it becomes a bit difficult.

Another scenario: the user logs in and finds that the configuration he expected is not implemented. Because the user does not know DSC is used, the VM Role is thrown away and the user tries again and again, until eventually he is fed up with the service received and starts complaining.

A workaround I use today at some customers is to temporarily assign the VM Role to another VMM user when the VMM deployment job is finished. This removes the VM Role from the tenant user’s subscription and thereby from their control. The downside is obvious: the tenant user just experienced a glitch where the VM Role disappeared, and may try to deploy it again. Because the initial name chosen for the Cloud Service is now assigned to another user and subscription, the name is available again, so there is a potential for naming conflicts when assigning the finished VM Role back to the original owner.

At this time we cannot lock out the VM controls from the tenant user or manipulate the provisioning state. I added my feedback for more control at the WAPack feedback site here:

Bugs / issues

While developing this series I faced some issues / bugs which I logged at connect:

What’s next?

First I will do a speaker session at the SCUG event about VM Roles with Desired State Configuration. And no, it will not be a walkthrough of this blog series; much remains to be done generating the content for that presentation. I think I will blog about the content of my session once I have delivered it.

Then, in the near future, I will start a new series built upon what we learned in this blog series. I have many ideas about what could be done, but I still have to think about scope and direction for a bit. This series took up a lot more time than I anticipated, and I changed scope many times because I wanted to do just too much. As a spoiler for the next series: I know it will involve SMA :-) Stay tuned!

Integrating VM Role with Desired State Configuration Part 9 – Create a Domain Join DSC resource

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes

In a previous post I talked about why I did not include a domain join in my example DSC configuration:

So why not incorporate a domain join in this configuration? There is a resource available in the Resource Kit which can handle this right?
Yes, there is a resource for this and a domain join would be the most practical example I would come up with as well. But….

The xComputer DSC resource, contained in the xComputerManagement module, has a mandatory ComputerName parameter. As I don’t know the ComputerName up front (it is assigned by VMM based on the name range provided in the resource definition), I cannot generate a configuration file up front. I could deploy a VM Role with just one instance using a pre-defined ComputerName in a configuration document, but this scenario is very inflexible and undesirable. In a later post in this series I will show you how to create a DSC resource yourself that handles the domain join without the ComputerName being known up front.

In this blog post we will author a DSC resource which handles domain joining without needing to know the ComputerName up front, which makes it suitable for the Pull Server scenario described in this series.

WMF 5 Preview release

When I started writing this blog series, Windows Management Framework 5 (WMF 5) November Preview was the latest WMF 5 installment available for Windows Server 2012 R2.

I created a DSC resource to handle the domain joining using the new class-based definition method, which at the time was labeled an “experimental design”. I did this because it is the way forward for authoring DSC resources, and I find the experience of building a resource this way a lot easier.

Then WMF 5 February Preview came along and broke my resource by changing the parameter attributes (e.g. [DscResourceKey()] became [DscProperty(Key)] and [DscResourceMandatory()] became [DscProperty(Mandatory)]). I fixed the resource for the February Preview release.

The DSC resource class definition is now declared a “stable design”. Because of this I don’t expect many more changes, and if a change is made, repairing the resource should be relatively easy.
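For reference, a minimal class-based resource skeleton in the February Preview syntax mentioned above. The class and property names are placeholders, and the Set/Test bodies are intentionally stubbed; the real domain join resource is built later in this post:

```powershell
[DscResource()]
class cDomainJoin
{
    # Key property: the domain to join
    [DscProperty(Key)]
    [string] $Domain

    # Credential allowed to perform the join
    [DscProperty(Mandatory)]
    [PSCredential] $Credential

    [void] Set()
    {
        # join the computer to $this.Domain using $this.Credential
    }

    [bool] Test()
    {
        # return $true when the computer is already a member of $this.Domain
        return $false
    }

    [cDomainJoin] Get()
    {
        return $this
    }
}
```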

If you want more insight into why class-defined resources are easier than the ‘script module’ resources introduced with WMF 4, I advise you to look at an excellent blog post by Ravikanth Chaganti. If you want to know more about the PowerShell class implementation in general, you should definitely have a Beer with Trevor Sullivan.

I tested my resource locally (by adding it to the module directory directly) and it worked great. I thought I had done it: a Pull Server friendly resource to handle the domain join without needing to provide the computer name up front, using the latest and greatest class-based syntax.

So I prepped the resource to be deployed for the first time via the Pull Server, and I was hit by a problem. I expected modules with class-defined resources to just work when delivered via the Pull Server. The Pull Server itself actually has no problems with them at all, but the Web Download Manager component of the LCM is apparently hard-wired to check for a valid “script module” structure (at the time of writing, using the February Preview).



As a workaround, you could add the class-defined module to the “C:\Program Files\WindowsPowerShell\Modules” directory of your VM Role images directly. This results in the download of the module being skipped, as it is already present (but you actually don’t want to do this, because maintaining DSC resources in an OS image is undesirable).


Because WMF 5 has not reached RTM yet, I think the behavior of the WebDownloadManager will be adjusted in a later version. In the meantime I have logged this behavior on Connect.

To make this post a bit more future-proof, I will show you how to author both the class-based module and the script-based module. Although you can only use the script-based module today, the class-based module should be usable in the near future as well.