Win free tickets for Experts Live 2014!

Only forty days to go until the next Experts Live event! This year we have an amazing line-up again, with tracks about:

  • Windows Server
  • Azure
  • Hyper-V
  • System Center
  • Office 365
  • PowerShell
  • SQL

More than forty sessions! This time the event will be hosted at Cinemec in Ede.

We may give away two tickets that give you free entrance to this event. If you want to win one of these tickets, write a tweet explaining why anyone should attend Experts Live, using the hashtags #Hypervnu #ExpertsLive. The tweet must be posted between October 8th and 15th.

Designing VM Roles with SMA in mind

Blog by Ben Gelens.

SMA runbooks can be triggered by Windows Azure Pack events like VM Role creation. You can configure this on the VM Cloud automation tab by linking a runbook to a Windows Azure Pack known object such as “Microsoft VMRole”.

The objects have actionable events like Create, Scale, Repair, and Delete. You can link one runbook per actionable event per object.

A runbook is only available to link up with an event if it is configured with the SPF tag.
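Tagging can be done from the SMA PowerShell module. This is a hedged sketch: the endpoint URL and runbook name are assumptions, and the cmdlets are used as documented for SMA 2012 R2:

```powershell
# Assumed SMA web service endpoint and runbook name
$ep = 'https://sma01.contoso.local'

# Import and publish the runbook, then tag it with "SPF" so Windows Azure Pack
# offers it for linking to VM Cloud automation events
Import-SmaRunbook -WebServiceEndpoint $ep -Path 'C:\Runbooks\Invoke-VMRoleCreate.ps1'
Set-SmaRunbookConfiguration -WebServiceEndpoint $ep -Name 'Invoke-VMRoleCreate' -Tags 'SPF'
Publish-SmaRunbook -WebServiceEndpoint $ep -Name 'Invoke-VMRoleCreate'
```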

With this information in mind, you can see that if you are going to build a dependency between actionable events and SMA runbooks, you need to develop and commit to some standards. In this blog post I’ll show you some examples of what you could think of while developing your SMA Windows Azure Pack integration strategy for VM Roles.

Background information

In my daily job I see VM Roles being used by enterprise customers as a replacement for SCVMM service templates and VM templates. Although the VM Role is meant to be disposable, scalable, recyclable and easily replaceable, it is actually being used for VMs with long life spans that take advantage of some of the VM Role benefits, like scaling and resource extension configuration.

Hyper-V Configuration Toolkit

Guest blog by Mark Scholman

I have been working on a new script project to configure Hyper-V hosts. Every time I deploy new Hyper-V servers in a new customer environment, there are many things that require configuration, including little things that are easily forgotten. Let’s see how we can automate that process. Of course I realize that Bare Metal Deployment is available functionality in VMM, but because of the chicken-and-egg problem, and for customers who cannot use Bare Metal Deployment, I decided to create this tool. It uses the converged network setup as described in this blog post. What it does is the following:

  • Rename Adapters
  • Create Teams
  • Create Team Nics (tNics)
  • Set Network Configuration (MGT,LM,CSV)
  • Join Server to the Domain
  • Create a server-local administrators group in the domain
  • Allows you to create a new or join an existing cluster
  • Configure Cluster network names
  • Configure Cluster Live Migration subnet
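The networking steps above can be sketched roughly as follows. Adapter names, VLAN IDs, IP addresses and the domain name are placeholders, not the toolkit's actual values:

```powershell
# Rename the physical adapters (names are placeholders)
Rename-NetAdapter -Name 'Ethernet'   -NewName 'MGT1'
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'MGT2'

# Create the management team and the team NICs (tNics) for LM and CSV
New-NetLbfoTeam -Name 'MGT-Team' -TeamMembers 'MGT1','MGT2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
Add-NetLbfoTeamNic -Team 'MGT-Team' -Name 'LM'  -VlanID 20
Add-NetLbfoTeamNic -Team 'MGT-Team' -Name 'CSV' -VlanID 30

# Assign IP configuration (example addresses)
New-NetIPAddress -InterfaceAlias 'LM'  -IPAddress 10.0.20.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias 'CSV' -IPAddress 10.0.30.11 -PrefixLength 24

# Join the domain and reboot
Add-Computer -DomainName 'contoso.local' -Restart
```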

The following items are on the to-do list and will be added in upcoming releases:

  • Configure Storage network (iSCSI & SMB3)
  • Use of different topologies for converged networking as described here
  • Using JEA (Just Enough Admin) or PSCustomSessionConfiguration for deployment of Hyper-V hosts

How to use the toolkit

Download the latest version from the TechNet Gallery. If you would like to help me optimize and extend the tool, fork it on GitHub.

On the newly provisioned Hyper-V server, start Deploy-HyperVHost.ps1. On the Configure NICs tab, select the adapters you want to use for Management (MGT / LM / CSV) and click Set Management Adapters:

Notice that the list box refreshes with the new names for the adapters. Next, select the adapters you want to use for the VM network and click Set VMNet Adapters.


Hyper-V Networking NVGRE do’s and don’ts

This blog is written by Mark Scholman (@markscholman).

This post will explain the do’s and don’ts of Hyper-V Network Virtualization, especially in scenarios where we want to bring a solution to Windows Azure Pack and/or System Center Virtual Machine Manager. First things first: for those who need to understand the basics of Hyper-V Network Virtualization, I recommend starting with the article here.

This blogpost is based on the following use case:

A customer wants to host their infrastructure at a Service Provider. The Service Provider utilizes Hyper-V Network Virtualization, managed with System Center Virtual Machine Manager and optionally Windows Azure Pack. The customer currently has the following networks:

  • Production Network
  • DMZ Network

The customer prefers to bring their own Linux firewall and use it as the default gateway for their networks. The customer network consists of the following subnets:

Production subnet:
DMZ subnet:

For each subnet, the first possible IP address (normally x.x.x.1) is automatically provisioned as the default gateway. The FW/gateway (MS-TEST-A01) is configured with x.x.x.254 on each NIC. The default gateway in the firewall is set to and this VNet is enabled with Internet connection and NAT.


Not a really exciting network configuration, you might think. We will change the default gateway in each machine to x.x.x.254 (the IP of the virtual firewall).

The following image displays two provisioned virtual networks. VNET-A is configured with a gateway:


The virtual machines use the following IP configuration:


Bare Metal Post-Deployment – Running the Post-Deployment Automation and Bonus – Part 5

Earlier this week I introduced Ben Gelens and his blog series on Hyper-V Bare Metal Post-Deployment. Here is part 5, a continuation on the topic of Constructing the Post-Deployment automation. This blog post series consists of 5 parts:

  1. Part 1: Introduction and Scenario
  2. Part 2: Pre-Conditions
  3. Part 3: Constructing the Post-Deployment automation
  4. Part 4: Constructing the Post-Deployment automation continued
  5. Part 5: Running the Post-Deployment automation and Bonus (this blog post)


Running the Post-Deployment automation

So you have arrived at the last blog post in this series. I hope you have enjoyed everything you have read and, most importantly, have learned what you were seeking to learn. As a little extra, I’ve put in a bonus script to finalize a Hyper-V cluster configuration.

When everything is in place, we simply run the “Run-HyperVPostDeployment” runbook.

Output of child runbooks is returned to the master runbook by using return statements, sending the results back as strings. You only have to check the job summary of the master runbook to get a view of how things went.

When something didn’t go well, you can see at which stage the failure occurred and start troubleshooting from there. Check the VMM job log for more detailed information.

I didn’t include cleanup steps, but it’s really easy to restart the process: remove all logical switches from the host and simply restart the master runbook.

Bonus Post Cluster Deployment Configuration script

Run the following script on one of the Cluster nodes (this has to run only once per cluster). The script will:

  • Configure the cluster to register its PTR records in the reverse lookup zone
  • Rename the cluster networks
  • Configure the correct cluster network metrics
  • Configure the networks allowed for live migration
  • Configure 512 MB of RAM as CSV block cache
  • Configure the SameSubnetThreshold to 20 seconds
  • Configure the cluster service shutdown timeout to 30 minutes
  • Rename the CSV mount points to reflect the volume names
  • Remove the quorum drive letter
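Several of the settings above can be sketched with the failover clustering module. Network names, subnets and metric values below are placeholders; the cluster property names are the standard ones:

```powershell
Import-Module FailoverClusters

# Rename cluster networks based on their subnet (example addresses)
(Get-ClusterNetwork | Where-Object { $_.Address -eq '10.0.30.0' }).Name = 'CSV'
(Get-ClusterNetwork | Where-Object { $_.Address -eq '10.0.20.0' }).Name = 'LM'

# Register the cluster name's PTR records in the reverse lookup zone
Get-ClusterResource 'Cluster Name' | Set-ClusterParameter -Name PublishPTRRecords -Value 1

# Network metric: the lowest metric is preferred for CSV traffic
(Get-ClusterNetwork 'CSV').Metric = 900

# Allow live migration only on the LM network by excluding all others
$lmId    = (Get-ClusterNetwork 'LM').Id
$exclude = (Get-ClusterNetwork | Where-Object { $_.Id -ne $lmId } |
            ForEach-Object { $_.Id }) -join ';'
Get-ClusterResourceType 'Virtual Machine' |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude

$cluster = Get-Cluster
$cluster.BlockCacheSize           = 512   # 512 MB CSV block cache
$cluster.SameSubnetThreshold      = 20    # ~20 s with the default 1-second heartbeat delay
$cluster.ShutdownTimeoutInMinutes = 30    # cluster service shutdown timeout
```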

Please find a link to all script files here.

Bare Metal Post-Deployment – Constructing the Post Deployment Automation (continued) Part 4

Earlier this week I introduced Ben Gelens and his blog series on Hyper-V Bare Metal Post-Deployment. Here is part 4, a continuation on the topic of Constructing the Post-Deployment automation.

This blog post series consists of 5 parts:

  1. Part 1: Introduction and Scenario
  2. Part 2: Pre-Conditions
  3. Part 3: Constructing the Post-Deployment automation
  4. Part 4: Constructing the Post-Deployment automation continued (this blog post)
  5. Part 5: Running the Post-Deployment automation and Bonus


Constructing the Post-Deployment automation (continued)


This blog post describes the model- and configuration-specific part of the automation, which is called upon by the master runbook described in the previous post.

Child Runbook “Config-BL460CGen8”.

The Config-BL460CGen8 runbook is called as a child runbook when the Hyper-V host involved is a BL460c Gen8 blade. The runbook is not started inline but is started as its own job using the Start-SmaRunbook command (my preferred method). Differentiation between configurations from different environments is made through the Type parameter.

The runbook will put the host through the following process:

  • Implement fully converged networking, including the MAC address fix (a bug in SCVMM is circumvented).
    • Create the logical Infra switch on a designated Infra team NIC which is not used for management.
    • Run bl460c_converged.ps1 on the host (for more details see the script below).
    • Refresh SCVMM host information.
  • Create the VM logical switch on designated VM team NICs.
  • Add LM and CSV vNICs to the Infra switch.
  • Configure live migration settings.
  • Run Hostdeploy scripts from the Hostdeploy custom resource (for more details see the script section).
    • Postdeploy.ps1
    • BL460_npiv.ps1
      • Reboots the host for NPIV to become available
    • BL460_vmsan.ps1
    • BL460_xxxx_vmq.ps1

The script returns a Success statement to the master runbook when the entire child runbook has run successfully. If something goes wrong during the child runbook process, the process is terminated for the host and a “Failed at stage …” statement is returned to the master runbook (effectively terminating the process entirely for that host).
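Starting the child runbook as its own SMA job could look roughly like this. The parameter names and endpoint form are assumptions; Start-SmaRunbook and Get-SmaJob are the documented SMA cmdlets:

```powershell
# Start the hardware-specific child runbook as its own SMA job
# ($SMAWebServer would come from the SMA variable asset; parameter names are assumed)
$params = @{ VMHostName = $vmHost.Name; Type = 'Production' }
$jobId  = Start-SmaRunbook -WebServiceEndpoint "https://$SMAWebServer" `
                           -Name 'Config-BL460CGen8' -Parameters $params

# Poll the job until it reaches a terminal state
do {
    Start-Sleep -Seconds 30
    $job = Get-SmaJob -WebServiceEndpoint "https://$SMAWebServer" -Id $jobId
} until ($job.JobStatus -in 'Completed','Failed','Stopped')
```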


Bare Metal Post-Deployment – Constructing the Post-Deployment Automation – Part 3

Earlier this week I introduced Ben Gelens and his blog series on Hyper-V Bare Metal Post-Deployment. Here is part 3 on the topic of Constructing the Post-Deployment automation.

This blog post series consists of 5 parts:

  1. Part 1: Introduction and Scenario
  2. Part 2: Pre-Conditions
  3. Part 3: Constructing the Post-Deployment automation (this blog post)
  4. Part 4: Constructing the Post-Deployment automation continued
  5. Part 5: Running the Post-Deployment automation and Bonus


Constructing the Post-Deployment automation

To make the process extensible, we use a master runbook to determine the common tasks that need to be executed on all hosts, and to decide which child runbook to call when tasks become specific. When a runbook runs a custom resource script, the script is described after the runbook.

This chapter will be split into 2 blog posts. Let’s start with the master runbook.

Master Runbook “Run-HyperVPostDeployment”.

First, the runbook acquires the credentials and FQDN needed to connect to the SCVMM environment. It then passes the data to the Get-HyperVHosts child runbook and asks it to deliver Hyper-V hosts which are ready for post-deployment. For every host which is returned, a connectivity check is performed against WinRM. When a host is responsive, it is placed in maintenance mode and its hardware type is queried and stored in memory together with the SCVMM host object. Every non-responsive host is filtered out. All responsive hosts then run the following process in parallel (two at a time, because of the throttle limit set in the $throttlelimit variable; 5 is the maximum allowed by workflow):

  • Group policy update (make sure all GPO firewall rules are applied)
  • Filtering of hardware type and environment
  • Processing of the specific child runbook (if run successfully, continue; else break for the host)
  • Perform the NIC registry fix
  • Switch off maintenance mode
  • Mark the host as finished with post-deployment (Post Deployment Status custom property = “Finished”)


Side note
InlineScript sections share the same session, which is why you only have to import the SCVMM PowerShell module once. For more info see about_InlineScript.
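The skeleton of such a master runbook might look like this sketch; the host list parameter is hypothetical and the per-host steps are left as comments:

```powershell
workflow Run-HyperVPostDeployment {
    param([string[]]$HostNames)   # hypothetical parameter for the hosts to process
    $throttlelimit = 2            # 5 is the maximum allowed by workflow

    foreach -parallel -throttlelimit $throttlelimit ($hvHost in $HostNames) {
        InlineScript {
            # InlineScript sections share a session, so the module
            # import is only needed once
            Import-Module virtualmachinemanager
            # ... per-host steps: GPO update, hardware-specific child runbook,
            #     NIC registry fix, maintenance mode off, mark "Finished"
            "Processed $Using:hvHost"
        }
    }
}
```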

Custom Resource: Nic_registryfix.ps1

Nic_registryfix.ps1 will run on all hosts and will handle the following tasks:

  • Query all VM switches
  • Look up the NIC to which each switch is bound
  • If the NIC is a team multiplexor NIC, look up all team member adapters
  • For all NICs found, add registry items to disable DNS dynamic update and DHCP
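A hedged sketch of what such a script could do. The registry value names are assumptions, not confirmed from the original script:

```powershell
foreach ($switch in Get-VMSwitch | Where-Object { $_.SwitchType -eq 'External' }) {
    # NIC the switch is bound to
    $bound = Get-NetAdapter | Where-Object {
        $_.InterfaceDescription -eq $switch.NetAdapterInterfaceDescription
    }

    # If it is a team multiplexor NIC, expand to the physical team members
    $targets = if ($bound.InterfaceDescription -like '*Multiplexor*') {
        Get-NetLbfoTeamMember -Team $bound.Name |
            ForEach-Object { Get-NetAdapter -Name $_.Name }
    } else { $bound }

    # Disable DNS dynamic update and DHCP per interface (value names assumed)
    foreach ($t in $targets) {
        $key = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\$($t.InterfaceGuid)"
        Set-ItemProperty -Path $key -Name DisableDynamicUpdate -Value 1 -Type DWord
        Set-ItemProperty -Path $key -Name EnableDHCP           -Value 0 -Type DWord
    }
}
```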


Child Runbook “Get-HyperVHosts”.

The Get-HyperVHosts runbook queries SCVMM for all Hyper-V hosts and checks either whether the “Post Deployment Status” custom property is empty (-ReadyForPostDeploy $true) or whether it has the value provided by the invoker (e.g. -PostDeploymentStatus “Finished”). If the non-mandatory parameters are omitted, it returns all Hyper-V host objects. This runbook can be called directly or as a child runbook; in this case it is called inline as a nested child runbook.
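The core query can be sketched with the SCVMM module as follows ($VMMServer stands in for the SMA variable asset; the custom property name is taken from the text):

```powershell
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName $VMMServer | Out-Null

# Hosts whose "Post Deployment Status" custom property has no value
# are the ones still awaiting post-deployment
$prop = Get-SCCustomProperty -Name 'Post Deployment Status'
Get-SCVMHost | Where-Object {
    -not (Get-SCCustomPropertyValue -InputObject $_ -CustomProperty $prop)
}
```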


I’ll be using multiple methods to show the possibilities in SMA.


Bare Metal Post-Deployment – Pre-Conditions Part 2

Earlier this week I introduced Ben Gelens and his blog series on Hyper-V Bare Metal Post-Deployment. Here is part 2 describing the pre-conditions.

This blog post series consists of 5 parts:

  1. Part 1: Introduction and Scenario
  2. Part 2: Pre-Conditions  (this blog post)
  3. Part 3: Constructing the Post-Deployment automation
  4. Part 4: Constructing the Post-Deployment automation continued
  5. Part 5: Running the Post-Deployment automation and Bonus



In this blog post all fundamentals will be put into place so we have some hooks and workable items for the automation to utilize.

I’ll assume you have a working SCVMM and SMA environment and are already able to perform the bare metal deployment process itself. For an excellent guide on how to start with SMA, download the SMA Whitepaper written by MVP Michael Rueefli. I also assume Windows Azure Pack is in place to front-end SMA.

You have to install the SCVMM Console on the SMA Runbook Worker for the described runbooks to work.

Host Groups

Because one server model can of course be utilized for multiple environments (e.g. Production and Test), I have configured multiple host groups for these servers. The host groups are used as a filter mechanism for the SMA runbooks to apply different configurations where they exist (e.g. uplink port profile or differences in hardware resources).

Hyper-V Host Custom Property

To determine which Hyper-V hosts are subject to the post-deployment process, a custom property is implemented and bound to the Hyper-V host object. In this case we define that if this property on a host has no value, the post-deployment process will run against that host.

To create a custom property in SCVMM, run the following cmdlet:
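The cmdlet could look like this (the property name is taken from the text; the -AddMember value scopes it to VM host objects, and the description is an assumption):

```powershell
# Create the "Post Deployment Status" custom property for Hyper-V host objects
New-SCCustomProperty -Name 'Post Deployment Status' -AddMember @('VMHost') `
    -Description 'Tracks whether a host has been through post-deployment'
```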

This host has finished its post deployment process and won’t be part of the next cycle.

Custom Resource

Post-deployment tasks are run through a series of scripts, coordinated by an SMA runbook and executed on the Hyper-V hosts. SCVMM will be used to deploy and start the scripts. The scripts and supporting utilities are stored in an SCVMM custom resource. To create a custom resource, simply create a new folder in your SCVMM library and name it with a “.cr” extension, then refresh your library.


During post-deployment, HP Smart Update Manager (HPSUM) will be invoked to install drivers and additional HP software. HPSUM and its packages must be present on a file server share (a custom resource script described later will invoke HPSUM). Download the latest Service Pack for ProLiant (SPP) and extract the content of the swpackages folder to the share.

During the post-deployment process, the content of the share is copied entirely to the Hyper-V host so HPSUM can be executed with a local repository (earlier versions ran well with a repository on a share; the newer versions unfortunately do not). You can remove the Virtual Connect firmware from the HPSUM share; this will dramatically reduce the amount of data which needs to be copied.

Download the latest version of the “OneCommand Fibre Channel and Converged Network Adapter Configuration Utility” from the website and add the CP package to the HPSUM share. This utility will be used by “BL460_npiv.ps1” (a custom resource script) to enable NPIV support.

SMA Variables and Stored Credentials

For SMA to interface with other components, some variables and credentials need to be created as an SMA Asset.

  • SMA SCVMM Service Account (Credential): domain account, member of the SCVMM Administrator role
  • VMMServer (String variable): FQDN of the VMM server
  • SMAWebServer (String variable): FQDN of the SMA Web Administration server

The SMA Runbook Worker account must be an SMA administrator to start child runbooks via the SMA web service. This can be done either by making the service account a member of the Active Directory group specified during installation, or by making it a member of the smaAdminGroup local group on the SMA web server. You can of course also create a credential asset and adjust the runbook to use those credentials when starting the child runbook, which would probably be better.

SCVMM Stored Credentials

For SCVMM to run scripts on Hyper-V hosts, an SCVMM Run As account with local administrator rights on the Hyper-V hosts needs to be in place.

  • SCVMM Admin Run As Account: domain account, member of the local Administrators group on the Hyper-V hosts

Rename Uplink Port Profile Sets

When you configure a logical switch with an uplink port profile, an uplink port profile set is created. Its name is based on the uplink port profile display name with a GUID appended. Since we need to specify the uplink port profile set when we implement the logical switch on the Hyper-V host, and the GUID complicates this a little, we will strip the GUID off. This is not a mandatory pre-condition, but it is helpful for this blog.

To do this for all uplink sets run:


To do this for all uplink sets associated with a logical switch run:


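Both renames can be sketched as follows. The cmdlet and property names are as I believe they appear in the VMM 2012 R2 module, and the logical switch name is a placeholder; treat this as an assumption-laden sketch:

```powershell
# Rename every uplink port profile set after its native uplink port profile,
# effectively stripping the appended GUID
Get-SCUplinkPortProfileSet | ForEach-Object {
    Set-SCUplinkPortProfileSet -UplinkPortProfileSet $_ -Name $_.NativeUplinkPortProfile.Name
}

# The same, limited to the sets associated with one logical switch (name is a placeholder)
Get-SCUplinkPortProfileSet | Where-Object { $_.LogicalSwitch.Name -eq 'VMSwitch' } |
    ForEach-Object {
        Set-SCUplinkPortProfileSet -UplinkPortProfileSet $_ -Name $_.NativeUplinkPortProfile.Name
    }
```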

Bare Metal Post-Deployment – Introduction and Scenario Part 1

Allow me to introduce Ben Gelens, who has just joined INOVATIV as a consultant focusing on Windows Server, Hyper-V, Azure, cloud and automation. Ben got my attention after writing several superb blogs on Virtual Machine Manager.

Several weeks ago Ben sent me a huge blog on Hyper-V Bare Metal Post-Deployment, which I reviewed and advised him to split into a number of smaller blogs. Ben kindly accepted my wish to publish his blog series, so please enjoy!



In this series of blog posts I will describe a method for Hyper-V host bare metal post-deployment which I developed and have been using in a production environment. This method is of course just one example of a viable solution; it is not per se THE method. My intention is that you can gather enough information from these posts to develop your own method.

This method uses:

  • SCVMM Custom Resource containing PowerShell configuration scripts and supporting executables (a.k.a. Generic Command Execution, GCE).
  • Service Management Automation (SMA) runbooks to orchestrate the post installation process and parallelize the process across multiple hosts.

When this blog post series is done, the entire post deployment sequence can be fully automated and made available at the push of ONE button!


When I first started working with the bare metal deployment features of SCVMM 2012, I soon found this blog written by Mike DeLuca. My work started from that blog post, so much credit goes to him!

These blog posts are not intended to be a PowerShell or SMA course. I will assume you are able to read and interpret the PowerShell lines provided and will not go into detail what every line does. Instead I’ll briefly describe what happens when a script or runbook is executed.

This blog post series consists of 5 parts:

  1. Part 1: Introduction and Scenario (this blog post)
  2. Part 2: Pre-Conditions
  3. Part 3: Constructing the Post-Deployment automation
  4. Part 4: Constructing the Post-Deployment automation continued
  5. Part 5: Running the Post-Deployment automation and Bonus

So that’s it for the introduction, let’s move on!


For this blog post I have 2 environments, Production and Test, with the same hardware (HP BL460c Gen8). Servers have been bare metal deployed with Windows Server 2012 R2.

All blades are equipped with 2 Emulex FlexibleLOM I/O cards (2x 40 Gb/s full duplex). Let’s elaborate a little so you have a better understanding (what I explain here applies to the 554 family of FlexibleLOM cards; other cards can have different options).

A FlexibleLOM I/O card can be divided into multiple FlexNICs and a FlexHBA (Host Bus Adapter). FlexibleLOM cards have 2 ports, each mapped to a respective interconnect bay port (FlexFabric or Flex10). The cards can have a maximum of 4 devices per port (3 FlexNICs and 1 FlexHBA per port). The FlexHBA can be either an FCoE or an iSCSI HBA.

In this case the FLB card (FlexibleLOM for Blade, a mandatory card installed at manufacturing) will have 1 FCoE FlexHBA per port to facilitate FCoE connectivity for the CSV and quorum volumes and virtual Fibre Channel switches (2 Gbps per port is still unassigned, which can be used for other means, e.g. iSCSI VM switches). The Mezzanine card will have 2 FlexNICs per port: 1 for infrastructure connectivity and 1 for VM connectivity.

Bandwidth for all FlexNICs and FlexHBAs is assigned with minimum guaranteed values but is allowed full capacity when this is available and required. At the Windows level, all NICs are 10 Gbps, which is a behavioral change introduced in Virtual Connect firmware 4.01 (at firmware levels before 4.01 the bandwidth was assigned statically as a maximum value and was reported to Windows as such).

Windows is presented with 10 Gbps adapters. Because of this, VMQ is enabled the moment a VM switch is attached to an adapter.


Because port 1 and port 2 of each adapter break out to different interconnect bays, which in turn break out to different core switches and/or Fibre fabrics, full path redundancy can be achieved by teaming the FlexNICs at the OS level and by using MPIO for the Fibre connections.


Windows Azure Pack Tenant Public API new cmdlets

Two months ago I published a blog on the Windows Azure Pack Tenant Public API. This API allows you to interact with your cloud services using PowerShell over the internet with certificate authentication. The Microsoft Azure PowerShell module provided cmdlets for Windows Azure Pack as well. As you might remember from that blog, VM Role cmdlets were lacking. There was a workaround that worked, but it was somewhat complex to configure and maintain.

A new version of the Microsoft Azure PowerShell module has been released. This new version also contains various new cmdlets for Windows Azure Pack:

  • New-WAPackCloudService
  • Get-WAPackCloudService
  • Remove-WAPackCloudService
  • New-WAPackVMRole
  • Get-WAPackVMRole
  • Set-WAPackVMRole
  • Remove-WAPackVMRole
  • New-WAPackVNet
  • Remove-WAPackVNet
  • New-WAPackVMSubnet
  • Get-WAPackVMSubnet
  • Remove-WAPackVMSubnet
  • New-WAPackStaticIPAddressPool
  • Get-WAPackStaticIPAddressPool
  • Remove-WAPackStaticIPAddressPool
  • Get-WAPackLogicalNetwork

As you can see it also contains new cmdlets for interacting with cloud services and the VM Role.

You can download Microsoft Azure PowerShell module 0.8.6 through the Web Platform Installer with this link.

The VM Role is a custom configuration that can consist of many required and optional fields. As with the GUI wizard, some values must be provided to the PowerShell cmdlet. Creating a new VM Role with the New-WAPackVMRole cmdlet requires some input.


If we take a look at the ResourceDefinition of an existing VM Role, there is still some configuration required, but it is a huge improvement compared to the previous procedure.
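One way this could fit together, reusing an existing VM Role's ResourceDefinition as a starting point. The cmdlet names come from the list above, but the parameter names and role/service names are assumptions against module 0.8.6:

```powershell
# Take the resource definition of an existing VM Role as a template
$existing = Get-WAPackVMRole -Name 'ExistingRole'
$resDef   = $existing.ResourceDefinition

# A cloud service acts as the container for the VM Role
New-WAPackCloudService -Name 'MyCloudService' -Label 'MyCloudService'
New-WAPackVMRole -Name 'MyVMRole' -CloudServiceName 'MyCloudService' `
    -ResourceDefinition $resDef
```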

