Microsoft Cloud Platform

Quickly Checking the Azure Pack Version

Updates for Windows Azure Pack are released on a quarterly basis. These so-called Update Rollups contain bug fixes and functionality enhancements. The latest version is UR4, and the next one is due February 10th, 2015.

When you install Azure Pack via the Web Platform Installer, you are sure to get the latest version. The same goes for the downloader.ps1 script from the PowerShell Deployment Toolkit, used for offline installation of WAP.

If WAP was installed some time ago, you can quickly find the installed version by pressing CTRL+SHIFT+A in the Azure Pack portal.
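If you prefer not to open the portal, a quick way to check the installed components is to query the uninstall information in the registry on a machine running Azure Pack. This is a sketch, not an official method, and the exact DisplayName values may vary per component:

```powershell
# List installed Azure Pack components and their versions (run on a WAP machine)
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -like '*Windows Azure Pack*' } |
    Select-Object DisplayName, DisplayVersion |
    Sort-Object DisplayName
```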


This data can also be copied to your clipboard in JSON format:

{
  "items": [
    { "key": "UTC Time", "value": "2015-01-20 13:20:25Z" },
    { "key": "Browser", "value": "Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; Touch; .NET4.0E; .NET4.0C; .NET CLR 3.5.30729; .NET CLR 2.0.50727; .NET CLR 3.0.30729; rv:11.0) like Gecko" },
    { "key": "Language", "value": "en-us" },
    { "key": "Portal Version", "value": "3.19.8196.21 (rd_auxsmp_stable_v2_gdr.141021-1031)" },
    { "key": "PageRequestId", "value": "fb2cc9e9-83c5-4d20-950a-1933da326ca6" },
    { "key": "Email Address", "value": "DOMAIN\\Administrator ({1})" },
    { "key": "Subscriptions", "value": "" }
  ],
  "errors": ""
}
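If you paste that clipboard content into a file, PowerShell can extract the interesting values for you. For example, to pull out the portal version (the file name is just an example):

```powershell
# Parse the diagnostics JSON copied from the portal (CTRL+SHIFT+A)
$diag = Get-Content .\wap-diagnostics.json -Raw | ConvertFrom-Json
$version = ($diag.items | Where-Object { $_.key -eq 'Portal Version' }).value
$version
```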


You can register here for the Windows Azure Pack Community newsletter, which gives you early access to new features in Update Rollups.

Hyper-V NVGRE Gateway Toolkit

Hyper-V NVGRE Gateways allow Virtual Networks in Hyper-V to connect to the internet or to establish a VPN connection to the tenant's on-premises environment. An NVGRE Gateway Cluster is configured as a 2-node cluster, and you can download a Service Template for VMM to deploy the Gateways as a service tier. Personally, I experienced a lot of issues with the service template, as well as with the lack of Generation 2 support in Virtual Machine Manager (VMM) Service Templates, so I spent some time creating a PowerShell tool to accomplish the same thing. In this post we go through the complete deployment and migration of a Hyper-V NVGRE Gateway. In order to use the script, we need to have the following components in place:

  • Dedicated Hyper-V host cluster with shared storage. Best practice is to deploy these hosts in a separate (HNV) domain with a separate management network for the Hyper-V hosts and the Virtual Machines (not required for this toolkit).
  • Connectivity from the machine where you run this script to the VMM server, the dedicated Hyper-V cluster and the management network where the Gateway VMs will be deployed.
  • A VMM template with OS settings configured to join the Virtual Machine to the domain where the Hyper-V hosts are joined. The Hyper-V hosts and the Gateway Virtual Machines are required to be in the same domain.
  • IP Pool for the Internet and (HNV) Management networks.
  • A Run As Account configured in VMM that will be used to add the Gateway Cluster to VMM (this is a domain user).
  • A Run As Account configured in VMM that will be used for the local Administrator credentials on the Gateway VMs we are deploying.
  • Name resolution between the Management domain and the HNV domain.

Download the script from the TechNet Script Library and run it in PowerShell. Enter your VMM server or cluster name and click Connect:
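Under the hood, connecting to VMM from PowerShell looks roughly like this. The server name is a placeholder, and the VMM console (which provides the virtualmachinemanager module) must be installed on the machine running the script:

```powershell
# Connect to the VMM management server
Import-Module virtualmachinemanager
$vmmServer = Get-SCVMMServer -ComputerName 'vmm01.contoso.local'

# Verify the connection by listing the host clusters VMM manages
Get-SCVMHostCluster -VMMServer $vmmServer | Select-Object Name
```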


Azure Pack Wiki Available Via OneNote

Hi Azure Pack fans!


In the past 2 years or so, we have gained a lot of experience with Azure Pack. We created a Wiki to collect the scattered blogs, videos, articles and whitepapers in one place. But as we observed, this wiki has also stimulated the growth of new content, because how cool is it to be on this now famous Azure Pack Wiki <grin>.


Many features that you first see in Microsoft's public cloud gradually become available on-premises. I have added several new sections relevant to both public Azure and private Azure.


The Wiki has grown to such a length that the TechNet Wiki was no longer suitable, and publishing to it became a nightmare.


For this reason I am making the Azure Pack Wiki available in OneNote Online. If you open the Wiki in your local version of Microsoft OneNote, you can also use the hyperlinks.




You can find the latest update of the Azure Pack Wiki here for viewing and here for editing if you want to add content. Please add NEW behind the title of any content you add.



Hope you like it!



Best regards,


Hans Vredevoort

Hyper-V MVP

Twitter: @hvredevoort



Marc van Eijk

Windows Azure MVP

Twitter: @_marcvaneijk

Review of Hyper-V Network Virtualization Cookbook


Two weeks ago on Twitter, Ryan Boud, a senior cloud consultant working for Inframon, asked me if I would be interested in reviewing his book on Hyper-V Network Virtualization (HNV). Sometimes things can go very quickly: about one minute after his tweet, I confirmed my interest, not knowing this was in fact a book published by Packt.

As a side note, I'm personally not very fond of this publisher after a bad experience with their rather aggressive approach to candidate writers, and I quickly learnt they don't pay very well for the vast amount of work authors put into their writings. As a technical editor for several books by Aidan Finn et al., and as a contributor to a book called Microsoft Private Cloud Computing, I had gotten used to being paid for my work. Packt does not pay a cent to technical editors or reviewers, which is why I had previously declined to participate in another book.



But as you can see in the tweets above, I had already accepted the review and quickly received an email from Packt, offering me a login account for their site and a free download of Ryan Boud's book in PDF, ePub and Mobi formats. So I downloaded the PDF to see what this book had in store. Of course I'm interested in anything written on Hyper-V Network Virtualization, as it directly relates to the many CloudOS, System Center and Azure Pack projects I am involved in with different service providers and enterprises in Europe.

Hyper-V Network Virtualization was in fact introduced in Windows Server 2012, but it was not very useful without an NVGRE gateway. We had to wait until Windows Server 2012 R2 and VMM 2012 R2 came out with all the necessary ingredients to let this baby live in the real world. Microsoft decided to make an in-box gateway available, which could be implemented on a dedicated Hyper-V server or, preferably, an HNV cluster. By deploying a VMM Service Template, a single or highly available NVGRE Gateway guest cluster could be deployed and managed through VMM.

Authentication Error has Occurred (Code: 0x607) on Remote Console After You Apply UR4 for SPF 2012 R2

In THIS BLOG Marc van Eijk showed us the steps to configure the web.config file to set the Remote Console VMConnectHostIdentificationMode to IP.
With UR4 for SPF 2012 R2 the SPF team made some changes that result in authentication problems, and you will receive the following error message:



Our friend Stanislav Zhelyazkov already blogged about this on his blog, but we think it's proper to mention the solution here as well.


To overcome this problem we need to adjust the rewrite rules in the web.config file we created earlier.
We'll have to change the following line:

<action type="Rewrite" value="negotiate security layer:i:1 &#xD;&#xA;authentication level:i:0" />

to:

<action type="Rewrite" value="authentication level:i:0" />

The "outboundRules" section in the web.config will then look like this (including the closing tags):

<outboundRules>
  <clear />
  <rule name="Remote Console on IP Address" preCondition="VMConsole RDP File" enabled="true">
    <match filterByTags="None" pattern="negotiate security layer:i:1" />
    <conditions logicalGrouping="MatchAll" trackAllCaptures="true" />
    <action type="Rewrite" value="authentication level:i:0" />
  </rule>
  <preConditions>
    <preCondition name="VMConsole RDP File">
      <add input="{REQUEST_URI}" pattern="^.*(/VMConnection)$" />
      <add input="{RESPONSE_CONTENT_TYPE}" pattern="^application/x-rdp$" />
    </preCondition>
  </preConditions>
</outboundRules>

If you have any questions, leave a comment!

Darryl van der Peijl



Azure Pack PowerShell Broken?

I was struggling with the PowerShell API for Azure Pack. I imported the publish settings file as described in this post. In the past, when I wanted to get all the VMs, I could simply run:

And then

And that would give me all my VMs running on the Azure Pack subscription. But now it gives me "The remote server returned an error: (403) Forbidden.":
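For reference, the two commands I used to run (cmdlet names from the WAPack mode of the Azure PowerShell module; the subscription name is a placeholder) were along these lines:

```powershell
# Select the Azure Pack subscription that was imported from the publish settings file
Select-WAPackSubscription -SubscriptionName 'My WAP Subscription'

# List all VMs in that subscription - this is the call that now returns (403) Forbidden
Get-WAPackVM
```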

It looks like the last update of Azure PowerShell introduced some changes.

Let’s start to make it work:

You need to know the API URL for your Azure Pack environment and the publish settings URL.

When you run Get-WAPackEnvironment you see the two environments for public Azure and Azure China. We need to add our own WAPack environment there.

Let’s start by adding a WAPack Environment:
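A sketch of what that looks like; the environment name and URLs are examples for a typical deployment (the tenant public API usually listens on port 30006) and need to be replaced with your own values:

```powershell
# Register the Azure Pack environment with the Azure PowerShell module
Add-WAPackEnvironment -Name 'MyWAPack' `
    -PublishSettingsFileUrl 'https://wap.contoso.com/publishsettings' `
    -ServiceEndpoint 'https://api.contoso.com:30006/'
```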

Now we need to download the publish settings file from the Azure Pack environment. (If you have any older subscriptions left on your machine, I recommend cleaning up the certificates (certmgr.msc) and deleting the subscription (Remove-WAPackSubscription).)

This command will open up the portal to download the publishsettings file:
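On my system that is Get-WAPackPublishSettingsFile, pointed at the environment we just created (the environment name is the example from above):

```powershell
# Opens the browser at the publish settings download page of our WAPack environment
Get-WAPackPublishSettingsFile -Environment 'MyWAPack'
```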

Next we run this command to import the settings into our system. Be aware that -Environment is specified and attached to the environment we just created:
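A sketch with an example download path and the environment name from above:

```powershell
# Import the downloaded file and attach the subscription to our WAPack environment
Import-WAPackPublishSettingsFile 'C:\Temp\MyWAPack.publishsettings' -Environment 'MyWAPack'
```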

Now we can select the subscription and use it as normal:
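With the subscription attached to the right environment, the familiar commands work again (subscription name is a placeholder):

```powershell
Select-WAPackSubscription -SubscriptionName 'My WAP Subscription'
Get-WAPackVM   # returns the VMs again instead of (403) Forbidden
```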

Happy automating again!

Another Update on the VMQ Issue with Emulex and HP

I wrote my last blog on this topic 6 months ago. Meanwhile I have seen several firmware and driver updates from Emulex, usually followed by HP several weeks later. I'm still talking about the ongoing VMQ problems with HP/Emulex 554FLB CNAs in HP BL460c Gen 8 blade servers in c7000 blade enclosures. Meanwhile I have tested several incarnations of this firmware/driver combination in our own Azure Pack cloud environment. I had already found out in previous attempts that a switch to new Emulex firmware and HP/Emulex drivers, including the switch from VMQ disabled to enabled, can be disruptive, and hosts and VMs need to be restarted if things turn bad.
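If you need to check or toggle VMQ on the affected adapters while troubleshooting, the in-box NetAdapter cmdlets will do it (the adapter name is an example; as described above, plan for a host restart if things turn bad):

```powershell
# Show which adapters have VMQ enabled and their queue configuration
Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors

# Disable VMQ on a problematic CNA port
Disable-NetAdapterVmq -Name 'Ethernet 1'

# ...and re-enable it once a fixed driver is in place
Enable-NetAdapterVmq -Name 'Ethernet 1'
```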

And it did turn bad on all four occasions I have tried until now. In July 2014 our hopes were raised when Mark Jones from Emulex posted the first update on July 23, 2014. I tried the new bits out, but not until HP had released their OEM-specific driver for the CNA. It didn't take me very long to find out that my test guest cluster quickly got disconnected during a live migration, with one node being evicted from the cluster during the black-out. First opportunity: failed.

On August 4 a special release was issued for non-OEM, Emulex-branded cards. I decided not to try it on our HP/Emulex CNAs. I later found out that some people who did had to call in HP to replace their servers. Second opportunity: ignored.

Just over a month later, another update appeared on the Emulex blog letting us know that HP versions of the firmware/drivers had been made publicly available. We considered trying these, but found a mystery insider in the comments of our blog stating:

Without revealing NDA information, the HP driver released in September is not really a fix, it is also a workaround, with some caveats that don’t appear in the published notes. My understanding is that the fix delivered by Emulex was considered unstable by HP, and a truly “fixed” driver won’t be released until sometime in Q1 2015.

I didn’t have to think long. Third opportunity: Ignored


On October 21st 2014, another update appeared on the Emulex blog. I had become very suspicious of yet another update and lacked the time or drive to once more spend many hours testing this version. What changed my mind was fellow Hyper-V MVP Patrick Lownds sending a message to one of the MVP distribution lists, letting us know that HP had released their OEM version of the HP/Emulex 554FLB firmware and driver.

Windows Azure Pack Remote Console – Create the RD Gateway Farm with PowerShell


The community is all about sharing knowledge and helping each other. One of those super-motivated community members is Carsten Rachfahl. I finally met him at the MVP Summit. Somewhere during that week we had to walk from one building to another, and I noticed he was dragging along a mobile office. Carsten explained that it contained his complete datacenter; or, to be more precise, a laptop with some crazy specs that ran the complete Cloud OS. He had done a lot of work creating a completely automated, highly available installation of all the Cloud OS components, including the functional configuration, ending up with an environment that was demo-ready or (if it wasn't for the hardware) even production-ready. Not a single click needed after the deployment process. But there was one piece missing in his puzzle.


He had asked me a couple of times if I had a solution to complete his masterwork. But that is another thing about the community: time. Somehow you never have enough of it. This week another reminder popped up in a DL and I forced it to the top of my priority list. His question was:

I want to automate the configuration of a highly available RD Gateway for Windows Azure Pack Remote Console. How can I set the RD Gateway server farm members with PowerShell?

Carsten is a smart man. He had been struggling with this issue for a couple of months, and it was the last step to complete his masterwork. He had already looked at it from all possible angles.
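A hedged sketch of the direction the solution takes: the RemoteDesktopServices module exposes an RDS: drive, and on my lab systems the farm members live as items under the gateway farm path. The exact path can differ per version, so browse the drive first; server names are placeholders and the commands run on each gateway node:

```powershell
# Explore the RD Gateway configuration exposed by the RDS: drive
Import-Module RemoteDesktopServices
Get-ChildItem RDS:\GatewayServer

# Add both gateways as farm members (assumed path; verify with Get-ChildItem)
New-Item -Path RDS:\GatewayServer\GatewayFarm\Servers -Name 'rdgw01.contoso.com'
New-Item -Path RDS:\GatewayServer\GatewayFarm\Servers -Name 'rdgw02.contoso.com'
```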


Scale-Out File Servers – DNS Settings

Scale-Out File Server (SOFS) is a feature that is designed to provide scale-out file shares that are continuously available for file-based server application storage such as Hyper-V. Scale-out file shares provide the ability to share the same folder from multiple nodes of the same cluster.

In this blog we assume you already have played around with SOFS and know the basics.


In this example I'm using a 2-node Windows Server 2012 R2 cluster with 4x 10GbE adapters.
NIC1 and NIC2 are dedicated to iSCSI traffic and connect to the shared storage that presents LUNs to the SOFS cluster. These LUNs are added as CSVs, and this is where the SMB3 shares land.
NIC3 and NIC4 are converged using native NIC teaming, with team NICs for Management, CSV and SMB traffic.
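The converged team in this example can be built with the in-box NIC teaming cmdlets; the team name, adapter names and VLAN IDs are placeholders:

```powershell
# Create the team from NIC3 and NIC4
New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC3','NIC4' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Add team NICs (tNICs) for the separate traffic types
Add-NetLbfoTeamNic -Team 'ConvergedTeam' -Name 'CSV' -VlanID 20
Add-NetLbfoTeamNic -Team 'ConvergedTeam' -Name 'SMB' -VlanID 30
# The default team interface carries the Management traffic
```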


When you create the Client Access Point (CAP) for the SOFS, the cluster service registers in DNS, for the CAP, the IP addresses of all network interfaces that have the "Allow clients to connect through this network" setting enabled.
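You can inspect and change which networks allow client connections with the failover cluster cmdlets. The Role value 3 means "cluster and client" (these networks get registered for the CAP), while 1 means cluster-only; the network name below is an example:

```powershell
# Show the client-access setting per cluster network
Get-ClusterNetwork | Format-Table Name, Role, Address

# Stop a network (e.g. the CSV network) from being registered for the CAP
(Get-ClusterNetwork -Name 'CSV').Role = 1
```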