Posts tagged Windows Azure Pack

VMM 2012 R2 UR7 – Issue with NVGRE Gateways

Last week we implemented Update Rollup 7 for System Center Virtual Machine Manager 2012 R2 at one of our customers. After the implementation we experienced some strange issues on the NVGRE gateway cluster. When a tenant removes his network from the Azure Pack portal, the network is removed from VMM and the VMM database, but the resource stays online on the NVGRE cluster. This isn’t a problem until a failover occurs. Then that resource, and only that resource, will fail to start on the other node. Still not a big issue: all other networks come online and function normally.

BUT, the cluster role is in a failed state and will start playing tennis between the two nodes, trying to bring the resource completely online. And this becomes annoying, because with each failover of a node the connection for the tenant VMs drops for a second.

The solution for now is simple, but not something you would like to do every day until the fix is there.
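A minimal sketch of that manual cleanup, assuming the orphaned tenant network surfaces as a failed resource in the gateway’s cluster role (the cluster, group and resource names below are placeholders; verify in your own environment before removing anything):

```powershell
Import-Module FailoverClusters

# Inspect the gateway cluster role and spot the failed (orphaned) resource
Get-ClusterGroup -Cluster 'NVGRE-GW-CL01' -Name 'RAS Cluster Group' |
    Get-ClusterResource |
    Format-Table Name, State, ResourceType -AutoSize

# Remove the resource that no longer exists in VMM so the role can come fully online
Get-ClusterResource -Cluster 'NVGRE-GW-CL01' -Name 'Orphaned Tenant Network Resource' |
    Remove-ClusterResource -Force
```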

Get more value out of your Windows Azure Pack environment

In the last two years we have performed numerous deployments of Windows Azure Pack, enabling the Cloud OS for service providers and enterprises. We have gained serious experience with these engagements. Besides technical knowledge, we have also learned that the success of cloud services starts with the people in the organization itself. Many organizations still have separate departments for the underlying fabric components. These departments work in silos, each having their own targets and priorities. ITSM tooling is in place for digital processes between the silos. In theory this sounds like a solid construction, but in reality it slows these departments down, forcing the internal customer to look at alternative cloud services and resulting in shadow IT.

The key to a successful project is the collaboration of all the involved departments. Depending on the size of the organization you can form a team consisting of all the departments, or of a key user from each department. It is crucial that they start to understand the value of abstraction, self-service and automation. Normally they already have parts of that implemented within their own department, but now it spans all departments.

Don’t get me wrong. This is not easy. It is actually the hardest part of a successful cloud transformation.

I have heard a lot of folks say that Windows Azure Pack and all the components its cloud services depend on are hard to implement. I felt like that when I started with Windows Azure Services for Windows Server (the predecessor of Windows Azure Pack) in 2012. But in the end it is just like learning to speak and write another language: once you master it, it is repeatable. You can dictate the software. How different this is with people. Every person has their own language that you must master to some degree, but you can never dictate them.

I was asked this week: “What is the reason that you are so successful in the Netherlands with Cloud OS deployments?”

It is a small country, three and a half hours is about the longest drive you can do, without driving in circles of course or hitting traffic (the downside of a lot of people on a tiny piece of earth).

I’m convinced it comes down to this:


Lessons learned: DSC Pull Security and Integration with WAPack [DUPSUG]


At the last Dutch PowerShell User Group (DUPSUG) meetup on May 26th I presented a session on end-to-end secure DSC Pull Services. The demo scripts can be found here: and I have recorded the demo and posted it on YouTube for your review.

On top of that, I demoed interaction / integration between components like DSC Web Pull Server, PKI, VM Role, SMA and Hyper-V.

In this blog post I’m going to describe and share the demo pieces I showed for the integration / interaction demo. It builds on the previous 10-part blog series on DSC integration with Windows Azure Pack VM Roles, so if you are missing pieces or prerequisite knowledge, please start reading here:

During this post, links will be provided to download the presentation and the files.


Azure Stack – What’s new and what’s changed

At Ignite 2015 Microsoft announced Microsoft Azure Stack. With this version, Microsoft literally brings public Azure to your own datacenter: Azure Stack will contain the same bits as they run in Azure. That looks really promising, as I can’t even imagine how many services they offer in Azure. The big keyword here is consistency. When you as a tenant create a new deployment, you can take that deployment and run it in Azure, at a service provider running Azure Stack, or in your own datacenter if you are running Azure Stack. And that’s a big change versus the last two editions of Azure Pack. But as Daniel Neumann mentioned on his blog, it is not an updated version of Azure Pack but an entirely new product. In this blog post I am going to highlight the new features that make all this consistency possible. You can see in the image below that Azure Stack and Azure consist of the same building blocks, starting with the cloud infrastructure, or as we also know it, the fabric. On top of that they provide the Azure portal, and on top of that we deploy our services, no matter if they run Windows or Linux.


Windows Azure Pack – VM Checkpoints

The functionality that we all (and our customers) have been waiting for becomes reality with the release of WAP UR6: VM checkpoints.

This is another great example of the WAP team interacting with the community and prioritizing based on UserVoice; VM Checkpoints was number 4 with 225 votes.

With UR6 we have some new buttons in the bottom menu of the Virtual Machine view:
WAP UR6 Menu
These allow us to create checkpoints, but also to restore checkpoints from Windows Azure Pack.

Creating Checkpoints

When we click the checkpoint button we get a dialog to create a new checkpoint:
Give it a name and a good description of why you are making the checkpoint.

Windows Azure Pack will now instruct Virtual Machine Manager (VMM) through Service Provider Foundation (SPF) to make a checkpoint of the selected Virtual Machine.
Create Checkpoint VMM

When the job is finished we can take a look in Hyper-V Manager to see that the checkpoint was actually made.
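If you prefer to verify from PowerShell instead of Hyper-V Manager, a hedged sketch (the VM and host names are examples; Get-SCVMCheckpoint ships with the VMM console module, Get-VMSnapshot with the Hyper-V module):

```powershell
# Run from a machine with the VMM console installed
Import-Module virtualmachinemanager
$vm = Get-SCVirtualMachine -Name 'TenantVM01'
Get-SCVMCheckpoint -VM $vm | Select-Object Name, Description, AddedTime

# Cross-check directly against the Hyper-V host
Get-VMSnapshot -VMName 'TenantVM01' -ComputerName 'HV-NODE01'
```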

Azure Pack – Recover from a disaster

This post is not about recovering from a failed datacenter or Azure Site Recovery procedures. No, it’s just “simple” recovery when everything went down for whatever reason. I mean, we are humans and we make mistakes. Even in an environment like Azure they make mistakes. Remember the storage outage last year? That was a human making a mistake while pushing an update. Sorry… shit happens.

But what if something happens to the power, storage or networking, and when the VMs suddenly come back up you log in to Azure Pack and an error is on your screen? When we run projects at customers we build from the ground up and “assume” it runs forever. In this post I want to discuss the proper way of bringing an environment back up and running, and the critical pointers you need to be aware of.

So let’s start the environment :)

First we need to bring up all the domain controllers for each domain where fabric and Azure Pack components are running. If an AD FS instance is co-located on a domain controller and uses SQL, check after booting SQL whether the services started correctly.

If the Hyper-V servers went down, start them first.

The first two steps depend on how you have built your environment. You know the chicken-and-egg story? When you have one physical domain controller, start it first. If you only have virtual domain controllers, then start a Hyper-V server first. Just be careful when you have virtual domain controllers and your Hyper-V server is joined to the domain hosted by those domain controllers: make sure it can start the VM when booting the first Hyper-V node. You would not be the first where the cluster couldn’t start because the domain was not up, and the domain couldn’t be brought up because the storage and/or cluster couldn’t start.
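To take some of the guesswork out of this, a small sketch that checks the critical services once everything has booted (server and service names are examples for a typical fabric; adjust to your own environment):

```powershell
# Each entry maps a server to the services that must be running on it
$checks = @(
    @{ Server = 'DC01';  Services = 'NTDS', 'DNS' }
    @{ Server = 'SQL01'; Services = 'MSSQLSERVER', 'SQLSERVERAGENT' }
    @{ Server = 'VMM01'; Services = 'SCVMMService' }
)

foreach ($check in $checks) {
    # Get-Service -ComputerName works in Windows PowerShell (this era's default shell)
    Get-Service -ComputerName $check.Server -Name $check.Services |
        Select-Object MachineName, Name, Status
}
```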

VM Usage not updated in Azure Pack

One of our customers was building a billing solution for their Azure Pack environment. We just finished the implementation of the fabric and the configuration of Azure Pack. We started testing and all worked fine. We also configured the integration between SCOM and VMM for usage and configured usage in Azure Pack.

After a couple of weeks we started to implement UR5 for System Center and Azure Pack, and all looked good… until a developer came to me after a while and asked: are you using a VM with 0 cores and 0 GB of RAM that is up the whole hour? I said no, that machine was deleted a couple of days ago. Then the panic started. What happened? How could the usage reported from VMM be off?

I had seen it once before that VMs were not updated correctly in SCOM, so I quickly took a look at the Virtual Machine dashboard in SCOM. And yup, there were still VMs there that had been deleted a while ago. Then I connected back to VMM and did a refresh of the Operations Manager connector. A “Completed w/ Info” appeared. The error message:

When I tried to run that command, VMM PowerShell didn’t know the cmdlet. I know that VMM keeps all management pack updates in its installation directory, so I logged on to the VMM server and went to the VMM installation directory. There is a folder called ManagementPacks. I noticed that the date modified matched the UR5 installation release date:
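The usual next step from there is to import those updated management packs into SCOM. A sketch, assuming the default VMM installation path and the OperationsManager module (the paths and server name are examples):

```powershell
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'SCOM01'

# Import every updated MP that the UR installer dropped in VMM's ManagementPacks folder
$mpFolder = 'C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\ManagementPacks'
Get-ChildItem -Path $mpFolder -Filter '*.mp' |
    ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }
```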

BYO-DSC with VM Roles (A VM DSC Extension alternative for WAPack)


The Azure Pack IaaS solution is awesome: we can provide our tenants with a lot of pre-baked solutions in the form of VM Roles.
Tenant users can deploy these solutions without needing the knowledge of how to build the solutions themselves, which is a great value add.

But not all customers want pre-baked solutions. Some customers will want to bring their own configurations / solutions with them, and they don’t want your pre-baked stuff for multiple reasons (e.g. it doesn’t comply with their standards or requirements). In Azure these customers can make use of the VM extensions, one of the missing pieces of tech in the Azure Pack / SCVMM IaaS solution. At this time it is very difficult to empower tenant users to bring their own stuff.

In Azure we have a lot of VM extensions available. Today I’m going to implement functionality in a VM Role which will behave similarly to the Azure DSC extension (as you probably know by now, I like DSC a lot).

Please note! The implementation will serve as a working example of how you could do this. If you have any questions, please ask them, but I will not support the VM Role itself.


A tenant user wants to deploy configurations to their VMs themselves. As a configuration mechanism, the tenant user has chosen Desired State Configuration (DSC). If at all possible, they want the same approach on your Azure Pack service as in Azure.

In Azure you can zip your PowerShell script containing your DSC configuration together with the DSC resources it requires. This archive is then uploaded to your Azure Blob storage. The VM DSC extension picks this archive file up, unpacks it and runs the configuration in the script to generate the MOF file. During this procedure the extension takes configuration data and user-provided arguments into account.

Check out if you are seeking to use DSC in Azure with the VM DSC Extension.

In our Azure Pack VM Role implementation we will try to mimic all this by letting the tenant user zip up their configuration script together with the DSC resources, in the same way as they are used to with the Azure DSC extension. In fact, we will use the Azure PowerShell module to do this. Then, because we don’t have blob storage in our implementation (yet), we assume the tenant user has a web server in place where they will make this ZIP file available. On this same location a PSD1 file can be published containing DSC configuration data. The VM Role will take arguments as well.
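For the packaging step, a minimal sketch using the Azure PowerShell cmdlet (file names are examples; the cmdlet bundles the script together with the DSC resource modules it references into a single archive):

```powershell
Import-Module Azure

# Creates DSCConfig.ps1.zip locally instead of uploading to blob storage
Publish-AzureVMDscConfiguration -ConfigurationPath '.\DSCConfig.ps1' `
                                -ConfigurationArchivePath '.\DSCConfig.ps1.zip'
```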

Prepare a configuration

First let’s create a configuration archive (ZIP file) and a configuration data file (PSD1 file). Then we will stage everything on a web server.

First, create a configuration script for demo purposes.

The configuration will produce a MOF file which makes sure the AD-Domain-Services role is installed and then promotes the VM to be the first domain controller of a new domain. The domain name and safe mode password are defined as parameters and are thus user-configurable at MOF file generation time. The password is taken as a string and then converted to a PowerShell credential object. The configuration needs the xActiveDirectory DSC module, so this module must be packaged up with the script.
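The original script did not survive this excerpt, so here is a sketch of what it could look like based on the description above (property names follow the xActiveDirectory module of that era; generating a MOF with credentials requires certificate based encryption or allowing plain text passwords in the configuration data):

```powershell
configuration NewDomain
{
    param
    (
        [Parameter(Mandatory)]
        [string] $DomainName,

        [Parameter(Mandatory)]
        [string] $SafeModePassword
    )

    Import-DscResource -ModuleName xActiveDirectory

    # Convert the plain string into the credential object xADDomain expects
    $securePassword = ConvertTo-SecureString -String $SafeModePassword -AsPlainText -Force
    $credential     = New-Object System.Management.Automation.PSCredential ('Administrator', $securePassword)

    node localhost
    {
        WindowsFeature ADDS
        {
            Ensure = 'Present'
            Name   = 'AD-Domain-Services'
        }

        xADDomain FirstDomainController
        {
            DomainName                    = $DomainName
            DomainAdministratorCredential = $credential
            SafemodeAdministratorPassword = $credential
            DependsOn                     = '[WindowsFeature]ADDS'
        }
    }
}
```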


Integrating VM Role with Desired State Configuration Part 10 – Closing notes

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes

So here we are at post number 10, a good number to finalize this series with some closing notes.

I must say, creating this series has taken a lot of energy and time, but the reactions I have received thus far have made it more than worth it. Thank you community for the great feedback, and thank you for reading! I learned a lot writing the series; I hope you did while reading it.

DSC Resources

Microsoft has put a lot of effort into producing and releasing DSC resources to the public. These resources are great, but you must know that the x prefix stands for experimental. This means you don’t get any support, and some of them may not be entirely stable or may not work the way you think they should.

I took xComputer as an example in this blog series. A great resource to domain join your computer (and do a lot more), but not suitable in a Pull Server scenario where the computer name is not known up front.

Is this bad? No, I don’t think so. The xComputer resource was probably built with a certain scenario in mind, and in its intended scenario it works just great. If a resource does not work for you, you could still take a look at it and build your own ‘fork’ or you could start from scratch. The modules / resources are written using PowerShell, so if you can read and write PowerShell, you’re covered. Just be creative and you will manage!

Pull Server

Ad-hoc configurations are more dynamic than Pull Server delivered configurations. When using ad-hoc you are able to dynamically populate the configuration document content by adding in configuration data which is gathered / generated on the fly. Even the configuration block itself can contain dynamic elements. The resulting MOF file (configuration document) is created on and tailored for its destination. The downside of this approach is that configurations are done on the fly, which can turn into an ‘oops’ situation more quickly.

Pull Server configurations are more difficult to set up because configuration documents are static and created up front. If you create a single configuration for multiple instances (e.g. a web server farm), the configuration should be node name agnostic. The gain here is that configurations are delivered in a more controlled fashion, including the required modules. When a configuration change is made, the change can be pulled and implemented automatically.
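For reference, a minimal sketch of pointing an LCM at a Pull Server in the WMF 4 style syntax used in this series (the GUID, URL and paths are placeholders; the configuration document on the Pull Server must be named <ConfigurationID>.mof and be accompanied by a checksum file):

```powershell
configuration PullClient
{
    node localhost
    {
        LocalConfigurationManager
        {
            ConfigurationID           = '0e9b0f1a-0000-4000-8000-000000000000'
            RefreshMode               = 'Pull'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{ ServerUrl = 'https://pull.domain.local:8080/PSDSCPullServer.svc' }
            ConfigurationMode         = 'ApplyAndAutoCorrect'
            RebootNodeIfNeeded        = $true
        }
    }
}

# Generate the meta-MOF and apply it to the local LCM
PullClient -OutputPath 'C:\DSC'
Set-DscLocalConfigurationManager -Path 'C:\DSC' -Verbose
```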

Beware of using over-privileged accounts

Beware of over-privileged credentials used in configuration documents. Although you have taken all the necessary precautions by encrypting sensitive data using certificates, if the box gets owned, the certificate private key is exposed and the credentials fall into the wrong hands.

For example: Credentials which interact with AD to domain join should be able to do just that. In a VM Role scenario I would build an SMA runbook to pre-create a computer account as soon as the VM Role gets deployed. A low privileged domain account is then delegated control over the object so it is able to domain join. DSC in this case does not have to create a computer account but can just establish the trust relationship.
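A hedged sketch of that pre-create step (the OU, names and delegation granularity are illustrative; in practice you would scope the ACL tighter than full control):

```powershell
# Runs in an SMA runbook once the VM Role deployment job finishes
Import-Module ActiveDirectory

$computerName = 'TENANTVM01'
$ou           = 'OU=TenantVMs,DC=domain,DC=local'
$joinAccount  = 'DOMAIN\svc-domainjoin'

# Pre-create the computer object so DSC only has to establish the trust
New-ADComputer -Name $computerName -Path $ou -Enabled $true

# Delegate the low privileged join account control over just this object
$dn = (Get-ADComputer -Identity $computerName).DistinguishedName
dsacls.exe $dn /G "$($joinAccount):GA"
```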

VM Role challenges

When dealing with the VM Role and external automation / orchestration, some challenges arise.

There is no way (or at least no easy way) of coordinating between the VM Role resource extension and DSC configuration state. DSC could potentially reboot the system and go on configuring after the reboot. It then reboots again and again and again, depending on the configuration. The resource extension allows a script or application to restart the system, but treats that as the end of a task. As you don’t know the configuration’s reboot requirements up front, managing this in the resource extension becomes a pain, so you will probably not do it. As a result, the VM Role is provisioned successfully for the tenant user but is really still undergoing configuration.

So VMs will have a provisioned state and become accessible for the tenant user while the VM is still undergoing its DSC / SMA configuration. A user can potentially log in, shut down, restart, intervene and thereby disrupt the automation tasks. In the case of DSC this is not a big problem, as the consistency engine will just keep going until consistency is reached, but if you use SMA, for example, it becomes a bit difficult.

Another scenario: the user logs in and finds that the configuration he expected is not implemented. Because the user does not know DSC is used, the VM Role is thrown away and the user tries again and again, until eventually he is fed up with the service received and starts complaining.

A workaround I use today at some customers is to temporarily assign the VM Role to another VMM user when the VMM deployment job is finished. This removes the VM Role from the tenant user’s subscription and thereby from their control. The downside here is obvious: the tenant user just experienced a glitch where the VM Role disappeared and tries to deploy it again. Because the initial name chosen for the cloud service is now assigned to another user and subscription, the name is available again, so there is a potential for naming conflicts when assigning the finished VM Role back to the original owner.

At this time we cannot lock out the VM controls from the tenant user or manipulate the provisioning state. I added my feedback for more control at the WAPack feedback site here:

Bugs / issues

While developing this series I faced some issues / bugs which I logged at connect:

What’s next?

First I will do a speaker session at the SCUG/ event about VM Roles with Desired State Configuration. And no, it will not be a walkthrough of this blog series, so there is much to be done generating the content for this presentation. I think I will blog about the content of my session once I have done it.

Then, in the near future, I will start a new series built upon what we learned in this blog series. I have many ideas about what could be done, but I still have to think about scope and direction for a bit. This series took up a lot more time than I anticipated, and I changed scope many times because I wanted to do just too much. As a spoiler for the next series: I know it will involve SMA :-) Stay tuned!

Integrating VM Role with Desired State Configuration Part 9 – Create a Domain Join DSC resource

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes

In a previous post I talked about why I did not include a domain join in my example DSC configuration:

So why not incorporate a domain join in this configuration? There is a resource available in the Resource Kit which can handle this, right?
Yes, there is a resource for this, and a domain join would be the most practical example I could come up with as well. But….

The xComputer DSC resource contained in the xComputerManagement module has a mandatory parameter for the ComputerName. As I don’t know the ComputerName up front (it is assigned by VMM based on the name range provided in the resource definition), I cannot generate a configuration file up front. I could deploy a VM Role with just one instance containing a pre-defined ComputerName used in a configuration document, but this scenario is very inflexible and undesirable. In a later post in this series I will show you how to create a DSC resource yourself to handle the domain join without the ComputerName being known up front.

In this blog post we will author a DSC resource which handles domain joining without the need to know the ComputerName up front, which makes it a suitable resource for the Pull Server scenario described in this series.

WMF 5 Preview release

When I started writing this blog series, Windows Management Foundation 5 (WMF 5) November Preview was the latest WMF 5 installment available for Windows Server 2012 R2.

I created a DSC resource to handle the domain joining using the new Class based definition method, which was at the time defined as an “experimental design” (see for more info). I did this because it is the way forward for authoring DSC resources, and I found the experience of building the resource a lot easier in this fashion.

Then WMF 5 February Preview came along and broke my resource by changing the parameter attributes (e.g. [DscResourceKey()] became [DscProperty(Key)] and [DscResourceMandatory()] became [DscProperty(Mandatory)]). I fixed the resource for the February Preview release (download the WMF 5 February Preview here:

The DSC resource Class definition is now declared a “stable design”. Because of this I don’t expect many changes anymore, and if a change were made, repairing the resource should be relatively easy.

If you want more insight into why it is easier to use class defined resources as opposed to the ‘script module’ resources introduced with WMF 4, I advise you to look at an excellent blog post by Ravikanth Chaganti here: If you want to know more about the PowerShell Class implementation in general, you should definitely have a Beer with Trevor Sullivan here:

I tested my resource locally (by adding it to the module directory directly) and it worked great. I thought I had done it: a Pull Server scenario friendly resource to handle the domain join without the need to provide the computer name up front, using the latest and greatest Class based syntax.

So I prepped the resource to be deployed for the first time via the Pull Server, and I was hit by a problem. I expected modules with Class defined resources to just work when delivered via the Pull Server. The Pull Server itself actually has no problems with them at all, but the Web Download Manager component of the LCM is apparently hard-wired to check for a valid “script module” structure (at the time of writing, using the February Preview).



As a workaround, you could add the Class defined module to the “C:\Program Files\WindowsPowerShell\Modules” directory of your VM Role images directly. This results in the download of the module being skipped, as it is already present (but you actually don’t want to do this, because it is undesirable to maintain DSC resources in an OS image).


Because WMF 5 is not RTM yet, I think the behavior of the WebDownloadManager will be adjusted in a later version. In the meantime I have logged this behavior on Connect here:

To make this post a bit more future-proof, I will show you how to author both the Class based module and the script based module. Although you can only use the script based module today, the Class based module should be usable in the near future as well.
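To give you an idea of where this is going, a skeleton of such a Class based domain join resource in the February Preview syntax (the resource and property names are mine, not from an official module). Note that Add-Computer acts on the local machine, which is exactly why no ComputerName is needed up front:

```powershell
[DscResource()]
class cDomainJoin
{
    [DscProperty(Key)]
    [string] $DomainName

    [DscProperty(Mandatory)]
    [PSCredential] $Credential

    [void] Set()
    {
        # Establish the trust relationship with the domain
        Add-Computer -DomainName $this.DomainName -Credential $this.Credential -Force
    }

    [bool] Test()
    {
        # Compliant when the machine is already a member of the target domain
        $currentDomain = (Get-CimInstance -ClassName Win32_ComputerSystem).Domain
        return ($currentDomain -eq $this.DomainName)
    }

    [cDomainJoin] Get()
    {
        $this.DomainName = (Get-CimInstance -ClassName Win32_ComputerSystem).Domain
        return $this
    }
}
```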

