Posts tagged Windows Azure Pack

Azure Pack – Recover from a disaster

This post is not about recovering from a failed datacenter or about Azure Site Recovery procedures. No, it’s just “simple” recovery when everything went down for whatever reason. I mean, we are humans and we make mistakes. Even in an environment like Azure, people make mistakes. Remember the storage outage last year? That was caused by a human making a mistake while pushing an update. Sorry… shit happens.

But what if power, storage or networking went down, and when the VMs come back up you log in to Azure Pack and an error stares at you from the screen? When we run projects at customers we build from the ground up and “assume” it will run forever. In this post I want to discuss the proper way of bringing an environment back up and running, and the critical pointers you need to be aware of.

So let’s start the environment :)

START DOMAIN CONTROLLERS
First we need to bring up all the domain controllers for each domain in which fabric and Azure Pack components are running. If an ADFS instance is co-located on a domain controller and uses SQL, check after booting SQL that the services started correctly.

START HYPER-V SERVERS
If the Hyper-V servers went down, start them first.

Which of the first two steps comes first depends on how you built your environment. You know the chicken-and-egg story? When you have a physical domain controller, start it first. If you have only virtual domain controllers, start a Hyper-V server first. Just be careful when your virtual domain controllers run on Hyper-V hosts joined to the very domain those controllers serve: make sure the first Hyper-V node you boot can actually start the domain controller VM. You would not be the first environment where I have seen that the cluster couldn’t start because the domain was not up, and the domain couldn’t be brought up because storage and/or the cluster couldn’t start.
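To make the ordering concrete, here is a rough startup sketch for a Hyper-V node hosting a virtual domain controller. This is a minimal example, not a production script; the VM names and the DC’s FQDN are assumptions you would replace with your own.

# Bring the domain controller VM up first
Start-VM -Name 'DC01'

# Wait until AD answers on LDAP (port 389) before starting anything domain-dependent
while (-not (Test-NetConnection -ComputerName 'dc01.domain.tld' -Port 389 -WarningAction SilentlyContinue).TcpTestSucceeded)
{
    Start-Sleep -Seconds 30
}

# Then start the rest: SQL first, then VMM, SPF and the Azure Pack machines
'SQL01','VMM01','SPF01','WAP01' | ForEach-Object { Start-VM -Name $_ }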

VM Usage not updated in Azure Pack

One of our customers was building a billing solution for their Azure Pack environment. We had just finished the implementation of the fabric and the configuration of Azure Pack. We started testing and everything worked fine. We also configured the integration between SCOM and VMM for usage, and set up usage in Azure Pack.

After a couple of weeks we implemented UR5 for System Center and Azure Pack and all looked good… until a developer came to me and asked: are you using a VM with 0 cores and 0 GB of RAM that is up the whole hour? I said no, that machine was deleted a couple of days ago. Then the panic started. What happened? How could the usage data be off from VMM?

I had seen before that VMs are not always updated correctly in SCOM, so I quickly took a look at the Virtual Machine dashboard in SCOM. And yup, there were still VMs there that had been deleted a while ago. I connected back to VMM and did a refresh of the Operations Manager connector. It finished as “Completed w/ Info”, with an error message suggesting a command to run.

When I tried to run that command, VMM PowerShell didn’t know the cmdlet. I know that VMM keeps all management pack updates in its installation directory, so I logged on to the VMM server and went to the VMM installation directory, which contains a folder called ManagementPacks. I noticed that the modified date matched the UR5 release date.
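If you run into the same mismatch, the connector refresh and the management pack check can be scripted as well. A minimal sketch, assuming the default VMM installation path; the cmdlet choice is mine, not taken from the original error message.

# On the VMM server: inspect and refresh the Operations Manager connection
Import-Module virtualmachinemanager
Get-SCOpsMgrConnection | Write-SCOpsMgrConnection

# On the SCOM side: re-import the management packs that shipped with the update rollup
Import-Module OperationsManager
Get-ChildItem 'C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\ManagementPacks' -Filter '*.mp' |
    ForEach-Object { Import-SCOMManagementPack -FullName $_.FullName }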

BYO-DSC with VM Roles (A VM DSC Extension alternative for WAPack)

Introduction

The Azure Pack IaaS solution is awesome: we can provide our tenants with a lot of pre-baked solutions in the form of VM Roles.
Tenant users can deploy these solutions without needing to know how to build them, which is a great value add.

But not all customers want pre-baked solutions. Some customers want to bring their own configurations / solutions with them, and they don’t want your pre-baked stuff for multiple reasons (e.g. it doesn’t comply with their standards or requirements). In Azure these customers can make use of VM extensions, one of the missing pieces of tech in the Azure Pack / SCVMM IaaS solution. At this time it is very difficult to empower tenant users to bring their own stuff.

In Azure a lot of VM extensions are available. Today I’m going to implement functionality in a VM Role which behaves similarly to the Azure DSC extension (as you probably know by now, I like DSC a lot).

Please note! The implementation serves as a working example of how you could do this. If you have any questions, please ask them, but I will not support the VM Role itself.

Scenario:

A tenant user wants to deploy configurations to their VMs themselves. As the configuration mechanism, the tenant user has chosen Desired State Configuration (DSC). If at all possible, they want the same approach on your Azure Pack service as they have in Azure.

In Azure you can zip your PowerShell script containing your DSC configuration together with the DSC resources it requires. This archive is then uploaded to your Azure blob storage. The VM DSC extension picks this archive up, unpacks it and runs the configuration in the script to generate the MOF file. During this procedure the extension takes configuration data and user-provided arguments into account.
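For reference, the Azure-side flow looks roughly like this with the classic Azure PowerShell cmdlets (the service, VM and configuration names are assumptions):

# Zip the configuration script plus required DSC modules and upload it to blob storage
Publish-AzureVMDscConfiguration -ConfigurationPath .\MyConfiguration.ps1

# Attach the DSC extension to a VM and point it at the uploaded archive
$vm = Get-AzureVM -ServiceName 'MyCloudService' -Name 'MyVM'
$vm = Set-AzureVMDscExtension -VM $vm -ConfigurationArchive 'MyConfiguration.ps1.zip' -ConfigurationName 'MyConfiguration'
$vm | Update-AzureVM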

Check out http://blogs.msdn.com/b/powershell/archive/2014/08/07/introducing-the-azure-powershell-dsc-desired-state-configuration-extension.aspx if you are seeking to use DSC in Azure with the VM DSC Extension.

In our Azure Pack VM Role implementation we will try to mimic all this by letting the tenant user zip up their configuration script together with the DSC resources, in the same way they are used to with the Azure DSC extension. In fact, we will use the Azure PowerShell module to do this. Because we don’t have blob storage in our implementation (yet), we assume the tenant user has a web server in place where they will make this zip file available. At this same location a PSD1 file containing DSC configuration data can be published. The VM Role will take arguments as well.

Prepare a configuration

First let’s create a configuration archive (ZIP file) and a configuration data file (PSD1 file). Then we will stage everything on a web server.

First, create a configuration script for demo purposes.

configuration MyNewDomain
{
    param
    (
        [String]$DomainName,

        [String]$SafeModePassword
    )

    Import-DscResource -ModuleName xActiveDirectory

    # Convert the plain text password into the credential object the resource expects
    $secpasswd = ConvertTo-SecureString $SafeModePassword -AsPlainText -Force
    $SafemodeAdminCred = New-Object System.Management.Automation.PSCredential ('TempAccount', $secpasswd)

    node localhost
    {
        WindowsFeature ADDS
        {
            Name   = 'AD-Domain-Services'
            Ensure = 'Present'
        }

        xADDomain MyNewDomain
        {
            DomainName                    = $DomainName
            SafemodeAdministratorPassword = $SafemodeAdminCred
            # Used to check if the domain already exists; the domain Administrator
            # will get the password of the local administrator
            DomainAdministratorCredential = $SafemodeAdminCred
            DependsOn                     = '[WindowsFeature]ADDS'
        }
    }
}

The configuration will produce a MOF file which makes sure the AD-Domain-Services role is installed and then promotes the VM to be the first domain controller of a new domain. The domain name and safe mode password are defined as parameters and are therefore user configurable at MOF generation time. The password is taken as a string and then converted to a PowerShell credential object. The configuration needs the xActiveDirectory DSC module, and therefore this module must be packaged up with the script.
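Packaging the script the way tenants are used to can be done with the same Azure PowerShell module mentioned earlier; the -ConfigurationArchivePath parameter writes the archive locally instead of uploading it to blob storage. A sketch (file names are assumptions):

# Creates MyNewDomain.ps1.zip containing the script plus the xActiveDirectory
# module, resolved from $env:PSModulePath on the machine creating the archive
Publish-AzureVMDscConfiguration -ConfigurationPath .\MyNewDomain.ps1 `
                                -ConfigurationArchivePath .\MyNewDomain.ps1.zip

A matching configuration data file (PSD1) staged on the web server could be as simple as:

# MyNewDomain.psd1 - DSC configuration data (example values)
@{
    AllNodes = @(
        @{
            NodeName = 'localhost'
        }
    )
}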


Integrating VM Role with Desired State Configuration Part 10 – Closing notes

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

So here we are at post number 10, a good number to finalize this series with some closing notes.

I must say, creating this series has taken a lot of energy and time, but the reactions I received thus far have made it more than worth it. Thank you community for the great feedback, and thank you for reading! I learned a lot writing the series; I hope you did too while reading it.

DSC Resources

Microsoft has put a lot of effort into producing and releasing DSC resources to the public. These resources are great, but you must know that the x prefix stands for experimental. This means you don’t get any support, and some of them may not be entirely stable or may not work the way you think they should.

I took xComputer as an example in this blog series. A great resource to domain join your computer (and do a lot more), but not suitable in a Pull Server scenario where the computer name is not known up front.

Is this bad? No, I don’t think so. The xComputer resource was probably built with a certain scenario in mind, and in its intended scenario it works just great. If a resource does not work for you, you could still take a look at it and build your own ‘fork’ or you could start from scratch. The modules / resources are written using PowerShell, so if you can read and write PowerShell, you’re covered. Just be creative and you will manage!

Pull Server

Ad-hoc configurations are more dynamic than Pull Server delivered configurations. When using ad-hoc configurations you can dynamically populate the configuration document content by adding configuration data which is gathered / generated on the fly. Even the configuration block itself can contain dynamic elements. The resulting MOF file (configuration document) is created on, and tailored for, its destination. The downside of this approach is that configurations are compiled on the fly, which can turn into an ‘oops’ situation more quickly.

Pull Server configurations are more difficult to set up because configuration documents are static and created up front. If you create a single configuration for multiple instances (e.g. a web server farm), the configuration should be node name agnostic. The gain is that configurations are delivered in a more controlled fashion, including the required modules. When a configuration change is made, the change can be pulled and implemented automatically.
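To make the node name agnostic part concrete: a configuration compiled against ‘localhost’ can be published once under a configuration ID that many LCM instances share. A minimal sketch, assuming the default Pull Server paths and a hypothetical configuration named WebServerFarm:

# Compile the node-agnostic configuration (it targets 'node localhost')
WebServerFarm -OutputPath C:\DSC\Staging

# Publish it under the ConfigurationID the LCM instances are set to pull
$guid  = [guid]::NewGuid().Guid
$store = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"
Copy-Item -Path C:\DSC\Staging\localhost.mof -Destination "$store\$guid.mof"
New-DscChecksum -Path "$store\$guid.mof"   # the LCM validates the document against this checksum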

Beware of using over-privileged accounts

Beware of over-privileged credentials used in configuration documents. Although you have taken all necessary precautions by encrypting sensitive data using certificates, if the box gets owned, the certificate private key is exposed and the credentials fall into the wrong hands.

For example: credentials which interact with AD to domain join should be able to do just that and nothing more. In a VM Role scenario I would build an SMA runbook to pre-create a computer account as soon as the VM Role gets deployed. A low-privileged domain account is then delegated control over the object so it can perform the domain join. DSC in this case does not have to create a computer account, but can just establish the trust relationship.
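Such a runbook could pre-stage the account along these lines. This is a hypothetical fragment: the OU, account name and the GenericAll right are placeholders, and in production you would scope the delegation down to the specific rights a domain join needs (reset password, write dNSHostName, and so on).

Import-Module ActiveDirectory

# Pre-create the computer account for the new VM Role instance
$computer = New-ADComputer -Name $VMName -Path 'OU=VMRoles,DC=domain,DC=tld' -PassThru

# Delegate control over just this object to the low-privileged join account
$joinAccount = Get-ADUser -Identity 'svc-domainjoin'
$acl = Get-Acl -Path ("AD:\" + $computer.DistinguishedName)
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($joinAccount.SID, 'GenericAll', 'Allow')
$acl.AddAccessRule($ace)
Set-Acl -Path ("AD:\" + $computer.DistinguishedName) -AclObject $acl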

VM Role challenges

When dealing with the VM Role and external automation / orchestration, some challenges arise.

There is no way (or at least no easy way) of coordinating between the VM Role resource extension and DSC configuration state. DSC could potentially reboot the system and go on configuring after the reboot, then reboot again and again, depending on the configuration. The resource extension allows a script or application to restart the system, but treats the restart as the end of a task. As you don’t know the configuration’s reboot requirements up front, managing this in the resource extension becomes a pain, so you will probably not do it. As a result, the VM Role is provisioned successfully for the tenant user while it is really still undergoing configuration.

So VMs reach the provisioned state and become accessible to the tenant user while the VM is still undergoing its DSC / SMA configuration. A user can potentially log in, shut down, restart, intervene and thereby disrupt the automation tasks. In the case of DSC this is not a big problem, as the consistency engine will just keep going until consistency is reached, but if you use SMA, for example, it becomes a bit difficult.

Another scenario: the user logs in and finds that the configuration he expected is not implemented. Because the user does not know DSC is used, the VM Role is thrown away and the user tries again and again, until eventually he is fed up with the service received and starts complaining.

A workaround I use today at some customers is to temporarily assign the VM Role to another VMM user when the VMM deployment job is finished. This removes the VM Role from the tenant user’s subscription and thereby from their control. The downside is obvious: the tenant user just experienced a glitch where the VM Role simply disappeared, and tries to deploy it again. Because the name initially chosen for the Cloud Service is now assigned to another user and subscription, that name is available again, so there is a potential for naming conflicts when assigning the finished VM Role back to the original owner.

At this time we cannot lock the VM controls away from the tenant user or manipulate the provisioning state. I added my feedback asking for more control at the WAPack feedback site: http://feedback.azure.com/forums/255259-azure-pack/suggestions/6391843-option-to-make-a-vm-inaccessible-for-tenant-user-a

Bugs / issues

While developing this series I faced some issues / bugs, which I logged on Connect.

What’s next?

First I will do a speaker session at the SCUG/Hyper-V.nu event about VM Roles with Desired State Configuration. And no, it will not be a walkthrough of this blog series, so there is much to be done generating the content for this presentation. I think I will blog about the content of my session once I have done it.

Then, in the near future, I will start a new series building upon what we learned in this one. I have many ideas about what could be done, but I still have to think about scope and direction for a bit. This series took up a lot more time than I anticipated, and I changed scope many times because I wanted to do just too much. As a spoiler for the next series: I know it will involve SMA :-) Stay tuned at www.hyper-v.nu!

Integrating VM Role with Desired State Configuration Part 9 – Create a Domain Join DSC resource

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In a previous post I talked about why I did not include a domain join in my example DSC configuration:

So why not incorporate a domain join in this configuration? There is a resource available in the Resource Kit which can handle this right?
Yes, there is a resource for this and a domain join would be the most practical example I would come up with as well. But….

The xComputer DSC resource contained in the xComputerManagement module has a mandatory parameter for the ComputerName. As I don’t know the ComputerName up front (the ComputerName is assigned by VMM based on the name range provided in the resource definition), I cannot generate a configuration file up front. I could deploy a VM Role with just 1 instance containing a ComputerName which was pre-defined and used in a configuration document but this scenario is very inflexible and undesirable. In a later post in this series I will show you how to create a DSC Resource yourself to handle the domain join without the ComputerName to be known up front.

In this blog post we will author a DSC resource which handles domain joining without needing to know the ComputerName up front, making it a suitable resource for the Pull Server scenario described in this series.

WMF 5 Preview release

When I started writing this blog series, Windows Management Foundation 5 (WMF 5) November Preview was the latest WMF 5 installment available for Windows Server 2012 R2.

I created a DSC resource to handle the domain joining using the new class-based definition method, which at the time was flagged as an “experimental design” (see http://blogs.msdn.com/b/powershell/archive/2014/12/10/wmf-5-0-preview-defining-quot-experimental-designs-quot-and-quot-stable-designs-quot.aspx for more info). I did this because it is the way forward for authoring DSC resources, and I find the experience of building a resource this way a lot easier.

Then WMF 5 February Preview came along and broke my resource by changing the parameter attributes (e.g. [DscResourceKey()] became [DscProperty(Key)] and [DscResourceMandatory()] became [DscProperty(Mandatory)]). I fixed the resource for the February Preview release (download WMF 5 Feb preview here: http://www.microsoft.com/en-us/download/details.aspx?id=45883).

The DSC resource class definition is now declared a “stable design”. Because of this I don’t expect many changes anymore, and if a change were made, repairing the resource should be relatively easy.


If you want more insight into why class-defined resources are easier to work with than the ‘script module’ resources introduced with WMF 4, I advise you to look at an excellent blog post by Ravikanth Chaganti here: http://www.powershellmagazine.com/2014/10/06/class-defined-dsc-resources-in-windows-management-framework-5-0-preview/. If you want to know more about the PowerShell class implementation in general, you should definitely have a Beer with Trevor Sullivan here: http://trevorsullivan.net/2014/10/25/implementing-a-net-class-in-powershell-v5/.


I tested my resource locally (by adding it to the module directory directly) and it worked great. I thought I had done it: a Pull Server friendly resource to handle the domain join without needing the computer name up front, using the latest and greatest class-based syntax.

So I prepped the resource to be deployed for the first time via the Pull Server, and I was hit by a problem. I expected modules with class-defined resources to just work when delivered via the Pull Server. The Pull Server itself actually has no problems with them at all, but the Web Download Manager component of the LCM is apparently hard-wired to check for a valid “script module” structure (at the time of writing, using the February Preview).


As a workaround, you could add the class-defined module to the “C:\Program Files\WindowsPowerShell\Modules” directory of your VM Role images directly. The download of the module is then skipped as it is already present (but you don’t actually want to do this, because maintaining DSC resources in an OS image is undesirable).


Because WMF 5 is not RTM yet, I think the behavior of the WebDownloadManager will be adjusted in a later version. In the meantime I have logged this behavior on Connect here: https://connect.microsoft.com/PowerShell/feedback/details/1143212

To make this post a bit more future proof, I will show you how to author both the class-based module and the script-based module. Although you can only use the script-based module today, the class-based module should be usable in the near future as well.
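To give an idea of where this is heading, here is a minimal sketch of what a class-based domain join resource could look like under the stable design. The property and class names are my own; the resource built in the rest of this post may differ.

[DscResource()]
class cDomainJoin
{
    # Key property: the domain to join. No ComputerName property is needed;
    # the node joins under whatever name VMM assigned to it.
    [DscProperty(Key)]
    [string]$DomainName

    [DscProperty(Mandatory)]
    [PSCredential]$Credential

    [bool] Test()
    {
        # Compliant when the machine is already a member of the desired domain
        $cs = Get-CimInstance -ClassName Win32_ComputerSystem
        return ($cs.PartOfDomain -and $cs.Domain -eq $this.DomainName)
    }

    [void] Set()
    {
        Add-Computer -DomainName $this.DomainName -Credential $this.Credential -Force
    }

    [cDomainJoin] Get()
    {
        $this.DomainName = (Get-CimInstance -ClassName Win32_ComputerSystem).Domain
        return $this
    }
}

Remember that the module manifest must export the resource (DscResourcesToExport) for the LCM to find it.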


Windows Azure Pack vNext

Windows Azure Pack was released in October 2013. It enables you to provide cloud services from your own datacenter. Although Microsoft is working towards more consistency between Microsoft Azure and Windows Azure Pack, there are still many differences between the two. I remember someone at TechEd North America explaining to me that Microsoft Azure and Windows Azure Pack are like two circles that currently have some overlap. Microsoft is working hard to increase the overlap between these two circles, eventually ending up with one circle. You have probably seen that end state many times: it is the Cloud OS vision, with ONE consistent platform in the middle.

Cloud OS

But we are not there yet. There is still a lot of work to be done.

So when will Windows Azure Pack vNext be released?

Microsoft recently announced that Windows Server and System Center will get their final release in 2016. Based on that information I get a lot of questions about the future release of Windows Azure Pack.

Let me ask you this. Are you always waiting for that new phone to become available? By the time that phone is available, another new phone with new features is announced. You decide to wait some more. In the end, you never make a decision.

The only constant in IT is change. We get access to new features that enable new scenarios. New scenarios create new challenges, which results in other features being developed to solve those challenges, opening up new scenarios again. There is always some new feature or version on the horizon. You can wait forever and do nothing, or evolve with the features and scenarios as they become available.

Do not wait. It is even super important to get started today, if you haven’t already. Windows Azure Pack provides IaaS, Websites, Database as a Service, Service Bus and Automation, and there is a rich ecosystem of 3rd party solutions that enhance the stack even more. You can benefit from cloud services in your own datacenter TODAY!

We use Windows Azure Pack already. How long do we have to wait for new features?

Now, this is interesting. Microsoft releases an update rollup for all the System Center products and Windows Azure Pack every quarter. Initially these were mainly fixes to issues in the platform, but looking at the number of features added in the more recent update rollups, that is changing drastically.


Just to highlight two features from the latest update rollup.

These aren’t minor changes either. They greatly improve what is already a rich platform today. It also means that you get access to new features in the current version every three months FOR FREE!! How cool is that? And YOU get to decide which features will be part of the upcoming update rollups by submitting suggestions or voting for existing suggestions on the UserVoice.

Integrating VM Role with Desired State Configuration Part 8 – Create, deploy and validate the final VM Role

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In this post the VM Role resource definition and resource extension that were built in an earlier post will be updated with the additional required steps (3+). Then the VM Role gets deployed, and we will look at the result to validate that everything works as expected.


Extend the Resource Extension

First off we extend the resource extension with some additional steps.
We copy the current resource extension and give it another name so the previous state is safeguarded (if you did not create the VM Role resource definition and extension packages in part 3, you can download them here if you want to follow along: http://1drv.ms/1urL9AM).


Next, open the copied resource extension in the VM Role Authoring Tool and increase the version number to 2.0.0.0.



Integrating VM Role with Desired State Configuration Part 7 – Creating a configuration document with encrypted content

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In this post the certificate files used for configuration document encryption are created. Also, an example configuration will be created containing encrypted sensitive data.

Issue with CNG generated keys

While testing which certificate template settings would do the job intended for this blog post, I stumbled upon an interesting finding (bug?). Apparently the LCM uses .NET methods for accessing certificate keys. When the certificate keys are generated using the Cryptography API: Next Generation (CNG) (see: https://technet.microsoft.com/en-us/library/cc730763(v=ws.10).aspx), the private key is not accessible to the LCM. It is also not visible when using the PowerShell Cert: PS drive.

I searched the internet and found an interesting blog post which I think explains the issue. It can be found here: http://blogs.msdn.com/b/alejacma/archive/2009/12/22/invalid-provider-type-specified-error-when-accessing-x509certificate2-privatekey.aspx.

The blog post quotes the CryptAcquireCertificatePrivateKey documentation:

CRYPT_ACQUIRE_ALLOW_NCRYPT_KEY_FLAG
This function will attempt to obtain the key by using CryptoAPI. If that fails, this function will attempt to obtain the key by using the Cryptography API: Next Generation (CNG). The pdwKeySpec variable receives the CERT_NCRYPT_KEY_SPEC flag if CNG is used to obtain the key.

pdwKeySpec [out]
CERT_NCRYPT_KEY_SPEC: The key is a CNG key.

And then concludes:

.NET is not CNG aware yet (at least up to version 3.5 SP1). It uses CryptAcquireContext instead of CryptAcquireCertificatePrivateKey, and CryptAcquireContext has no flags to deal with CNG. A possible workaround may be to use the CryptoAPI/CNG API directly to deal with CNG keys.

Apparently the issue persists in .NET 4 and later. Let’s look at the differences.

When using a legacy cryptographic service provider (CSP), the private key is discovered and shown when the certificate is enumerated through the Cert: PS drive.


When the LCM needs it, it is accessible.


When a CNG provider is used instead, the Cert: PS drive does show that a key is present, but does not discover and enumerate it.


When the LCM needs it, it is not accessible.

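A quick way to see on which side of this issue a certificate falls is to inspect it from PowerShell (the subject name is hypothetical):

$cert = Get-ChildItem -Path Cert:\LocalMachine\My |
        Where-Object { $_.Subject -eq 'CN=DscEncryptionCert' }

# True in both cases
$cert.HasPrivateKey

# Populated (an RSACryptoServiceProvider) for a legacy CSP key;
# empty or throwing for a CNG-held key, which is what trips up the LCM
$cert.PrivateKey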

I reported this as a bug on connect. If you find this important, please vote it up: https://connect.microsoft.com/PowerShell/feedback/details/1110885/private-key-not-accessible-for-dsc-lcm-when-key-is-generated-using-cng-instead-of-legacy-csp


Integrating VM Role with Desired State Configuration Part 6 – PFX Repository Website

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In this post the PFX repository website is created, which is accessed during VM deployment to download the PFX container belonging to the configuration ID. As a DSC configuration ID can be assigned to many LCM instances simultaneously, the client authentication certificates cannot be used for configuration document encryption purposes, because these certificates are unique to each instance.

PFX Website and functionality design

Every configuration document published to the DSC Pull Server will have an associated PFX container holding both the public and private keys used to encrypt / decrypt any potentially sensitive data included in the document. If a configuration document currently contains no sensitive data, a PFX is issued nonetheless, as sensitive data could be added at a later stage.

The PFX Website will be available over HTTPS only and will require client certificate authentication to be accessed. The client authentication certificates assigned to the VMs during deployment will be the allowed certificates.

A unique PIN code used for creating and opening each PFX file will be available via the website as well. In my opinion this is a better practice than using one pre-defined PIN code for all PFX files. It is still not the ultimate solution, but I think the route taken is secure enough for now. If you have suggestions for improving this bit, please reach out!

The certificate containing the public key will be saved to a repository available to the configuration document generators. For now this will be a local directory.
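Requiring client certificates on the site can be scripted as well. A minimal sketch using the WebAdministration module; the site name is an assumption:

Import-Module WebAdministration

# Require SSL plus a client certificate on the PFX site
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
                             -Location 'PFX Repository Website' `
                             -Filter 'system.webServer/security/access' `
                             -Name 'sslFlags' `
                             -Value 'Ssl,SslNegotiateCert,SslRequireCert'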

Prerequisites

The computer on which the PFX website gets deployed can either be domain joined or a workgroup member. In my case I use the DSC Pull Server from the previous post, as I don’t have a lot of resources.

Since the computer is joined to the same domain as the Enterprise CA, it already has the Enterprise CA certificate in its Trusted Root certificate store. If you choose not to deploy the PFX website on a domain joined computer, you need to import this certificate manually, for example as sketched below.
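For the workgroup case, importing the CA certificate comes down to a one-liner (the file name is an assumption):

# Import the issuing CA certificate into the machine's trusted roots
Import-Certificate -FilePath '.\EnterpriseRootCA.cer' -CertStoreLocation Cert:\LocalMachine\Root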

A Domain DNS zone should be created which reflects the external FQDN of the PFX Website.

  • pfx.domain.tld (pfx.hyperv.nu)

READ MORE »

Integrating VM Role with Desired State Configuration Part 5 – Resource Kit

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In this (relatively short) post the DSC modules from the DSC Resource Kit will be added to the DSC Pull Server’s modules repository. This makes the DSC resources available for download by the LCM instances directly from the Pull Server. When an LCM pulls its configuration, it parses the configuration for module information. If the LCM finds it is missing modules, or has incorrect versions of them, it will try to download them from the Pull Server. If it can’t get them from the Pull Server, the configuration will fail.
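For reference, publishing a single module into the Pull Server’s store comes down to zipping it as ModuleName_ModuleVersion.zip and generating a checksum next to it. A sketch, assuming the default DSC service path and a hypothetical module version:

# Pull Server module store (default path for the DSC service)
$store = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"

# Zip the module contents and create the checksum the LCM validates the download against
Compress-Archive -Path "$env:ProgramFiles\WindowsPowerShell\Modules\xActiveDirectory\*" `
                 -DestinationPath "$store\xActiveDirectory_2.4.0.0.zip"
New-DscChecksum -Path "$store\xActiveDirectory_2.4.0.0.zip"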

First let’s download the latest Resource Kit (v9 at the time of writing). You can find it here: https://gallery.technet.microsoft.com/scriptcenter/DSC-Resource-Kit-All-c449312d. Save it somewhere reachable from the DSC Pull Server.

Installing the Resource Kit

We will install the modules on the DSC Pull Server itself so they can be used for creating configuration documents later on. I created a little script to handle this process automatically. Let’s take a look:

#requires -version 5
param
(
    [Parameter(Mandatory=$true,
                HelpMessage='Enter a filepath to the Resource Kit Zip File.')]
    [ValidateScript({
                        if ((Test-Path -Path $_) -and ($_.split('.')[-1]) -eq 'zip')
                        {
                            $true
                        }
                        else
                        {
                            $false
                        }
                    })]
    [String]$Path,
        
    [Parameter(HelpMessage='When the Force switch is specified, modules will forcefully be overwritten')]
    [Switch]$Force
)
Process
{
    $ExpandDir = New-Item -Path $env:TEMP -Name ResourceKitExtract -ItemType Directory -Force
    Unblock-File -Path $Path
    Expand-Archive -Path $Path -DestinationPath $ExpandDir.FullName -Force
    $Modules = Get-ChildItem -Path "$($ExpandDir.FullName)\All Resources"
    foreach ($M in $Modules)
    {
        $DestinationPath = "$env:ProgramFiles\WindowsPowerShell\Modules\$($M.Name)"
        # Check if the module already exists on the system
        if (Test-Path -Path $DestinationPath)
        {
            # Reset per module; previously this leaked between loop iterations
            $overwrite = $false
            [version]$NewVersion = ((Get-ChildItem -Path $M.FullName -Filter '*.psd1' | Get-Content | Select-String "ModuleVersion").tostring()).substring(16).Replace("'", "")
            [version]$CurVersion = ((Get-ChildItem -Path $DestinationPath -Filter '*.psd1' | Get-Content | Select-String "ModuleVersion").tostring()).substring(16).Replace("'", "")
            if ($NewVersion -gt $CurVersion)
            {
                Write-Verbose -Message "Module $($M.Name) already exists but is newer in provided resource kit and will be overwritten"
                Write-Verbose -Message "Current Version: $CurVersion"
                Write-Verbose -Message "Resource Kit Version: $NewVersion"
                $overwrite = $true
            }

            if ($CurVersion -gt $NewVersion)
            {
                Write-Verbose -Message "Module $($M.Name) already exists but the resource kit version is older than the version on the system" -Verbose
                Write-Verbose -Message "Current Version: $CurVersion" -Verbose
                Write-Verbose -Message "Resource Kit Version: $NewVersion" -Verbose
                Write-Verbose -Message "If you want to overwrite the module, please manually remove the module first: $DestinationPath" -Verbose
            }

            # Overwrite when the kit version is newer, or when -Force is used and
            # the installed version is not newer than the kit version
            if ($overwrite -or ($Force -and -not ($CurVersion -gt $NewVersion)))
            {
                if ($force)
                {
                    Write-Verbose -Message "Module $($M.Name) will be overwritten as specified by the Force switch"
                }
                Remove-Item $DestinationPath -Force -Recurse
                Copy-Item -Path $M.FullName -Destination $DestinationPath -Force -Recurse
            }
        }
        else
        {
            Write-Verbose -Message "Module $($M.Name) will be added"
            Copy-Item -Path $M.FullName -Destination $DestinationPath -Force -Recurse
        }
    }
    $ExpandDir | Remove-Item -Force -Recurse
}

So what does the script do?

  • It verifies that it is run in at least PowerShell v5. This is required as the Archive cmdlets are only available in v5.
  • It takes the path to the Resource Kit zip file as the Path parameter. The path is validated and, as an additional check, it is verified that the file has the .zip extension. If the path does not exist or the file does not have the zip extension, script execution is canceled.
  • It provides the invoker with a Force switch which forcefully overwrites all modules with the Resource Kit content, unless the module already on the system is newer (handy if you made a manual module change (which is not a best practice by the way! Create a community copy instead) and want to revert to the original).
  • It creates a temporary directory in the Temp folder to expand the Resource Kit zip file to.
  • It unblocks the zip file (unblocking all zip content with it).
  • It expands the Resource Kit zip file to the temporary directory.
  • It iterates through every module available in the Resource Kit and does the following:
    • It tests if the destination path already exists (which would mean the module is already installed).
      • If the module already exists, the version on the system and the version in the Resource Kit are compared.
        • If the version in the Resource Kit is newer, the module will be overwritten.
        • If the version in the Resource Kit is older, a verbose message is always printed informing the invoker to manually remove the existing module if so desired (fail safe).
      • If the Force switch was specified while invoking, and the currently installed version is not newer, the module will be overwritten by the module from the Resource Kit.
    • If the module does not exist on the system yet, it is copied.

Run the script from the console.

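A hypothetical invocation, with the script above saved as Install-ResourceKit.ps1:

.\Install-ResourceKit.ps1 -Path 'C:\Source\DSC Resource Kit (All Modules).zip' -Verbose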

When the script is done, you can validate that the resources are available by running Get-DscResource.
