Integrating VM Role with Desired State Configuration Part 10 – Closing notes

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

So here we are at post number 10, a good number to finalize this series with some closing notes.

I must say, creating this series has taken a lot of energy and time, but the reactions I have received so far have made it more than worth it. Thank you, community, for the great feedback and thank you for reading! I learned a lot writing the series; I hope you did too while reading it.

DSC Resources

Microsoft has put a lot of effort into producing and releasing DSC resources to the public. These resources are great, but be aware that the x prefix stands for experimental. This means you don’t get any support, and some of them may not be entirely stable or may not work the way you expect.

I took xComputer as an example in this blog series: a great resource to domain join your computer (and do a lot more), but not suitable in a Pull Server scenario where the computer name is not known up front.

Is this bad? No, I don’t think so. The xComputer resource was probably built with a certain scenario in mind, and in its intended scenario it works just great. If a resource does not work for you, you could still take a look at it and build your own ‘fork’ or you could start from scratch. The modules / resources are written using PowerShell, so if you can read and write PowerShell, you’re covered. Just be creative and you will manage!

Pull Server

Ad-hoc configurations are more dynamic than Pull Server delivered configurations. When pushing ad-hoc, you can dynamically populate the configuration document content by adding configuration data which is gathered or generated on the fly. Even the configuration block itself can contain dynamic elements. The resulting MOF file (configuration document) is created on and tailored for its destination. The downside of this approach is that configurations are applied on the fly, which can turn into an ‘oops’ situation more quickly.

Pull Server configurations are more difficult to set up because configuration documents are static and created up front. If you create a single configuration for multiple instances (e.g. a web server farm), the configuration should be node-name agnostic. The gain here is that configurations are delivered in a more controlled fashion, including the required modules. When a configuration change is made, the change can be pulled and implemented automatically.
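As a minimal sketch of what node-name agnostic means in practice (the GUID and output path are illustrative), the configuration document is generated against a configuration ID rather than a computer name, so every LCM registered with that ID pulls the same MOF:

# Node-name agnostic configuration: the MOF is generated for a configuration ID
# (GUID) instead of a computer name, so any LCM using that ID can pull it.
configuration WebServerFarm
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    node $AllNodes.NodeName
    {
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

$configData = @{
    AllNodes = @(
        # Example configuration ID shared by all instances of the web server farm.
        @{ NodeName = 'c4c4b674-8e48-4f1d-a361-7d3e4a28b6f9' }
    )
}

WebServerFarm -ConfigurationData $configData -OutputPath 'C:\DSC\Staging'

# The Pull Server also needs a checksum file next to the MOF.
New-DscChecksum -ConfigurationPath 'C:\DSC\Staging' -OutPath 'C:\DSC\Staging'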

Beware of using over-privileged accounts

Beware of over-privileged credentials used in configuration documents. Although you have taken all the necessary precautions by encrypting sensitive data using certificates, if the box gets owned the certificate’s private key is exposed and the credentials fall into the wrong hands.

For example: credentials used to interact with AD for a domain join should be able to do just that and nothing more. In a VM Role scenario I would build an SMA runbook that pre-creates a computer account as soon as the VM Role gets deployed. A low-privileged domain account is then delegated control over the object so it can perform the domain join. DSC in this case does not have to create a computer account but can simply establish the trust relationship.
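A rough sketch of what such a runbook could look like (the OU path, account name and the GenericAll delegation are illustrative; a production setup would delegate only the specific rights needed for a domain join):

# Illustrative SMA runbook: pre-create the computer account and delegate control
# over it to a low-privileged join account.
workflow New-PreStagedComputerAccount
{
    param (
        [Parameter(Mandatory)]
        [string] $ComputerName,

        [string] $OUPath      = 'OU=VMRoles,DC=domain,DC=tld',
        [string] $JoinAccount = 'DOMAIN\svc-domainjoin'
    )

    InlineScript {
        Import-Module ActiveDirectory

        # Pre-create the computer account in the target OU.
        New-ADComputer -Name $Using:ComputerName -Path $Using:OUPath -Enabled $true

        # Delegate control over just this object to the join account
        # (simplified to full control over the single object for brevity).
        $computer  = Get-ADComputer -Identity $Using:ComputerName
        $acl       = Get-Acl -Path ('AD:\' + $computer.DistinguishedName)
        $ntAccount = New-Object System.Security.Principal.NTAccount($Using:JoinAccount)
        $sid       = $ntAccount.Translate([System.Security.Principal.SecurityIdentifier])
        $rule      = New-Object -TypeName System.DirectoryServices.ActiveDirectoryAccessRule `
                                -ArgumentList $sid, 'GenericAll', 'Allow'
        $acl.AddAccessRule($rule)
        Set-Acl -Path ('AD:\' + $computer.DistinguishedName) -AclObject $acl
    }
}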

VM Role challenges

When dealing with the VM Role and external automation / orchestration, some challenges arise.

There is no way (or at least no easy way) of coordinating between the VM Role resource extension and the DSC configuration state. DSC could potentially reboot the system and continue configuring after the reboot, then reboot again and again depending on the configuration. The resource extension allows a script or application to restart the system, but treats that restart as the end of a task. As you don’t know the configuration’s reboot requirements up front, managing this in the resource extension becomes a pain, so you will probably not do it. As a result, the VM Role is reported as provisioned successfully to the tenant user while it is really still undergoing configuration.
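For reference, the reboot-and-continue behavior on the DSC side is driven by the LCM meta-configuration; a minimal sketch with illustrative values:

# With RebootNodeIfNeeded the LCM reboots whenever a resource requests it and
# resumes the configuration afterwards.
configuration LCMRebootBehavior
{
    node localhost
    {
        LocalConfigurationManager
        {
            RebootNodeIfNeeded = $true
            ConfigurationMode  = 'ApplyAndAutoCorrect'
        }
    }
}

LCMRebootBehavior -OutputPath 'C:\DSC\Meta'
Set-DscLocalConfigurationManager -Path 'C:\DSC\Meta'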

So VMs reach a provisioned state and become accessible to the tenant user while the VM is still undergoing its DSC / SMA configuration. A user can potentially log in, shut down, restart, or otherwise intervene and thereby disrupt the automation tasks. In the case of DSC this is not a big problem, as the consistency engine will just keep going until consistency is reached, but if you use SMA, for example, it becomes a bit more difficult.

Another scenario: the user logs in and finds that the configuration he expected is not yet implemented. Because the user does not know DSC is used, the VM Role is thrown away and deployed again and again until eventually he is fed up with the service received and starts complaining.

A workaround I use today at some customers is to temporarily assign the VM Role to another VMM user when the VMM deployment job is finished. This removes the VM Role from the tenant user’s subscription and thereby from their control. The downside here is obvious: to the tenant user it looks like a glitch where the VM Role just disappeared, so they may try to deploy it again. Because the initial name chosen for the Cloud Service is now assigned to another user and subscription, the name becomes available again, creating a potential naming conflict when the finished VM Role is assigned back to the original owner.

At this time we cannot lock out the VM controls from the tenant user or manipulate the provisioning state. I added my feedback for more control at the WAPack feedback site here: http://feedback.azure.com/forums/255259-azure-pack/suggestions/6391843-option-to-make-a-vm-inaccessible-for-tenant-user-a

Bugs / issues

While developing this series I faced some issues / bugs, which I logged on Connect (the Connect items are linked in the relevant posts).

What’s next?

First I will do a speaker session at the SCUG/Hyper-V.nu event about VM Roles with Desired State Configuration. And no, it will not be a walkthrough of this blog series, so there is still a lot to be done generating the content for that presentation. I think I will blog about the content of my session once I have delivered it.

Then, in the near future, I will start a new series built upon what we learned in this one. I have many ideas about what could be done, but I still have to think about scope and direction for a bit. This series took up a lot more time than I anticipated, and I changed scope many times because I wanted to do just too much. As a spoiler for the next series: I know it will involve SMA :-) Stay tuned at www.hyper-v.nu!

Integrating VM Role with Desired State Configuration Part 9 – Create a Domain Join DSC resource

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In a previous post I talked about why I did not include a domain join in my example DSC configuration:

So why not incorporate a domain join in this configuration? There is a resource available in the Resource Kit which can handle this, right?
Yes, there is a resource for this, and a domain join would be the most practical example I could come up with as well. But…

The xComputer DSC resource contained in the xComputerManagement module has a mandatory ComputerName parameter. As I don’t know the ComputerName up front (it is assigned by VMM based on the name range provided in the resource definition), I cannot generate a configuration file up front. I could deploy a VM Role with just one instance using a pre-defined ComputerName in the configuration document, but this scenario is very inflexible and undesirable. In a later post in this series I will show you how to create a DSC resource yourself that handles the domain join without the ComputerName being known up front.

In this blog post we will author a DSC resource which handles the domain join without needing to know the ComputerName up front, which makes it a suitable resource for the Pull Server scenario described in this series.

WMF 5 Preview release

When I started writing this blog series, Windows Management Foundation 5 (WMF 5) November Preview was the latest WMF 5 installment available for Windows Server 2012 R2.

I created a DSC resource to handle the domain join using the new class-based definition method, which was at the time labeled an “experimental design” (see http://blogs.msdn.com/b/powershell/archive/2014/12/10/wmf-5-0-preview-defining-quot-experimental-designs-quot-and-quot-stable-designs-quot.aspx for more info). I did this because it is the way forward for authoring DSC resources, and I find the experience of building a resource this way a lot easier.

Then WMF 5 February Preview came along and broke my resource by changing the parameter attributes (e.g. [DscResourceKey()] became [DscProperty(Key)] and [DscResourceMandatory()] became [DscProperty(Mandatory)]). I fixed the resource for the February Preview release (download WMF 5 Feb preview here: http://www.microsoft.com/en-us/download/details.aspx?id=45883).
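To illustrate the February Preview syntax, a minimal class-based skeleton for a domain join resource could look like this (a simplified sketch, not the full resource built later in this post):

# Minimal class-based resource skeleton using the February Preview attributes.
# Property names and logic are simplified for illustration.
[DscResource()]
class cDomainJoin
{
    [DscProperty(Key)]
    [string] $DomainName

    [DscProperty(Mandatory)]
    [pscredential] $Credential

    [bool] Test()
    {
        # Compare the current domain membership with the desired domain.
        return ((Get-CimInstance -ClassName Win32_ComputerSystem).Domain -eq $this.DomainName)
    }

    [void] Set()
    {
        # Join the domain; the local computer name does not need to be known up front.
        Add-Computer -DomainName $this.DomainName -Credential $this.Credential -Force

        # Ask the LCM for a reboot to complete the join.
        $global:DSCMachineStatus = 1
    }

    [cDomainJoin] Get()
    {
        $this.DomainName = (Get-CimInstance -ClassName Win32_ComputerSystem).Domain
        return $this
    }
}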

The DSC resource class definition is now declared a “stable design”. Because of this I don’t expect many changes anymore, and if a change were made, repairing the resource should be relatively easy.


If you want more insight into why class-defined resources are easier to work with than the ‘script module’ resources introduced in WMF 4, I advise you to look at an excellent blog post by Ravikanth Chaganti here: http://www.powershellmagazine.com/2014/10/06/class-defined-dsc-resources-in-windows-management-framework-5-0-preview/. If you want to know more about the PowerShell class implementation in general, you should definitely have a beer with Trevor Sullivan here: http://trevorsullivan.net/2014/10/25/implementing-a-net-class-in-powershell-v5/.


I tested my resource locally (by adding it to the module directory directly) and it worked great. I thought I had done it: a Pull Server friendly resource to handle the domain join, without the need to provide the computer name up front, using the latest and greatest class-based syntax.

So I prepped the resource to be deployed for the first time via the Pull Server and was hit by a problem. I expected modules with class-defined resources to just work when delivered via the Pull Server. The Pull Server itself actually has no problems with them at all, but the Web Download Manager component of the LCM is apparently hard-wired to check for a valid “script module” structure (at the time of writing, using the February Preview).



As a workaround, you could add the class-defined module to the “C:\Program Files\WindowsPowerShell\Modules” directory of your VM Role images directly. This results in the module download being skipped as the module is already present (but you actually don’t want to do this, because maintaining DSC resources in an OS image is undesirable).


Because WMF 5 is not RTM yet, I think the behavior of the WebDownloadManager will be adjusted in a later version. In the meantime I have logged this behavior on Connect here: https://connect.microsoft.com/PowerShell/feedback/details/1143212

To make this post a bit more future-proof, I will show you how to author both the class-based module and the script-based module. Although you can only use the script-based module today, the class-based module should be usable in the near future as well.


Windows Azure Pack vNext

Windows Azure Pack was released in October 2013. It enables you to provide cloud services from your own datacenter. Although Microsoft is working towards more consistency between Microsoft Azure and Windows Azure Pack, there are still many differences between the two. I remember someone at TechEd North America explaining to me that Microsoft Azure and Windows Azure Pack are like two circles that currently have some overlap. Microsoft is working hard to increase that overlap, eventually ending up with one circle. You have probably seen that end state many times: it is the Cloud OS vision, with one consistent platform in the middle.


But we are not there yet. There is still a lot of work to be done.

So when will Windows Azure Pack vNext be released?

Microsoft recently announced that the next versions of Windows Server and System Center will get their final release in 2016. Based on that announcement, I get a lot of questions about the future release of Windows Azure Pack.

Let me ask you this. Are you always waiting for that new phone to become available? And by the time that phone is available, another new phone with new features is announced. You decide to wait some more. In the end, you never make a decision.

The only constant in IT is change. We get access to new features that enable new scenarios. New scenarios create new challenges, which results in other features being developed to solve those challenges and open up new scenarios again. There is always some new feature or version on the horizon. You can wait forever and do nothing, or evolve with the features and scenarios as they become available.

Do not wait. It is super important to get started today if you haven’t already. Windows Azure Pack provides IaaS, Websites, Database as a Service, Service Bus and Automation, and there is a rich ecosystem of third-party solutions that enhance the stack even more. You can benefit from cloud services in your own datacenter TODAY!

We use Windows Azure Pack already. How long do we have to wait for new features?

Now, this is interesting. Microsoft releases an update rollup for all the System Center products and Windows Azure Pack every quarter. Initially these were mainly fixes to issues in the platform, but looking at the number of features added in the more recent update rollups, that is changing drastically.


Just to highlight two features from the latest update rollup.

These aren’t minor changes either. They greatly improve what is already a rich platform today, and they also ensure that you get access to new features in the current version every three months FOR FREE!! How cool is that? And YOU get to decide which features will be part of the upcoming update rollups by submitting suggestions or voting for existing suggestions on UserVoice.

Integrating VM Role with Desired State Configuration Part 8 – Create, deploy and validate the final VM Role

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In this post the VM Role Resource Definition and Resource Extension that were built in an earlier post will be updated with the additional required steps (3+). Then the VM Role gets deployed and we will look at the result to validate that everything works as expected.


Extend the Resource Extension

First off, we extend the Resource Extension with some additional steps. We start by copying the current resource extension and giving it another name so the previous state is safeguarded (if you did not create the VM Role resource definition and extension packages in part 3, you can download them here if you want to follow along: http://1drv.ms/1urL9AM).


Next open the copied resource extension in the VM Role Authoring Tool and increase the version number to 2.0.0.0.



Integrating VM Role with Desired State Configuration Part 7 – Creating a configuration document with encrypted content

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In this post the certificate files used for configuration document encryption are created. An example configuration containing encrypted sensitive data will be created as well.

Issue with CNG generated keys

While testing which certificate template settings would do the job intended for this blog post, I stumbled upon an interesting finding (bug?). Apparently the LCM uses .NET methods for accessing certificate keys. When the certificate keys are generated using the Cryptography API: Next Generation (CNG) (see: https://technet.microsoft.com/en-us/library/cc730763(v=ws.10).aspx), the private key is not accessible to the LCM. It is also not visible when using the PowerShell Cert: PS drive.

I searched the internet and found an interesting blog post which I think explains the issue. It can be found here: http://blogs.msdn.com/b/alejacma/archive/2009/12/22/invalid-provider-type-specified-error-when-accessing-x509certificate2-privatekey.aspx.

The blog post states:

CRYPT_ACQUIRE_ALLOW_NCRYPT_KEY_FLAG: This function will attempt to obtain the key by using CryptoAPI. If that fails, this function will attempt to obtain the key by using the Cryptography API: Next Generation (CNG). The pdwKeySpec variable receives the CERT_NCRYPT_KEY_SPEC flag if CNG is used to obtain the key.

pdwKeySpec [out], CERT_NCRYPT_KEY_SPEC: The key is a CNG key.

.NET is not CNG aware yet (at least up to version 3.5 SP1). It uses CryptAcquireContext instead of CryptAcquireCertificatePrivateKey and CryptAcquireContext has no flags to deal with CNG.

A possible workaround to this may be to use CryptoAPI/CNG API directly to deal with CNG keys.

Apparently the issue persisted in .NET 4+. Let’s look at the differences.

When using a legacy cryptographic service provider (CSP), the private key is discovered and shown when enumerating the certificate through the Cert: PS drive.


When the LCM needs it, it is accessible.


When a CNG provider is used instead, the Cert: PS drive does show that a key is present, but it does not discover and enumerate the private key.


When the LCM needs it, it is not accessible.

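The same difference can be checked from PowerShell; a small sketch (the subject filter is just an example) that reports whether the .NET PrivateKey property the LCM relies on can actually be read:

# Check whether the private key is reachable through the .NET property the LCM uses.
# The subject filter is illustrative; point it at your encryption certificate.
Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like '*DSCEncryption*' } |
    ForEach-Object {
        try   { $accessible = ($null -ne $_.PrivateKey) }
        catch { $accessible = $false }

        [pscustomobject]@{
            Subject              = $_.Subject
            HasPrivateKey        = $_.HasPrivateKey   # true for both CSP and CNG keys
            PrivateKeyAccessible = $accessible        # false when a CNG provider was used
        }
    }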

I reported this as a bug on Connect. If you find this important, please vote it up: https://connect.microsoft.com/PowerShell/feedback/details/1110885/private-key-not-accessible-for-dsc-lcm-when-key-is-generated-using-cng-instead-of-legacy-csp


Interviewed by Carsten Rachfahl on Windows Azure Pack

Besides his mobile datacenter, Carsten Rachfahl also brings his recording equipment to every event. He had asked me for an interview a couple of times, but we never got round to it. At the Technical Summit in Berlin I met with Carsten and we finally got some time to talk. Check out the recording here.


Did you ever have a look at all the work Carsten is doing? It’s just unbelievable. As a Hyper-V MVP he makes podcasts, conducts interviews, writes blog posts, presents at events and also finds some time to work. We did a podcast on Windows Azure Pack about a year ago.

Carsten’s blog is in German, but still very interesting to check out. Have a look here.

Integrating VM Role with Desired State Configuration Part 6 – PFX Repository Website

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In this post the PFX Repository website is created, which is accessed during VM deployment to download the PFX container belonging to the configuration ID. As a DSC configuration ID can be assigned to many LCM instances simultaneously, the client authentication certificates cannot be used for configuration document encryption purposes, as those certificates are unique to each instance.

PFX Website and functionality design

Every configuration document published to the DSC Pull Server will have an associated PFX container holding the public / private key pair used to encrypt and decrypt any sensitive data included in the document. If the configuration document currently does not contain sensitive data, a PFX is issued nonetheless, as sensitive data could be added at a later stage.

The PFX Website will be available over HTTPS only and will require client certificate authentication to be accessed. The client authentication certificates assigned to the VMs during deployment will be the allowed certificates.

A unique pin code used for creating and opening each PFX file will be available via the website as well. In my opinion this is better practice than using a pre-defined pin code for all PFX files. It is still not the ultimate solution, but I think the route taken is secure enough for now. If you have suggestions for improving this bit, please reach out!

The certificate containing the public key will be saved to a repository available to the configuration document generators. For now this will be a local directory.
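As a rough sketch of this part of the design (thumbprint, paths and pin generation are illustrative), exporting the key pair for the VM and the public certificate for the configuration authors could look like this:

# Illustrative only: assumes the certificate for this configuration ID is already
# present in the local machine store.
$thumbprint = 'REPLACE_WITH_THUMBPRINT'
$configId   = '0a1b2c3d-0000-0000-0000-000000000000'   # example configuration ID

# Generate a random pin and wrap it in a secure string for Export-PfxCertificate.
$pin       = -join ((48..57) + (65..90) | Get-Random -Count 8 | ForEach-Object { [char]$_ })
$securePin = ConvertTo-SecureString -String $pin -AsPlainText -Force

# The PFX (public and private key) goes to the website content directory...
Export-PfxCertificate -Cert "Cert:\LocalMachine\My\$thumbprint" `
                      -FilePath "C:\PFXSite\$configId.pfx" `
                      -Password $securePin | Out-Null

# ...and only the public certificate goes to the repository for the document authors.
Export-Certificate -Cert "Cert:\LocalMachine\My\$thumbprint" `
                   -FilePath "C:\PublicKeys\$configId.cer" | Out-Null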

Prerequisites

The computer on which the PFX Website gets deployed can either be domain joined or a workgroup member. In my case I use the DSC Pull Server from the previous post, as I don’t have a lot of resources.

Since the computer is joined to the same domain as the Enterprise CA, it already has the Enterprise CA certificate in its Trusted Root certificate store. If you choose not to deploy the PFX Website on a domain joined computer, you need to import this certificate manually.
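If that is the case, something along these lines does the trick (the file path is illustrative):

# Import the enterprise root CA certificate into the local machine Trusted Root store.
Import-Certificate -FilePath 'C:\Temp\EnterpriseRootCA.cer' `
                   -CertStoreLocation 'Cert:\LocalMachine\Root'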

A Domain DNS zone should be created which reflects the external FQDN of the PFX Website.

  • pfx.domain.tld (pfx.hyperv.nu)


Integrating VM Role with Desired State Configuration Part 5 – Resource Kit

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In this (relatively short) post the DSC modules from the DSC Resource Kit will be added to the DSC Pull Server modules repository. This makes the DSC resources available for download by the LCM instances directly from the Pull Server. When an LCM pulls its configuration, it parses the configuration for module information. If the LCM finds it is missing modules or has incorrect versions of them, it will try to download them from the Pull Server. If it can’t get them from the Pull Server, the configuration will fail.
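For reference, a module only becomes downloadable when it is placed in the Pull Server’s module path as ModuleName_Version.zip with a matching checksum file; a minimal sketch assuming the default DSC service paths (module name and version are examples):

# Package a module for the Pull Server: ModuleName_Version.zip plus a checksum file.
$moduleName    = 'xNetworking'
$moduleVersion = '2.4.0.0'
$source        = "C:\Program Files\WindowsPowerShell\Modules\$moduleName"
$zipPath       = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules\${moduleName}_$moduleVersion.zip"

Compress-Archive -Path "$source\*" -DestinationPath $zipPath
New-DscChecksum -ConfigurationPath $zipPath -OutPath (Split-Path -Path $zipPath)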

First let’s download the latest resource kit (v9 at the time of writing). You can find it here: https://gallery.technet.microsoft.com/scriptcenter/DSC-Resource-Kit-All-c449312d. Save it somewhere reachable for the DSC Pull Server.

Installing the Resource Kit

We will install the modules on the DSC Pull Server itself so they can be used for creating configuration documents later on. I created a little script to handle this process automatically.

So what does the script do?

  • It verifies that it is run in at least PowerShell v5. This is required as the Archive cmdlets are only available in v5.
  • It takes the path to the resource kit zip file as the Path parameter. The path is validated and, as an additional check, it is verified that the file has the .zip file extension. If the path does not exist or the file does not have the .zip extension, script execution is canceled.
  • It provides the invoker with a Force switch which forcefully overwrites all modules with the resource kit content, unless the module already on the system is newer (handy if you made a manual module change and want to revert to the original; manually changing modules is not best practice by the way, create a community copy instead).
  • It will create a temporary directory in the Temp folder to expand the resource kit zip file to.
  • It will unblock the zip file (unblocking all zip content with it).
  • It will expand the resource kit zip file to the temporary directory.
  • It iterates through every module available in the resource kit and does the following:
    • It tests if the destination path already exists (which would mean the module is already installed).
      • If the module already exists, the version of the module on the system and the version of the module in the resource kit are compared.
        • If the version in the resource kit is newer, the module will be overwritten.
        • If the version in the resource kit is older, a verbose message is printed informing the invoker to manually remove the existing module if so desired (fail-safe).
      • If the Force switch was specified when invoking and the currently installed version is not newer, the module will be overwritten by the module from the resource kit.
    • If the module does not exist on the system yet, it is copied.
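A condensed sketch of a script along these lines (simplified, with illustrative paths; not the full script used in this post):

#Requires -Version 5.0
# Condensed sketch: install the DSC Resource Kit modules, respecting existing versions.
[CmdletBinding()]
param (
    [Parameter(Mandatory)]
    [ValidateScript({ (Test-Path -Path $_ -PathType Leaf) -and ($_ -like '*.zip') })]
    [string] $Path,

    [switch] $Force
)

$destinationRoot = 'C:\Program Files\WindowsPowerShell\Modules'

# Expand the resource kit zip to a temporary directory (unblock it first so the
# extracted files do not carry the "downloaded from the internet" flag).
$tempDir = Join-Path -Path $env:TEMP -ChildPath ([guid]::NewGuid().Guid)
New-Item -Path $tempDir -ItemType Directory | Out-Null
Unblock-File -Path $Path
Expand-Archive -Path $Path -DestinationPath $tempDir

# Every folder containing a module manifest is treated as a module.
Get-ChildItem -Path $tempDir -Directory -Recurse |
    Where-Object { Test-Path -Path (Join-Path $_.FullName "$($_.Name).psd1") } |
    ForEach-Object {
        $kitVersion  = (Test-ModuleManifest -Path (Join-Path $_.FullName "$($_.Name).psd1")).Version
        $destination = Join-Path -Path $destinationRoot -ChildPath $_.Name

        if (Test-Path -Path $destination) {
            $installedVersion = (Test-ModuleManifest -Path (Join-Path $destination "$($_.Name).psd1")).Version

            if ($installedVersion -gt $kitVersion) {
                Write-Verbose "$($_.Name) $installedVersion is newer than the kit version; remove it manually to downgrade."
            }
            elseif (($kitVersion -gt $installedVersion) -or $Force) {
                Copy-Item -Path $_.FullName -Destination $destinationRoot -Recurse -Force
            }
        }
        else {
            Copy-Item -Path $_.FullName -Destination $destinationRoot -Recurse
        }
    }

Remove-Item -Path $tempDir -Recurse -Force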

Run the script from the console.


When the script is done, you can validate that the resources are available by running Get-DscResource.


Windows Azure Pack VM Role – Choose between differencing disks or dedicated disks

Windows Azure Pack was released in October 2013 and allows you to provide cloud services running in your own datacenter. Since its release we have deployed a lot of Cloud OS environments. Most if not all deployments contained, or were centered around, Infrastructure as a Service.

To enable infrastructure as a service in your datacenter you need a couple of components.

  • Windows Azure Pack
  • System Center Service Provider Foundation
  • System Center Virtual Machine Manager
  • Hypervisor (almost all Windows Azure Pack IaaS functionality requires Hyper-V)

As a tenant in the Windows Azure Pack portal you can interact with virtual machines and virtual networks.

For deploying virtual machines you can choose between two methods.

  • Stand alone virtual machine
  • VM Role


Stand alone virtual machine

The stand alone virtual machine is a one-to-one mapping to a VM Template in Virtual Machine Manager. The properties of the stand alone virtual machine live in VMM and can only be changed there. The stand alone virtual machine can be used to deploy a virtual machine with an operating system. The deployment wizard in Windows Azure Pack is easy and straightforward but cannot be customized. You are bound to the options in the existing wizard and the capabilities of the VM Template in Virtual Machine Manager.

VM Role

The other method Windows Azure Pack provides to deploy virtual machines is the VM Role. The VM Role uses the service template engine in Virtual Machine Manager and combines that with a customizable deployment wizard in Windows Azure Pack. On top of the stand alone virtual machine method, the VM Role provides the following capabilities:

  • Application deployment in the virtual machine as an integral part of the deployment process
  • Customizable deployment wizard
  • Better interaction capabilities with Service Management Automation
  • Deploy and manage a single tier of one or multiple instances.
  • Servicing of the application through tenant configuration
  • Versioning of the VM Role with application updating capabilities

Stand alone virtual machine or the VM Role? Now this looks like an easy choice. And every customer’s reaction to this comparison is similar: the VM Role it is.

But… there is one important thing to point out. The VM Role uses differencing disks.

Differencing disks

A differencing disk is a virtual hard disk you use to isolate changes to a virtual hard disk or the guest operating system by storing them in a separate file. A differencing disk is associated with another virtual hard disk that you select when you create the differencing disk. This means that the disk to which you want to associate the differencing disk must exist first. This virtual hard disk is called the “parent” disk and the differencing disk is the “child” disk. The parent disk can be any type of virtual hard disk (fixed or dynamically expanding). The differencing disk stores all changes that would otherwise be made to the parent disk if the differencing disk was not being used. The differencing disk provides an ongoing way to save changes without altering the parent disk. Multiple child disks can use the same parent disk.

The VM Role uses differencing disks for its virtual hard disks. A VM Role consists of one operating system disk and optionally one or more data disks. In the VM Role configuration you define information (metadata) about each disk of that VM Role: a family name and a version number. Additional filtering for the operating system disk can be set with tags.
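On a Hyper-V host you can verify this quickly by inspecting the virtual hard disk of a deployed VM Role instance (the path below is illustrative):

# Inspect a VM Role instance disk on the Hyper-V host.
Get-VHD -Path 'C:\ClusterStorage\Volume1\VMRole01\Disk0.vhdx' |
    Select-Object Path, VhdType, ParentPath

# For a VM Role disk, VhdType shows 'Differencing' and ParentPath points to the
# parent virtual hard disk that VMM placed on the host.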


Integrating VM Role with Desired State Configuration Part 4 – Pull Server

Series Index:

1. Introduction and Scenario
2. PKI
3. Creating the VM Role (step 1 and 2)
4. Pull Server
5. Resource Kit
6. PFX Repository Website
7. Creating a configuration document with encrypted content
8. Create, deploy and validate the final VM Role
9. Create a Domain Join DSC resource
10. Closing notes
__________________________________________________________

In this post the DSC Pull Server will be created and configured with Client Certificate Authentication. Let’s look at the design first.

DSC Pull Server Design

Again, as with the PKI solution, we are dealing with a chicken-and-egg situation. The company policy (explained in the introduction post) dictates that no ad-hoc DSC configurations are allowed; all DSC configurations may only be deployed via a DSC Pull Server. The DSC Pull Server will only be allowed to hand out configuration documents (MOF files) if the LCM requesting such a file is trusted / authenticated. The DSC Pull Server itself should also be trusted by the LCM instances for client certificate authentication to be available.

The DSC Pull Server website will be configured with an HTTPS binding only, and it will be made available on the default HTTPS port (443) so it is easy to expose both on-premises and on the internet. Because multiple websites will eventually be hosted on this server, Server Name Indication (SNI) will be enabled (host headers for HTTPS). The website will be configured to require both SSL and client authentication certificates.
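A sketch of what those IIS settings look like in PowerShell (the site name and host header are illustrative):

# HTTPS binding on 443 with SNI enabled, plus required SSL and client certificates.
Import-Module WebAdministration

# SslFlags 1 = enable Server Name Indication for this binding.
New-WebBinding -Name 'PSDSCPullServer' -Protocol https -Port 443 `
               -HostHeader 'dscpull.hyperv.nu' -SslFlags 1

# Require SSL and a client authentication certificate on the site.
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'PSDSCPullServer' `
    -Filter 'system.webServer/security/access' -Name 'sslFlags' `
    -Value 'Ssl,SslNegotiateCert,SslRequireCert'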

The web application pool associated with the DSC Pull Server website requires anonymous authentication to be available; when it is disabled, the website will not function, so anonymous authentication will be enabled.


Because the ‘require client authentication certificates’ setting on its own accepts client certificates issued by any of the trusted Certificate Authorities known to the web server, the IIS Client Certificate Mapping component will be installed as well to restrict this a bit more. A client certificate mapping will be configured for the DSC Pull Server website to map many certificates to one account. Only certificates explicitly issued by the Enterprise CA should be allowed, so an issuer rule will be configured to enforce this. An additional deny mapping will be configured to deny all other implicitly ‘trusted’ client certificates (certificates chaining to any of the server’s trusted Certificate Authorities).
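A sketch of that many-to-one mapping configuration (the account, password and issuer common name are placeholders):

# Illustrative many-to-one client certificate mapping for the Pull Server site.
# Requires the IIS Client Certificate Mapping Authentication feature (Web-Client-Auth).
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'PSDSCPullServer' `
    -Filter 'system.webServer/security/authentication/iisClientCertificateMappingAuthentication' `
    -Name 'enabled' -Value $true

Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'PSDSCPullServer' `
    -Filter 'system.webServer/security/authentication/iisClientCertificateMappingAuthentication' `
    -Name 'manyToOneCertificateMappingsEnabled' -Value $true

# Map all certificates matching the issuer rule below to one low-privileged account.
Add-WebConfigurationProperty -PSPath 'IIS:\' -Location 'PSDSCPullServer' `
    -Filter 'system.webServer/security/authentication/iisClientCertificateMappingAuthentication/manyToOneMappings' `
    -Name '.' -Value @{
        name           = 'DSCClientCertificates'
        enabled        = $true
        permissionMode = 'Allow'
        userName       = 'DOMAIN\svc-dscclientcert'
        password       = 'PlaceholderPassword'
    }

# Only accept client certificates issued by the enterprise CA.
Add-WebConfigurationProperty -PSPath 'IIS:\' -Location 'PSDSCPullServer' `
    -Filter "system.webServer/security/authentication/iisClientCertificateMappingAuthentication/manyToOneMappings/add[@name='DSCClientCertificates']/rules" `
    -Name '.' -Value @{
        certificateField    = 'Issuer'
        certificateSubField = 'CN'
        matchCriteria       = 'Enterprise CA Common Name'
    }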

Prerequisites

The computer on which the DSC Pull Server gets deployed can either be domain joined or a workgroup member. In my case I use another domain joined machine for simplicity.

Since the computer is joined to the same domain as the Enterprise CA, it already has the Enterprise CA certificate in its Trusted Root certificate store. If you choose not to deploy the DSC Pull Server on a domain joined computer, you need to import this certificate manually.

A Domain DNS zone should be created which reflects the external FQDN of the DSC Pull Web Service (as with the PKI solution).

  • dscpull.domain.tld (dscpull.hyperv.nu)
