The DSC Book
“Forever” Edition
Feedback

Introduction
    What is DSC?
    Not Just for Provisioning
    How Does DSC Compare to Group Policy?
    Cattle, Not Pets
    Technology, not Tool
    A 2018 Update: What is DSC Good For?

Composite Configurations
    Creating a Composite Resource
    Turning the Configuration into a Resource Module
    Using the Composite Resource
    Deploying the Composite Resource
    Approach Analysis
    Design Considerations

Partial Configurations
    Summarizing Partial Configuration Pros and Cons
    Authoring a Partial Configuration MOF
    Configuring the LCM for Partial Configurations
    Partial Configuration Dependencies
    Partial Configuration Authoritative Resources
    Mix ‘n’ Match
    File Naming Details

Custom Resources
    Before We Begin: Function-Based vs. Class-Based
    Writing the Functional Code
    Writing the Interface Module
    Preparing the Module for Use and Deployment
    Triggering a Reboot
Here, you can see the default action for a too-long line - it gets word-wrapped, and a backslash
inserted at the wrap point to let you know. We try to avoid those situations, but they may sometimes
be unavoidable. When we do avoid them, it may be with awkward formatting, such as in the above
where we used backticks (`) or:
Here, we’ve given up on neatly aligning everything to prevent a wrap situation. Ugly, but oh well.
You may also see this crop up in inline code snippets, especially the backslash.
If you are reading this book on a Kindle, tablet or other e-reader, then we hope you’ll
understand that all code formatting bets are off the table. There’s no telling what the
formatting will look like due to how each reader might format the page. We trust you know
enough about PowerShell not to get distracted by odd line breaks or whatever.
When you write PowerShell code, you should not be limited by these constraints. There is no reason
for you to have to use a backtick to “break” a command. Simply type out your command. If you
want to break a long line to make it easier to read without a lot of horizontal scrolling, you can hit
Enter after any of these characters:
• Open parenthesis (
• Open curly brace {
• Pipe |
• Comma ,
• Semicolon ;
• Equal sign =
This is probably not a complete list, but breaking after any of these characters makes the most sense.
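For example, this pipeline (the command itself is just an illustration) breaks cleanly after each pipe and after the open brace, with no backticks required:

Get-Service |
    Where-Object { $_.Status -eq 'Running' } |
    Sort-Object -Property DisplayName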
Anyway, we apologize for these artifacts.
Code Samples
Longer code listings are available at http://github.com/concentrateddon/TheDSCBookCode. Shorter
listings - because they’re often not useful as-is - are not necessarily replicated there. We really mean
for most of the code in this book to be illustrations, not copy-and-paste usable.
Introduction
Desired State Configuration, or DSC, was introduced as part of Windows Management Framework
(WMF) 4.0 - the same package that included Windows PowerShell 4.0. DSC is essentially the final
step originally envisioned for Windows PowerShell in inventor Jeffrey Snover’s Monad Manifesto³.
DSC builds upon a decade-long investment in Windows PowerShell that included broad cmdlet
coverage, a robust (yet simple) scripting language, Remoting, and more.
What is DSC?
Many organizations - maybe even the one you work for - have written configuration documents that
describe how certain computers should be configured. “In our environment, a domain controller has
this set of services installed, these pieces of software, and is configured this way.” When someone
sets up a new machine, they often consult these standards, which are designed to help configurations
stay consistent, reliable, and secure.
At a high level, DSC simply asks you to write those configuration documents in a specific way that
a computer, as well as a human, can understand. Rather than simply writing, “a domain controller
should have Active Directory, DNS, and DHCP installed,” you might write it like this:
WindowsFeature DnsServer {
    Name = "DNS"
    Ensure = "Present"
}
WindowsFeature DhcpServer {
    Name = "DHCP"
    Ensure = "Present"
}
WindowsFeature AD {
    Name = "ADDS"
    Ensure = "Present"
}
You would then deploy that configuration document directly to each domain controller, including
any new ones you want to create. DSC “reads” that document, and does whatever is necessary to
“make it happen.” It checks to make sure the specific services are installed, and if they’re not, it
installs them. Every so often, it can re-check to make sure everything is still configured correctly,
and it can even report back to you and let you know that everything is in order (or not).
³https://leanpub.com/themonadmanifestoannotated
The actual process is a little more complex (but only a little bit), and your configurations can
obviously be a lot more detailed and comprehensive. But, at a high level, this is what DSC does. It
takes a human-readable document that is also machine-readable, and it does the work of configuring
systems to match your specification.
Because DSC doesn’t readily support a lot of application-time logic, it’s less suitable for the dynamic,
people-centric world of client computers.
It’s entirely possible that, in the future, Microsoft could rip out the guts of Group Policy and replace
it with DSC. Client-side logic could still pull DSC configurations dynamically from an AD domain
controller, just as Group Policy does today. It’d be a way of massively extending what Group Policy
can do, while keeping all the dynamic-targeting advantages of Group Policy. But today, you’ll mostly
see DSC discussed in terms of server workloads and more-or-less static configurations.
servers that are all serving the same function. You just won’t use a special, unique name for that
server.
This is a huge shift in thinking for most Windows administrators. Fortunately, you don’t need to go
all-in with that concept right off the bat. DSC will still let you set things up more or less the way
you do today, although you may find some parts of it seem inefficient or cumbersome. As you start
to shift your thinking, though, DSC will become an easier and more efficient way to manage your
machines.
Static and long-lived. We don’t think much about bare-metal servers these days, but by definition
they’re not something you just deploy in a moment and tear down a moment later. Bare-metal is
time-consuming and can be delicate; DSC is a great way of keeping these workloads configured the
same way, all the time. Proof? Azure Stack, a bare-metal solution that relies on thousands of lines
of DSC code (and more than a few custom resources) to keep its delicate balancing act in order.
Legacy. Ephemeral workloads might be the New Hotness, but we all have (and likely always will
have) workloads that require a lot of delicate configuration, need to run in their own VM, and
are special snowflakes. DSC can be great at helping to keep these properly configured. You might
not ever use DSC to initially build one of these servers, but you might well use it to help control
configuration drift on them.
State-Coupled. We all have plenty of workloads where the configuration of the workload itself - its
“state,” if you will - is part of its install. That is, perhaps the data used by the workload is elsewhere,
but the workload’s identity and “bootstrap” code is part of the “system drive.” Think Exchange Server,
SharePoint Server, and SQL Server. These are “kinda special snowflakes” in that they’re difficult to
rebuild, and by their very nature you wouldn’t just tear them down and rebuild them all the time.
Domain controllers fall into this category, too - can you imagine the replication workload if you
just tore down half your DCs and built them from scratch every night? DSC can also be good at
maintaining configuration on these workloads.
We do not see a world where DSC is running on every single computing workload you own. The trick
is in finding the right place and time, and using the tool how it was intended to be used. Nobody’s
taking a Camry to an F-1 race, but nobody’s saying a Camry is a bad car. You just need the right
tool for the workload at hand.
Part 1: Design Decisions
DSC isn’t something you can just start installing. It’s not a solution like Exchange Server or SQL
Server; you don’t “install it, and then figure it out.” DSC is a set of technology components, and
they’re designed to fit together in a lot of different ways, to accommodate a lot of different business
needs. It’s critical that you understand these components up front, or you’re likely to deploy
something that’s painful at best, and a security nightmare at worst.
Designing DSC
Deciding how to “do” DSC in your environment is a huge deal. Decide wrong, and you could end
up not getting the flexibility you need, or you could end up making things way more complicated
than they need to be.
There are two important things to remember when it comes to designing your DSC approach:
• There is no one approach that is right for everyone - you need to choose an approach that
meets your needs.
• You need to understand the options thoroughly before deciding.
This chapter will help you understand what DSC is doing, how it does it, and what your options are
for designing your DSC approach.
The Players
The main actor in the world of DSC is the Local Configuration Manager, or LCM. The LCM is a
little piece of software that lives on each computer (“target node”) that you manage with DSC. The
LCM’s job is to read your configuration documents, see if the local computer is in compliance with
its configuration document, and potentially fix the computer so that it is in compliance. The LCM is
really the main “moving part” in DSC.
A secondary player in the DSC game is your authoring computer. This is simply the machine where
you sit, and whose keyboard you press with your fingers.
A third and optional player is a DSC Pull Server. This is a normal Windows Server (usually Windows
Server 2012 or later) running Internet Information Services (IIS) and a special piece of additional
software that is the Pull Server itself. Technically, you can also turn a file server into a Pull Server,
but that lacks some of the functionality and security options of an IIS-based Pull Server. In this
book, I’ll assume you’re using the IIS-based Pull Server, rather than a file server. The Pull Server
is where you put all your configuration documents, as well as supporting code. You then configure
target nodes’ LCMs with the location of the Pull Server, and they go to the Pull Server to find their
configuration documents. It’s a little bit like how client computers “pull” their Group Policy objects
(GPOs) from a domain controller.
The Pull Server can also host a component called the Reporting Server. This is a separate web service,
also running under IIS, where target nodes’ LCMs can check in with a status report. This lets nodes
indicate that they’re in compliance with their configuration, or that they’re awaiting a restart, or
that a problem has occurred. It’s important, though, to understand the scope of the Reporting Server.
First, that LCM status is just a single-digit code. It isn’t a whole problem report, or configuration
description, or anything like that. Also, there’s no actual… um… “reporting.” The Reporting Server
tracks node status in a simple local database, and it doesn’t provide any native reporting, dashboards,
or anything like that. This is the “Technology vs. Tool” thing I pointed out in the Introduction - we
have the technology, but no native tools for it.
So those are the players in the game. Now, you need to know the pieces that they play with.
The Pieces
The first and main “piece” in DSC is the Managed Object Format (MOF) file. This is a plain-text file
that contains your configuration specifications. It’s the “configuration document” that I’ve loosely
referred to. MOF is actually an industry-standard format developed by the Distributed Management
Task Force (DMTF), an industry group that includes Microsoft and other vendors. A MOF is not a
script - it’s simply a bunch of text instructions, written in a specific format that a computer can
easily interpret. MOF files aren’t terribly human-readable, but you can work through one if you
have to. Think of them as a kind of fancy INI file. The important thing to understand is that a MOF
is a static file - it doesn’t (except for one hair-splitting exception) contain any executable code, any
logic, or any decision-making capability. In that way, a MOF is like a Word document - it’s just a
static description of what a computer should look like.
So how do you create a MOF file? Well, you could use Notepad, if you’re the nitty-gritty, I-eat-
nails-for-breakfast kind of person. But nobody does that. Another, easier way to create a MOF is a
PowerShell configuration script. A configuration script is a special kind of script that, when you run
it, produces one or more MOF files. Configuration scripts can have incredibly complex logic that
governs the final MOF contents - or a configuration script can be essentially static, contain no logic,
and produce the same MOF file every single time you run it.
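As a minimal sketch (the configuration name, node name, and output path are just examples), a configuration script and the command that runs it might look like this; running it writes one MOF per node into the output folder:

configuration TestConfig {
    node 'SERVER1' {
        WindowsFeature DnsServer {
            Name   = 'DNS'
            Ensure = 'Present'
        }
    }
}

# Running the configuration produces C:\DSC\TestConfig\SERVER1.mof
TestConfig -OutputPath C:\DSC\TestConfig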
There’s a special flavor of MOF called a meta-configuration MOF. Again, these can be created in
Notepad or, more likely, by writing and running a PowerShell configuration script. These meta-
MOFs are used to change the configuration of the LCM itself. They’re how you tell the LCM how
often to run, where its Pull Server lives, and so on.
The second “piece” in DSC is the collection of DSC resources used by the LCM. A resource is usually
just a PowerShell script, although they can also be written in a .NET language like C# and compiled
into a DLL. One or more resources can be collected into a DSC resource module, which is what you
actually distribute. Your authoring computer will need a copy of any DSC resource modules that
you plan to use, and each target node will need a copy of any DSC resource modules that are used
by its configuration.
DSC resources provide the instructions for how the LCM checks the node’s current configuration,
and fixes the configuration if needed. Each DSC resource consists of three basic components: Get,
Set, and Test. So let’s say you have a resource designed to handle the installation and removal of
Windows features (this exists - it’s the WindowsFeature resource). Your MOF might instruct the
LCM to ensure that the Web-Server Windows feature is present. When the LCM reads that MOF,
it will load the WindowsFeature resource and run its Test function, passing the Web-Server feature
name. The Test function will see if Web-Server is installed or not, and return either True or False. If
Test returns False, then the LCM knows that the node is not in compliance with the configuration
MOF. If it’s been configured to fix that, then the LCM will call the Set function of WindowsFeature,
again passing the Web-Server feature name. The Set function will install the feature, bringing the
node into compliance.
To summarize that: the MOF contains a description of what you want the node to look like, and
the DSC resources know how to check the configuration and, if necessary, fix it. So your abilities
with DSC are limited only by the DSC resources that you have. Anything you have a resource for,
you can manage via DSC. And, because DSC resources are just PowerShell scripts, you can always
create a DSC resource for whatever you might need.
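To give you a rough idea of the shape of a function-based resource (this is only a skeleton, not a working resource - the real details come in the Custom Resources chapter), the three components are simply three PowerShell functions:

function Get-TargetResource {
    param([Parameter(Mandatory)][string]$Name)
    # Return a hashtable describing the item's current state
    @{ Name = $Name; Ensure = 'Present' }
}

function Test-TargetResource {
    param([Parameter(Mandatory)][string]$Name, [string]$Ensure = 'Present')
    # Return $true if the node already matches the desired state, $false if not
    $false
}

function Set-TargetResource {
    param([Parameter(Mandatory)][string]$Name, [string]$Ensure = 'Present')
    # Make whatever changes are needed to reach the desired state
}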
Uniqueness in MOFs
There’s an important technical detail about MOFs that you need to know before we go any further.
Take a look at this chunk of a PowerShell configuration script:
WindowsFeature DnsServer {
    Name = 'DNS'
    Ensure = 'Present'
}
This is what I call a configuration setting. It specifies the configuration for one specific item. The
WindowsFeature part of that is the name of a resource, which would be contained in a resource
module, and which would be locally installed on the node. Following that is the name for this
configuration setting. I made this up - “DnsServer” is meaningful to me, so that’s what I used. This
name does need to be unique - in the final MOF, no other configuration setting can use the same
name. But I could also have used “Fred” or “Whatever” as legal names - DSC doesn’t care what the
name is, so long as it’s unique.
Inside the configuration setting are its properties, including Name and Ensure. These properties and
their values will be passed to the DSC resource by the LCM, and tell the resource what, specifically,
it’s supposed to be checking and doing. Every DSC resource is designed to have one or more keys,
which are the properties they use to identify some manageable component of the computer. In the
case of the WindowsFeature resource, the key happens to be the Name property. Here, I’ve specified
“DNS” for the name, because “DNS” is the name of the Windows Feature I want installed.
Because Name was designed to be the key for the WindowsFeature resource, it must be unique for
all instances of the WindowsFeature resource within the final MOF. This restriction is designed to
stop you from doing stuff like this:
WindowsFeature DnsServer1 {
    Name = 'DNS'
    Ensure = 'Present'
}
WindowsFeature DnsServer2 {
    Name = 'DNS'
    Ensure = 'Absent'
}
This doesn’t make any sense! You want DNS installed - but not installed? Make up your mind!
Because this kind of configuration can throw the LCM into an infinite loop of installing and
uninstalling, DSC won’t allow it. In fact, DSC will detect the conflict when you try to create a MOF
from a configuration containing these settings: an error will be returned, and no MOF will be output.
This is really, really important - and it can be tricky, because as you physically write a configuration
script, there’s no easy or obvious way to see exactly what the key properties are. You can look it up,
and I’ll show you how - but you have to be very careful not to create duplicates.
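One quick sanity check you can run from your authoring computer is to ask PowerShell what properties a resource accepts; for example (identifying which property is the key is covered later in the book):

# Show the resource's syntax, including every property it accepts
Get-DscResource -Name WindowsFeature -Syntax

# Or inspect the property list directly
(Get-DscResource -Name WindowsFeature).Properties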
Once you have a MOF, it can get to a target node in one of three ways:
• Manual file copy. You simply have to drop the MOF in the correct location (and run a command
to encrypt it, but we’ll get into that later). The LCM will see the new MOF, and will process it
on the next run.
• Push mode. Usually from your authoring computer, you’ll run Start-DscConfiguration to push
one or more MOF files to target nodes, with a maximum of one MOF per node. This is a one-
time operation that relies on Remoting to provide the connection to the nodes.
• Pull mode. If you configure the LCM to do so, it can look to one (or more, which I’ll explain
in a bit) Pull Server(s) for configuration MOFs. By default, the LCM will check for a new MOF
every 30 minutes. If it finds one, it’ll download and start processing the new MOF.
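For example, a push from your authoring computer might look something like this (the folder of MOFs and the server name are placeholders):

Start-DscConfiguration -Path C:\DSC\MyConfig -ComputerName SERVER1 -Wait -Verbose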
Pull Servers can also contain ZIPped copies of DSC resource modules. When a node gets a MOF, it
does a quick pre-scan of the MOF to make sure all needed DSC resources are already locally installed.
If they’re not, then the LCM can be configured to download needed DSC resource modules from
a Pull Server. The LCM then unzips and installs those locally, and then runs its MOF. The LCM
can be configured to “pull” resource modules in this way, even if it’s not “pulling” MOFs. In other
words, you can configure the LCM to wait for you to push it a MOF, but still download needed DSC
resource modules from a Pull Server.
Configuration Variations
So far, I’ve described the DSC process with the assumption that you’re creating one MOF for each
target node. That’s a good way to understand how DSC works, but it obviously might not be ideal
for every environment. After all, you probably have many different kinds of servers, but a lot of
them probably have many similarities. So some of the first design questions people often ask about
DSC tend to revolve around modularizing the MOFs.
By the way, in many of the following examples, I’m going to be referring to DSC resources that are
fictional, and even using real ones with values that might not work in the real world. The goal of this
chapter is to help you understand the design decisions you need to make, not to provide a reference
of resources or syntax. So I’ve tried to construct these examples in a way that best illustrates the
concepts. I just wanted you to know that, so you don’t try copying these into a real configuration.
I’ll provide plenty of functional examples in later chapters.
$ConfData =
@{
    AllNodes =
    @(
        @{
            NodeName = "SERVER1";
            Role = "WebServer"
        },
        @{
            NodeName = "SERVER2";
            Role = "SQLServer"
        }
    )
}
The MOF files for each of those two nodes would therefore be different. Configuration data blocks
can be defined in the same physical file as a configuration script, but more often the data is stored
in a separate file, and then loaded and passed to the configuration script when that script is run.
While this technique does enable a single configuration script to handle a wide variety of situations,
it also becomes possible for that configuration script to become incredibly complex - and potentially
harder to maintain.
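As a rough sketch of how that data gets used (the role names echo the block above, and the resource values are only illustrative), the configuration script filters $AllNodes by Role, and you run it with the data passed in:

configuration RoleBasedConfig {
    node $AllNodes.Where({ $_.Role -eq 'WebServer' }).NodeName {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# Produces a MOF only for nodes whose Role is WebServer (here, SERVER1)
RoleBasedConfig -ConfigurationData $ConfData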
Simple Includes
One way to make a complex configuration script easier to maintain is to break it into multiple files.
Potentially, some of those chunks could then be reused across multiple configuration scripts. This
isn’t actually a complex technique; you just use standard PowerShell dot-sourcing. For example,
consider the following as a sort of “master” configuration script:
configuration DatabaseServer {
    param(
        [string[]]$ComputerName
    )
    node $ComputerName {
        . ./standard-config.ps1
        . ./security-config.ps1
        . ./sql-server-2012-config.ps1
    }
}
When run, this configuration script would “include” (dot-source) three different PowerShell scripts.
Those scripts would not contain the outer configuration block, nor would they contain a node block.
Instead, they would simply contain configuration settings. As you can see, some of those three - like
standard-config.ps1 or security-config.ps1 - might also be used in “master” configuration scripts that
applied to different server roles.
This technique can also make use of configuration data blocks, enabling you to add per-node logic
to each of the “included” scripts.
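To make that concrete, an included file such as standard-config.ps1 might contain nothing but configuration settings (the feature names here are just examples):

# standard-config.ps1 - settings only; no configuration or node block
WindowsFeature Backup {
    Name   = 'Windows-Server-Backup'
    Ensure = 'Present'
}

WindowsFeature SMB1 {
    Name   = 'FS-SMB1'
    Ensure = 'Absent'
}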
Composite Configurations
In a composite configuration, you actually create a complete configuration script. It can even be set
up to accept input parameters. For example, consider this extremely simple configuration script:
configuration customsyncfile {
    param(
        [string]$FilePath
    )
    node {
        SyncFile SyncFile1 {
            SourceFilePath = $FilePath
            DestFilePath = 'c:\inetpub\wwwroot'
            SyncMode = 'Continuous'
            SyncMethod = 'FileHash'
            SyncTimeout = 5000
            SyncWait = 100
        }
    }
}
In this example, the configuration has been set up to accept a single input parameter, FilePath. Inside
the configuration, I’ve referred to a (fictional) DSC resource named SyncFile. It obviously accepts a
number of properties; I’ve provided values for each of these, but the SourceFilePath property will
use whatever was given to the configuration’s FilePath input parameter.
This configuration would then be saved in a way that made it look like a DSC resource named
“customsyncfile.” This then becomes a component configuration. I would then write a second,
composite configuration, perhaps like this:
configuration WebServer {
    param(
        [string[]]$ComputerName
    )
    node $ComputerName {

        customsyncfile WebServerFiles {
            FilePath = '\\dfsroot\webserverfiles\sources\source1'
        }

        WindowsFeature IIS {
            Name = 'Web-Server'
            Ensure = 'Present'
        }

    }
}
My composite configuration has two configuration settings, one of which is using the WindowsFeature
DSC resource, and the other of which is using my customsyncfile component resource. As
you can see, I’m able to provide a FilePath to the component resource, which will be used as its
SourceFilePath internally. I don’t need to specify any other properties for customsyncfile, because
it wasn’t set up to accept any additional input. Everything else that the underlying SyncFile DSC
resource needs has already been hardcoded into the component configuration.
Composite configurations are a good way to modularize configuration elements. A component
configuration can contain as much or as little as you need. You could, for example, have an entire
component configuration that took care of every configuration element that your organization’s
IT Security team needed, while another one handled installation of standardized elements like
management agents, encryption certificates, and so on. Each component configuration could be
“owned” by a different team within your organization, helping to provide some separation of duties.
Keep in mind that the component configurations do need to be present on the target nodes that
use them - any DSC resources used in the component configurations also need to be installed.
Component configurations can, however, be ZIPped up and distributed by means of a Pull Server,
just like any DSC resource module.
I’ll cover the technical details of composite configurations in a later chapter.
Partial Configurations
Partial configurations are a new feature in PowerShell v5, and they’re another way of modularizing
configurations. They work best with a Pull Server setup, although it’s possible to push partial
configurations as well, and you can also have a mixed mode where you push some partials, and
let the node pull other partials.
So what’s a partial configuration? Basically, you just create a bunch of different configuration scripts,
run them to produce MOFs, and drop all the MOFs onto a Pull Server. You then configure a node’s
LCM to pull one or more of those MOFs. That configuration gets into some technical details that
I’ll cover later; for now, just understand that a single node can pull multiple MOFs. Some of those
MOFs might be unique to that node, while others might be shared across many nodes.
When the LCM grabs all these MOFs, it first merges them into a single MOF. That’s a bit more
complex than just concatenating the files, but the LCM knows what to do.
But a lot of us in the community have mixed feelings about partial configurations, and our
trepidation comes from the uniqueness constraints I described earlier in this chapter. When you’re
authoring partial MOFs, it becomes really easy to accidentally reuse the same configuration setting
name. It also becomes easy for two separate MOFs to contain duplicate resource key properties.
For example, MOF A might set the IIS WindowsFeature to install, while MOF B sets the IIS
WindowsFeature to uninstall. You won’t necessarily be aware of that as you sit on your authoring
computer, but once the LCM merges the MOFs, it’ll detect the conflicts and completely abort the
configuration.
The problem we have with this is that the problem is occurring as far from you, the administrator,
as possible. It’s happening in the context of a background scheduled task, running on a remote
computer. Debugging that situation - or even realizing that the LCM is exploding - can be hard.
Right now, we don’t have any way of locally saying, “hey, suppose I send these 10 MOFs to a node -
will any uniqueness problems pop up?” So there’s no local “combine and test” or “what if” that you
can run.
Our feeling is that a smarter Pull Server is what’s needed. A node could check in, and say, “Hey,
I need this list of 8 MOFs, please.” The Pull Server could then load those off disk, combine them
on the server, and send the merged MOF to the LCM. But the Pull Server could also detect any
conflicts (and potentially even run some dynamic logic to alter the merged MOF based on criteria
you provide). Any problems would be happening in one central place - the Pull Server - and could
be logged, easier to get to, and easier to troubleshoot and solve. Sadly, right now, we don’t have this
kind of Pull Server.
In case you’re still interested in partial configurations, I’ll cover their technical details in an
upcoming chapter.
You can also combine these approaches: a configuration script might use a dot-sourced inclusion, and
that inclusion might itself be a composite configuration that relies on underlying component
configurations. Any of those can have logic based on configuration data blocks. You can do it all.
Understanding Dependencies
There’s one more design thing you need to understand about configurations, and that’s the order
in which the LCM executes configuration settings from a MOF. Although today’s LCM is single-
threaded, Microsoft envisions a potential multi-threaded LCM someday. For that reason, you can’t
rely on the LCM simply executing configuration settings from the top of the MOF through to the
bottom, as you might expect. But you will definitely have situations where you need such-and-such
a configuration to finish before some other configuration. For example, you might want to copy an
installer package to the node, and then run the installer. Doing those in reverse won’t work, and so
you need some way of telling the LCM which one to do first. That way is called DependsOn.
WindowsFeature ActiveDirectoryFeature {
    Name = 'ADDS'
    Ensure = 'Present'
}

ADDomainController MyDC {
    Ensure = 'Present'
    DomainName = 'MyCompany'
    DCType = 'Regular'
    DependsOn = '[WindowsFeature]ActiveDirectoryFeature'
}
Here, I’ve specified a (fictional) DSC resource that sets up an AD domain controller. However, that
configuration setting can’t run until the WindowsFeature resource named “ActiveDirectoryFeature”
has run.
You can also specify that a configuration setting not run until multiple other settings have run:
DependsOn = '[WindowsFeature]IIS','[WindowsFeature]DNS'
When the LCM executes its MOF, it will scan for these dependencies and sort the configuration
settings accordingly. Note that there are some caveats when it comes to using DependsOn in a
composite resource; I’ll cover those in more detail in the chapter on composite configurations.
Part 2: Configuring the Infrastructure
With some of your design decisions made, you can start getting things set up in your environment.
In this Part, we’ll discuss setting up a Pull Server and configuring the LCM. Now, we’re assuming
you’ve already started making some design decisions, and as you read through these chapters, you
should be able to start finalizing them. But one problem with DSC is that design can be a bit of a
chicken-and-egg situation. You probably won’t know how you want your LCMs configured until
you know how you’re going to create and modularize your configurations. So while we’re going to
show you all the possibilities, we can’t recommend one to you - you may need to work through a
bit more of the book so that you can start making some concrete decisions for your environment.
Infrastructure Prerequisites
There are a few things that we assume you have in place in order for DSC to work. It’s not a huge
list, but it’s an important one.
• All nodes that will be used to author, push, or receive configurations are running PowerShell
v5 or later. Keep in mind, this book doesn’t cover v4.
• WS-Management (WS-MAN) traffic is permitted on the network. This isn’t PowerShell
Remoting; DSC uses CIM, which also runs over the WS-MAN protocol. It will be enabled by
default on nodes that have PowerShell v5 installed, but you must ensure it’s not being blocked
by firewalls or other network elements.
• For authoring nodes, we assume that they have Internet connectivity to download new DSC-
related resources, and that they have the PowerShell ISE installed (which would be the case by
default on Windows client computers).
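A quick way to spot-check those prerequisites (the server name is a placeholder):

# Confirm the local PowerShell version is 5.0 or later
$PSVersionTable.PSVersion

# Confirm WS-MAN traffic can reach a target node
Test-WSMan -ComputerName SERVER1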
Configuring the LCM
The Local Configuration Manager, or LCM, is the “agent” for DSC. It lives and runs on each target
node, accepts configurations (via either pull or push), and executes those configurations. In WMF
v4, the LCM ran as a Scheduled Task; as of v5, that’s not the case. You can’t schedule when the LCM
will run, but you can configure some parameters related to how often it runs. That’s what this chapter
is all about, along with the LCM’s other configuration options.
To see how a node's LCM is currently configured, you can run Get-DscLocalConfigurationManager on that node. On a fresh Windows computer, the output looks something like this:

ActionAfterReboot              : ContinueConfiguration
AllowModuleOverWrite           : False
CertificateID                  :
ConfigurationDownloadManagers  : {}
ConfigurationID                :
ConfigurationMode              : ApplyAndMonitor
ConfigurationModeFrequencyMins : 15
Credential                     :
DebugMode                      : {NONE}
DownloadManagerCustomData      :
DownloadManagerName            :
LCMCompatibleVersions          : {1.0, 2.0}
LCMState                       : Idle
LCMStateDetail                 :
LCMVersion                     : 2.0
StatusRetentionTimeInDays      : 10
PartialConfigurations          :
RebootNodeIfNeeded             : False
RefreshFrequencyMins           : 30
RefreshMode                    : PUSH
ReportManagers                 : {}
ResourceModuleManagers         : {}
PSComputerName                 :
Let’s walk through each of those. But, before we do that, I have to point out that this set of
configuration properties is from a Windows computer. Microsoft produces an LCM for Linux, too,
and that LCM has a smaller set of properties (at least, as of this writing; it’s unclear at this point
how hard Microsoft will try to keep the Linux LCM in feature-lockstep with the Windows LCM).
Also, notice the lack of an AgentId - that’s because this is a fresh machine, and it doesn’t yet have
one.
ActionAfterReboot
This determines what the LCM will do after the computer is rebooted. The default, ContinueConfiguration,
will let the LCM pick up where it left off and continue processing the current configuration.
You can also choose StopConfiguration, which, as the name implies, stops the configuration from
running further. I imagine you’d really only use StopConfiguration in certain troubleshooting or lab
situations.
AllowModuleOverWrite
When set to $False - which is the default - the LCM will not download any DSC resource modules
that already exist locally. You might wonder why the heck that’s the default. Well, it’s for safety.
The presumption is that, once you put a resource onto a node, the node depends on that exact
resource in order to run configurations properly. If you change or update a resource, you’re meant to
apply a new version to it. Your configurations can then reference that new version, and if properly
configured, the LCM will download that new version. This AllowModuleOverwrite setting only
applies to downloading new copies of the exact same module. In a test environment, where you’re
frequently iterating modules, you’ll probably set this to $True.
Why would you not set this to $True in production? Well… predictability and stability. If an
LCM is allowed to overwrite local modules, that can be convenient for you when making ad-hoc
changes. But that subverts the entire best practice of code testing, versioning, and so on. Ideally, if
Configuration A relies on Module B version 1.2, you don’t want Module B version 1.3 showing up
out of the blue. It might not work the way Configuration A expects - for example, Module B version
1.3 might require different properties or property values, which Configuration A doesn’t provide.
The correct practice would be to deploy Module B v1.3 as a separate thing, living independently of,
and side-by-side with, v1.2. You would then author Configuration C, which specifically references
Module B v1.3, and deploy Configuration C to your nodes. That way you always know what each
node is using, and troubleshooting becomes a lot easier.
CertificateID
This is the certificate thumbprint (which you can obtain by exploring the CERT: drive) of the
encryption certificate that the LCM will use to decrypt any encrypted information from its MOF.
Most commonly, this is used to decrypt encrypted logon credentials. We’ll cover certificates pretty
thoroughly in an upcoming chapter.
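For example, you can list candidate thumbprints from the local machine store like this:

Get-ChildItem -Path Cert:\LocalMachine\My |
    Select-Object -Property Thumbprint, Subject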
ConfigurationDownloadManagers
This is a collection of pull servers - that is, servers from which MOF files will be pulled.
ConfigurationID
This is a Globally Unique Identifier (GUID) that you assign to the LCM. In WMF v4, this was the
only way to tell the LCM which MOF to pull from a pull server. In WMF v5, you can also use
configuration names. However, when using ConfigurationID, the node will essentially look on the
pull server for a MOF file whose filename is the same GUID. Note that a GUID isn’t just a random
string of hexadecimal characters; it’s a specific thing that must be generated. In PowerShell, you can
use [guid]::newGuid() to generate one. This is really important: If you specify a ConfigurationID,
you can run against a v4 or v5 pull server, and the v5 pull server will follow v4 behaviors. If you
omit the ConfigurationID, whether you include Configuration Names or not, the pull server must be
v5 or later, will behave as v5 or later, and requires a registration key to be specified in a Repository
section.
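As a hedged sketch of the ConfigurationID workflow (the paths are placeholders), you'd generate the GUID, name the node's MOF after it, and create the checksum the pull server expects:

# Generate a GUID to assign as the node's ConfigurationID
$guid = [guid]::NewGuid().Guid

# On the pull server, the MOF is named after that GUID, with a matching checksum
Rename-Item -Path C:\DSC\SERVER1.mof -NewName "$guid.mof"
New-DscChecksum -Path "C:\DSC\$guid.mof"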
ConfigurationMode
This controls the LCM’s actions. The following values are allowed:
• Disabled. The LCM does not run. This is perhaps most often used in cases where a third-party
management technology, like Chef, is actually running the show. Chef can use DSC resource
modules under the hood, but it doesn’t want the actual LCM stepping in and interfering.
• ApplyOnly. The LCM applies the current configuration, and then stops running until manually
run.
• ApplyAndMonitor. The LCM applies the current configuration, and re-checks it on the
ConfigurationModeFrequencyMins value. It does not attempt to fix the configuration, but
will report a status to a Reporting Server if so configured.
• ApplyAndAutoCorrect. The LCM applies the current configuration, and re-checks it on the
ConfigurationModeFrequencyMins value. It will attempt to fix out-of-compliance configuration
settings and, if configured, will report status to a Reporting Server.
ConfigurationModeFrequencyMins
This controls how often the LCM will re-evaluate its current configuration. It defaults to, and has a
minimum value of, 15 minutes. This is important, conceptually, because it means the LCM cannot
guarantee continuous compliance with your desired configuration. If someone changes something,
that change can “last” for 15+ minutes until the next consistency check. So in cases where the “desired
configuration” could mean the difference between literal life or death, or someone going to jail or
not, DSC might not be the right technology. In many cases, you could extend this to 24 hours or
more (calculated in minutes) for cases when you simply need a once-a-day check on configuration
compliance.
Credential
This appears to be obsolete, since the individual resource blocks (which we’ll get into in a bit) can
define this on a per-instance basis.
DebugMode
Whether or not the LCM’s debug mode is enabled. In WMF v5 this is displayed as a set of values (such as NONE) rather than a simple $True or $False.
DownloadManagerCustomData
This appears to be obsolete in v5.
DownloadManagerName
This appears to be obsolete in v5.
LCMCompatibleVersions
The LCM versions that this LCM is compatible with. For example, LCM 2.0, which ships in WMF
v5, is compatible with LCM 1.0, meaning it can accept the same configuration documents that the
older LCM accepted, in addition to newer, 2.0-only ones.
LCMState
This tells you what the LCM is currently up to:
• Idle
• Busy
• PendingReboot
• PendingConfiguration
LCMStateDetail
This is usually a more descriptive version of what the LCM is doing right then, like “LCM is applying
a new configuration.”
LCMVersion
The version of the LCM.
StatusRetentionTimeInDays
This controls how long the LCM will retain current status information. The default is 10 days.
PartialConfigurations
This doesn’t presently do anything.
RebootNodeIfNeeded
When set to $True, the LCM will automatically restart the target node if a DSC resource indicates
that a restart is required to finalize the configuration. This is set to $False by default, on the very
reasonable grounds that most people want a little more control over when their servers restart.
RefreshFrequencyMins
This controls how often the LCM will check the pull server for new MOFs. It defaults to 30 minutes,
and in WMF v4 must be set to an even multiple of the ConfigurationModeFrequencyMins value
(15 * 2 = 30, hence the default of 30). This value should always be larger than the
ConfigurationModeFrequencyMins value.
RefreshMode
Push or Pull. As easy as that. This defaults to Push, meaning that the LCM just sits and waits for
you to hit it up with a configuration.
ReportManagers
This is a collection of configured reporting servers.
ResourceModuleManagers
This is a collection of pull servers from which modules (DSC resources) can be downloaded.
To change any of these settings, you write and run a special kind of configuration script, such as this minimal example:

[DSCLocalConfigurationManager()]
configuration LCMConfig
{
    Node Node1
    {
        Settings
        {
            RefreshMode = 'Push'
        }
    }
}
In this case, the [DSCLocalConfigurationManager()] bit tells PowerShell that this is to be a meta-
MOF, which is generated somewhat differently than a normal configuration MOF. The name of
the configuration, LCMConfig, is arbitrary. You simply need to specify that name - usually at the
very bottom of the script file - to run the config and generate the MOF (it’s basically like running a
function). Within the Settings{} block is where you can put whatever settings you like from the list
above.
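For example (the node name and output path are placeholders), you'd typically run the configuration and then hand the resulting meta-MOF to the node's LCM:

# Generate the meta-MOF (produces .\LCMConfig\Node1.meta.mof)
LCMConfig -OutputPath .\LCMConfig

# Apply it to the target node's LCM
Set-DscLocalConfigurationManager -Path .\LCMConfig -ComputerName Node1 -Verbose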
It’s also possible to “inject” the meta-MOF into a node by copying it into the place where the LCM
looks for it (C:\Windows\System32\Configuration\MetaConfig.mof). For example, this is a great way
to help configure the LCM as part of an automated VM build, since you can use things like the
Hyper-V integration cmdlets to copy files into the VM.
NOTE: I should point out that there can be a bunch of caveats around “injecting” a MOF by
simply copying a file. I’ve found it’s nearly always better to push the meta-MOF to the LCM as
part of an automated build process, or just to copy the meta-MOF to an arbitrary local file on the
node and have the VM “push” that to itself. And keep in mind that you can also “sort of push”
a meta-MOF into a VM node, even if WS-MAN isn’t enabled on the node, by using integration
components like those provided by Hyper-V. If you want to read more on injection, though,
have a look at http://dille.name/blog/2014/12/07/injecting-powershell-dsc-meta-and-node-config-
urations/ and https://blogs.msdn.microsoft.com/powershell/2014/02/28/want-to-automatically-con-
figure-your-machines-using-dsc-at-initial-boot-up/, although those articles refer to WMF v4, so the
actual examples are slightly outdated.
[DSCLocalConfigurationManager()]
configuration LCMConfig
{
    Node localhost
    {
        Settings {
            RefreshMode = 'Pull'
        }

        ConfigurationRepositoryWeb MainPull {
            AllowUnsecureConnection = $false
            ConfigurationNames = @('Server2Configuration')
            RegistrationKey = '140a952b-b9d6-406b-b416-e0f759c9c0e4'
            ServerURL = 'https://mypullserver.company.pri/PSDSCPullServer.svc'
        }

        ConfigurationRepositoryShare FilePull {
            SourcePath = '\\organization\path\folder\'
        }
    }
}
The above example shows both a Web and a file share-based pull server configuration. You can mix
and match; note that the file share-based version also allows a Credential property, where you can
specify a credential to use to access the share (if you don’t, then the share must allow anonymous,
un-authenticated access). Frankly, file share-based pull servers are rare in my experience, and they
offer less functionality. I’ll assume you’re using a Web-based pull server, which means the pull server
is running IIS, and you’ve installed the DSC Pull Server feature on the server (we’ll cover pull server
setup in an upcoming chapter).
Also notice that each block has a name. This is arbitrary, and meaningful to me; DSC doesn’t care
what you name them. I’ve used “MainPull” and “FilePull.”
Web pull servers support these properties:
• AllowUnsecureConnection. This should always be $False, and is by default. This is a lot less
about ensuring the connection is encrypted (although it also means that), and more about
ensuring that the node is connecting to a known, trusted pull server. With this set to $True,
it becomes very easy to use man-in-the-middle attacks to inject malicious configurations into
your environment. When you wind up on CNN because you got p0wned by means of DSC,
just remember that I Told You So, and stop using unsecure connections.
• CertificateID. This is the GUID (thumbprint) of a local certificate, which the node will send to
the pull server to authenticate the node to the server. This is a great way to prevent unknown
nodes from even trying to contact your pull server. This is nothing fancier than good old IIS
certificate-based authentication.
• ConfigurationNames. When set, this means you should remove the ConfigurationID GUID
from the Settings{} block, switching the node over to requesting one or more MOFs by
filename. Notice this is specified as an array. When you use this, you must also specify the
RegistrationKey. You must also use a pull server that has WMF v5. Finally, note that while
ConfigurationNames accepts an array, it is illegal to specify more than one, unless you are
specifying partial configurations (covered in a bit).
• RegistrationKey. This is a GUID (use [guid]::newGuid() to generate it) that is configured in
advance on the pull server. It acts as a kind of password for the node to “log in” and retrieve
MOFs. The theory is that, when you just use a ConfigurationID (GUID), it’s really hard for
a bad actor to guess at your MOF filenames. But when you start using ConfigurationNames,
which are likely to be meaningful words and phrases, guessing filenames becomes easier. The
RegistrationKey makes it harder for a bad guy to download your MOFs and inspect them (which
would teach them a lot about your environment and facilitate an attack). Note that I’ve seen
some documentation which suggests that you can remove RegistrationKey once the node is
registered; in analyzing the code for the Linux LCM, and in speaking with some of the team,
that’s not true. The RegistrationKey is used every time the node runs a consistency check and
hits the Pull Server.
• ServerURL. The URL of the pull server, which should start with https:// unless you’re okay
being hacked. And yes, that means the IIS web site that the pull server is running under
must have an SSL certificate that is issued by a root Certificate Authority that the node trusts.
The URL must end in ‘PSDSCPullServer.svc’ if you’re using the Microsoft-supplied pull server
service.
• SignedItemType. New in WMF 5.1, this lets you tell the LCM to expect digital signatures
from files delivered from the pull server or pushed to the LCM. This can be “Configuration”,
“Module”, or both (e.g., “Configuration”,”Module”). See the chapter, “Security and DSC,” for
information on digital signing for these items.
• TrustedStorePath. New in WMF 5.1, this goes along with SignedItemType. This is optional; if
you don’t include it, the LCM will use its normal certificate trust procedures to verify signed
items. However, if you provide this (e.g., CERT:/LocalMachine/DSCStore), then the LCM will
use that path to retrieve trusted publishers. This is an excellent idea, because you can restrict
the CAs that are trusted to only those you control.
Notice that only SMB (file share) pull servers support a credential; Web pull servers do not. But
Web pull servers support configuration names, certificate authentication, encryption, and lots more.
They are Better In Every Way.
[DSCLocalConfigurationManager()]
configuration LCMConfig
{
    Node localhost
    {
        Settings {
            RefreshMode = 'Pull'
        }

        ConfigurationRepositoryWeb MainPull {
            AllowUnsecureConnection = $false
            ConfigurationNames = @('Server2Configuration')
            RegistrationKey = '140a952b-b9d6-406b-b416-e0f759c9c0e4'
            ServerURL = 'https://mypullserver.company.pri/PSDSCPullServer.svc'
        }

        ResourceRepositoryWeb ModuleSource {
            AllowUnsecureConnection = $false
            RegistrationKey = '140a952b-b9d6-406b-b416-e0f759c9c0e4'
            ServerURL = 'https://mypullserver.company.pri/PSDSCPullServer.svc'
        }

    }
}
Notice that the RegistrationKey and ServerURL are the same in both the ConfigurationRepository-
Web and ResourceRepositoryWeb blocks? I’m using the exact same pull server. I can also specify a
CertificateID setting, if the pull server requires client certificate authentication. You can also define
a ResourceRepositoryShare{} block, which would be for a file share-based pull server, and it accepts
Credential and SourcePath properties.
Again: it’s totally legal in WMF v5 to specify a ResourceRepositoryWeb{} block, completely omit
any other configurations, and leave the LCM in Push mode. When you Push a configuration to the
node, it’ll scan to see what resources are needed. If it’s missing any of those resources, the node will
connect to the defined server and attempt to download the ZIPped resource module. It’ll then unzip
it and proceed with the configuration.
[DSCLocalConfigurationManager()]
configuration LCMConfig
{
    Node localhost
    {
        Settings {
            RefreshMode = 'Pull'
        }

        ConfigurationRepositoryWeb MainPull {
            AllowUnsecureConnection = $false
            ConfigurationNames = @('Server2Configuration')
            RegistrationKey = '140a952b-b9d6-406b-b416-e0f759c9c0e4'
            ServerURL = 'https://mypullserver.company.pri/PSDSCPullServer.svc'
        }

        ResourceRepositoryWeb ModuleSource {
            AllowUnsecureConnection = $false
            RegistrationKey = '140a952b-b9d6-406b-b416-e0f759c9c0e4'
            ServerURL = 'https://mypullserver.company.pri/PSDSCPullServer.svc'
        }

        ReportServerWeb CheckInPoint {
            AllowUnsecureConnection = $false
            RegistrationKey = '140a952b-b9d6-406b-b416-e0f759c9c0e4'
            ServerURL = 'https://mypullserver.company.pri/PSDSCPullServer.svc'
        }

    }
}
You can also specify a CertificateID, if the server requires client certificate authentication. Notice
that the service endpoint - PSDSCPullServer.svc - is the same as it was for the regular pull server
function. That’s a change in WMF v5; previously, the “compliance server,” as it was then known, had
its own endpoint.
Partial Configurations
The last thing you might want to do with the LCM is to configure partial configurations. We’ve
outlined previously what these are (and why we’re not sure they’re a good idea).
It’s worth noting that partials can get really complex. What we’re showing here is an illustration;
we’re not sure it’s something you’d find in a real-world environment. We’re just trying to convey
the sense of the thing.
Here’s how you set them up:
[DSCLocalConfigurationManager()]
configuration LCMConfig
{
    Node localhost
    {
        Settings {
            RefreshMode = 'Pull'
        }

        ConfigurationRepositoryWeb MainPull {
            AllowUnsecureConnection = $false
            ConfigurationNames = @('PartialOne','PartialTwo')
            RegistrationKey = '140a952b-b9d6-406b-b416-e0f759c9c0e4'
            ServerURL = 'https://mypullserver.company.pri/PSDSCPullServer.svc'
        }

        ResourceRepositoryWeb ModuleSource {
            AllowUnsecureConnection = $false
            RegistrationKey = '140a952b-b9d6-406b-b416-e0f759c9c0e4'
            ServerURL = 'https://mypullserver.company.pri/PSDSCPullServer.svc'
        }

        ReportServerWeb CheckInPoint {
            AllowUnsecureConnection = $false
            RegistrationKey = '140a952b-b9d6-406b-b416-e0f759c9c0e4'
            ServerURL = 'https://mypullserver.company.pri/PSDSCPullServer.svc'
        }

        PartialConfiguration PartialOne {
            ConfigurationSource = @('[ConfigurationRepositoryWeb]MainPull')
            Description = 'PartialOne'
            RefreshMode = 'Pull'
            ResourceModuleSource = @('[ResourceRepositoryWeb]ModuleSource')
        }

        PartialConfiguration PartialTwo {
            ConfigurationSource = @('[ConfigurationRepositoryWeb]MainPull')
            DependsOn = '[PartialConfiguration]PartialOne'
            Description = 'PartialTwo'
            ExclusiveResources = @('OrderEntryAdmin','OrderEntryConfig')
            RefreshMode = 'Pull'
            ResourceModuleSource = @('[ResourceRepositoryWeb]ModuleSource')
        }

    }
}
• I’ve defined two partial configurations, which I’ve arbitrarily named PartialOne and PartialTwo.
If you go back and look at the pull server configuration, you’ll see that these are
now listed as the ConfigurationNames for the node. PartialOne.mof and PartialTwo.mof must
live on that pull server, along with their checksum files.
• Each partial configuration defines which pull server it’ll come from. You have to refer to a pull
server that you already defined - in this case, I’ve only defined MainPull, and so both partials
must come from there. Again, PartialOne.mof and PartialTwo.mof must live on that pull server,
along with their checksum files.
• PartialTwo won’t execute until PartialOne has finished - the DependsOn sees to that.
• PartialTwo is the only configuration that can contain settings using the OrderEntryAdmin and
OrderEntryConfig DSC resources. No other configuration may use those resources, due to the
ExclusiveResources setting.
• Both partials specify my ModuleSource definition as the place to find any missing DSC resource
modules.
• Both partials specify that the configuration be pulled. However, it’s legal for some partials to
expect to be pushed. Any partials set to Push mode will simply sit and wait for you to push the
MOF manually. They wouldn’t have a ConfigurationSource, in that case.
As you can see, these can become insanely complicated and hard to keep track of. In addition, as I’ve
pointed out before, the actual partial configuration MOFs must be carefully - manually! - evaluated
to ensure they don’t contain conflicting keys or other problems. The LCM will download all the
MOFs, combine them, and then scan them for problems. Any problems will result in the entire
configuration - all partials! - being abandoned.
Note that you can also configure a partial configuration to be pushed, making it possible to push
some, and pull some, of the total configuration. When a partial is configured for push, the node
will always use the most recently pushed one when it combines all of its partials into the total
configuration.
1 [DSCLocalConfigurationManager()]
2 configuration PartialConfigDemo
3 {
4 Node localhost
5 {
6
7 PartialConfiguration PushPart {
8 Description = 'Configuration being pushed'
9 RefreshMode = 'Push'
10 }
11 }
12 }
That’s how the LCM would be configured to understand that one of its partial configurations will be
pushed to it. The configuration you push - e.g., the filename - must match. So, in this case, I’d have
to push pushpart.mof. I’d use Publish-DscConfiguration to push the file to the node, and then run
Update-DscConfiguration to let any additional partials (obviously not shown in the above snippet)
be pulled, and for the consistency check to begin.
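Just as a sketch - assuming the target node is SERVER1 and the compiled pushpart.mof sits in a .\PushPart folder on my authoring machine - that push-then-pull sequence looks something like this:
1 Publish-DscConfiguration -Path .\PushPart -ComputerName SERVER1 -Verbose
2 Update-DscConfiguration -ComputerName SERVER1 -Wait -Verbose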
And actually… that’s an oversimplification, the file naming bit. Assuming the node is set to use
ConfigurationNames, the filename above would be pushpart.mof. But, if you’d set the LCM to use
a ConfigurationID (that is, a GUID), you’d name it pushpart.GUID.mof, inserting the GUID in the
filename.
You can also sequence your partial configurations. Check this out:
1 [DSCLocalConfigurationManager()]
2 configuration PartialConfigDemo
3 {
4 Node localhost
5 {
6
7 ConfigurationRepositoryWeb MyPull {
8 ServerURL = 'https://whatever:8080/PSDSCPullServer.svc'
9 }
10
11 PartialConfiguration PushPart1 {
12 Description = 'Configuration being pushed'
13 RefreshMode = 'Push'
14 }
15
16 PartialConfiguration PushPart2 {
17 Description = 'Configuration being pulled'
18 RefreshMode = 'Pull'
19 ConfigurationSource = '[ConfigurationRepositoryWeb]MyPull'
20 DependsOn = '[PartialConfiguration]PushPart1'
21 }
22 }
23 }
I’m still omitting the Settings{} block in this snippet, but you can see that PushPart2 won’t be applied
until after PushPart1.
A Word of Caution
We’ll be using the xPSDesiredStateConfiguration module to create a pull server. Thing is, some
versions of that module have been epically broken, especially when running on Server Core.
Microsoft resolved the problems, but this highlights the absolute necessity of checking PowerShell
Gallery for the latest bits, before you try and do anything. And, if you run into problems, hop into
that module’s repo on GitHub.com/powershell and see if anyone’s reported a relevant bug against
that version. If not… file a bug yourself.
⁴https://blogs.msdn.microsoft.com/powershell/2018/04/19/windows-pull-server-planning-update-april-2018/
1 Install-Module xPSDesiredStateConfiguration
1 configuration GoGoDscPullServer
2 {
3 param
4 (
5 [string[]]$NodeName = 'SERVER1',
6
7 [ValidateNotNullOrEmpty()]
8 [string] $certificateThumbPrint,
9
10 [Parameter(Mandatory)]
11 [ValidateNotNullOrEmpty()]
12 [string] $RegistrationKey
13 )
14
15
16 Import-DSCResource -ModuleName xPSDesiredStateConfiguration
17
18 Node $NodeName
19 {
20 WindowsFeature DSCServiceFeature
21 {
22 Ensure = 'Present'
23 Name = 'DSC-Service'
24 }
25
26 xDscWebService PSDSCPullServer
27 {
28 Ensure = 'Present'
29 EndpointName = 'PSDSCPullServer'
30 Port = 8080
31 PhysicalPath = "$env:SystemDrive\inetpub\PSDSCPullServer"
32 CertificateThumbPrint = $certificateThumbPrint
33 ModulePath = "$env:PROGRAMFILES\WindowsPowerShell\DscServi\
34 ce\Modules"
35 ConfigurationPath = "$env:PROGRAMFILES\WindowsPowerShell\DscServi\
36 ce\Configuration"
37 State = 'Started'
38 DependsOn = '[WindowsFeature]DSCServiceFeature'
39 UseSecurityBestPractices = $True
40 }
41
42 File RegistrationKeyFile
43 {
44 Ensure = 'Present'
45 Type = 'File'
46 DestinationPath = "$env:ProgramFiles\WindowsPowerShell\DscService\Regist\
47 rationKeys.txt"
48 Contents = $RegistrationKey
49 }
50 }
51 }
52
53 GoGoDscPullServer -cert paste_ssl_thumbprint_here -reg paste_reg_key_here
At the very bottom is the line of code that runs the configuration. You’ll notice I’m passing in two
parameters: the certificate thumbprint of the SSL certificate, and the registration key GUID you
generated earlier. Let’s walk through some of what’s hardcoded in the configuration.
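If you need to grab those two values, something like this works - it’s only a sketch, and the certificate subject filter is an assumption you’d adjust to match your own SSL certificate:
1 $thumb = (Get-ChildItem Cert:\LocalMachine\My |
2 Where-Object Subject -like '*mypullserver*').Thumbprint
3 $regKey = (New-Guid).ToString()
4 GoGoDscPullServer -certificateThumbPrint $thumb -RegistrationKey $regKey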
SPECIAL NOTE: There can be times when it’s helpful to have the pull server running in HTTP,
and that’s when you’re trying to troubleshoot it and need to do a packet capture using a tool like
Wireshark. Specify a CertificateThumbprint of “AllowUnencryptedTraffic” to do that. DO NOT do that
anywhere but in a walled-off lab environment, please! And, don’t forget that the LCM on your nodes
must be configured to allow unsecure communications, and you’ll need to change the download
manager URLs in the LCM to use http:// instead of https://. This is how I created some of the packet
captures in the Troubleshooting chapter.
To begin with, there are several paths:
1 PhysicalPath = "$env:SystemDrive\inetpub\PSDSCPullServer"
2 ModulePath = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules"
3 ConfigurationPath = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration"
PhysicalPath is where the pull server endpoint will be installed. ModulePath is where you’ll drop
your ZIPped-up DSC resource modules so that nodes can download them. ConfigurationPath is
where you’ll drop all your MOF files.
You can also specify the port on which the pull server will run. Notice that 8080 is nonstandard (it
isn’t 80 or 443); you can use whatever you like.
1 Port = 8080
You also can specify the location of the Registration Keys file. You’ll notice in the example that
it’s being pre-populated with the Registration Key you generated previously; the file can actually
contain as many keys as you want.
1 DestinationPath = "$env:ProgramFiles\WindowsPowerShell\DscService\RegistrationKeys.t\
2 xt"
There’s some argument in the community about how to handle these. Some folks are comfortable
creating one key and letting all their nodes use it. Others want to generate a unique key for each node.
I think the latter is a lot of work and maintenance waiting to happen, and in a large environment I
have some concerns about the overhead of a large key file. It’s not presently clear what the potential
risks could be in using one, or a small number of, registration keys.
There’s one more required setting for the pull server configuration: UseSecurityBestPractices. This is set to either $true or $false. When set to $True, it will disable all
insecure protocols (at this time, all SSL protocols and TLS 1.0). We’ve seen some people have
problems with the pull server when configured this way (specifically, some nodes not being
able to register), requiring them also to add DisableSecurityBestPractices=’SecureTLSProtocols’
to disable TLS (which does not disable SSL). Note that these may not affect an existing pull
server; if you’re having problems and need to reconfigure, you may need to remove an ex-
isting pull server entirely and rebuild it. Or, go into the registry key affected by the setting
and remove any keys containing “TLS” in them. Essentially, UseSecurityBestPractices sets that
registry key, HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL, to en-
force stronger encryption - but it can negatively affect older clients that don’t support those.
https://support.microsoft.com/en-us/kb/245030 and
https://technet.microsoft.com/en-us/library/dn786418(v=ws.11).aspx contain more detail.
This’ll take a while to run; you may want to add -Verbose and -Wait to the command to watch it all
happen. If IIS isn’t installed on the server, it’ll get installed (as a pre-requisite of the DSC pull server
feature), and that’s what can take several minutes.
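In other words, the push itself would look something like this - a sketch, assuming the MOF landed in the default .\GoGoDscPullServer output folder:
1 Start-DscConfiguration -Path .\GoGoDscPullServer -Wait -Verbose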
And I’m deadly serious about the SSL thing. There’s an upcoming chapter on DSC security where
I’ll dig into more specifics on why SSL is so important, and even make a case for client certificate
authentication on the pull server.
Life Choices
By default, Pull Server (in v5 and later) uses an EDB database. That’s the database architecture used
by Exchange Server and Active Directory, and it was the only option supported on early versions of
Nano Server (when we all thought Nano Server would be a standalone install option one day). EDB
databases require maintenance, meaning that if you don’t clear the logs or set them to auto-cycle,
it’ll fill up the disk. Yikes. Not a wholly popular decision.
It can also be configured to use the “other” JET database format, MDB, which is what you know and
love from Microsoft Access. The less said about that the better, although at least it doesn’t fill up
your disk.
More recently (April 2018, in Windows Server build 17090), they added the ability to instead use
a SQL Server, which is a great option that we recommend heartily. You can use SQL Express,
or an existing SQL Server installation on your network. When you create your xDscWebService
configuration items, add two keys:
1 SqlProvider = $true
2 SqlConnectionString = "Provider=SQLNCLI11;Data Source=(local)\SQLExpress;User ID=SA;\
3 Password=Password12!;Initial Catalog=master;"
That connection string obviously needs to be valid in your environment; this example is using a local
SQL Express instance, and logging in with the horrible, you-should-never-do-this “SA” account. We
strongly recommend creating a dedicated account for the pull server to log into SQL Server with or,
best yet, getting Windows Integrated authentication to work. But that’s a SQL Server issue, not a
PowerShell one, so we’ll leave it to you and your DBA.
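As a sketch of that better approach - the SQL01 server name and DSCDB database name here are assumptions - an integrated-authentication connection string looks more like this:
1 SqlProvider = $true
2 SqlConnectionString = "Provider=SQLNCLI11;Data Source=SQL01;Initial Catalog=DSCDB;Integrated Security=SSPI;"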
Opting Out of the Pull Server
Approach
The way we’ve been going on, you may assume that a pull server is a mandatory part of the DSC
infrastructure. It isn’t. A pull server is useful when - and only when - you plan to create updated
configuration MOFs from time to time, especially if that’s frequently, and you want nodes to grab
them automagically and implement them.
We tend to assume that this is the “best” way because we use Group Policy that way. But for server
workloads in particular, you may decide you don’t even want a pull server at all. For example, DSC
currently doesn’t support anything like maintenance windows or scheduled downtime, so constantly
pulling and implementing configurations might be disruptive in your environment.
Instead, you could choose simply to push new configurations whenever you want. And this isn’t
necessarily a manual procedure, either - keep in mind, DSC is a platform, not a solution that includes
ready-made tools. We’re aware of at least one organization that has constructed its own (sadly
proprietary) “push server.” That server is configured with maintenance window information and a
database of configurations. When a configuration changes, the server waits until a window is open,
and then pushes the new configuration to the node. It has a lot of error handling and alerting so that
if a push attempt fails, everyone’s informed.
The point of this short chapter is to emphasize that DSC is what you need it to be, provided you’re
willing (A) to be creative, and (B) to implement your own solutions on top of DSC, as needed.
Part 3: Testing the
Infrastructure
In this Part, we’ll walk through a couple of very basic scenarios. We’ll assume that you have at least a
client computer and a server, and that they’re both running PowerShell v5. We’ll set up a very basic
configuration MOF and push it to the server. In doing so, that’ll create a very basic (and not secure
- so do this in an isolated lab) pull server, which we’ll then use to test that functionality quickly.
It’s worth noting that we assume you have a Windows 10 client, and a Windows Server 2016 server,
available. This should also work with Windows Server 2012 R2, and with older clients running
PowerShell v5 or later, but we haven’t specifically tested it. We do not cover setting up a complete
lab environment; if you’re playing with DSC, you should already be comfortable with that.
Testing Push Mode
As we described in the Part opener, in this chapter we will show you how to:
• Create a very basic configuration MOF that sets up a very basic, not-secure Pull server (securing
it will require certificates, which we haven’t gotten to, yet), and
• Push the configuration to a server
Whether you follow along on your own computer or not is entirely optional.
First, you’ll need the xPSDesiredStateConfiguration module from the PowerShell Gallery on your authoring machine - find it, and then install it:
1 Find-Module xPSDesiredStateConfiguration
1 Install-Module xPSDesiredStateConfiguration
Second, this simple example is setting up a non-secure - that is, HTTP instead of HTTPS - pull server.
This is not a recommended configuration for any production infrastructure setup - or for any lab
that tests anything that might one day go into production. This configuration is provided solely as an
example for learning purposes. Remember, you do not want your compromised pull server to be the
reason your company ends up on the evening news. If you decide to follow along with this example,
do so in an isolated, disconnected lab environment. Seriously, OK?
The configuration to set up a simple pull server is (and please, remember our earlier note about
wacky code formatting - we’ve included this in the GitHub sample code repo for your convenience):
1 Configuration PullServer {
2
3 Import-DscResource -ModuleName xPSDesiredStateConfiguration `
4 -ModuleVersion 5.1.0.0
5
6 Node Pull {
7
8 WindowsFeature DSCService {
9 Name = "DSC-Service"
10 Ensure = 'Present'
11 }
12
13 xDSCWebService Pullserver {
14 Ensure = 'Present'
15 EndpointName = 'PSDSCPullServer'
16 Port = '8080'
17 PhysicalPath = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer"
18 ModulePath = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules"
19 ConfigurationPath = `
20 "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration"
21 State = "Started"
22 DependsOn = "[WindowsFeature]DSCService"
23 UseSecurityBestPractices = $false
24 CertificateThumbprint = "AllowUnencryptedTraffic"
25 }
26
27 File RegistrationKey {
28 Ensure = 'Present'
29 DestinationPath = `
30 "$env:PROGRAMFILES\WindowsPowershell\DscService\registrationKeys.txt"
31 Contents = '0f9ae841-785d-4a2d-8cdf-ecae01f44cdb'
32 Type = 'File'
33 }
34 }
35 }
36 PullServer
In this example, the configuration name is PullServer, and the node’s name is Pull (obviously,
you can’t run this as-is unless that’s how you’ve named the server upon which you plan to inflict
this configuration). It will install the DSC-Service Windows Feature, which includes IIS and other
bits required for the Pull Server to work. After that, it uses the xDSCWebService resource, which
is located in the xPSDesiredStateConfiguration module that we downloaded, to configure the pull
server endpoint. It will also create a file, registrationKeys.txt. This file can contain a single GUID or
multiple GUIDs. The GUIDs are the shared secrets that will allow nodes to register with the pull
server.
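Running the script compiles Pull.mof into a .\PullServer folder; pushing it is then a single command (a sketch):
1 Start-DscConfiguration -Path .\PullServer -Wait -Verbose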
A couple of additional notes about this configuration:
If your authoring machine is separate from your pull server, you can deploy the configuration from
the authoring machine to the pull server remotely by adding the -ComputerName Pull parameter to
the above command.
This will configure IIS, firewall rules, and the PSDSCPullServer endpoint. Once the configuration
has completed successfully, you can verify it from the client computer - you don’t even need to log
on to the server! Assuming the client computer has basic network connectivity to the pull server
(and again assuming you named it “pull” like we did, and that your network knows how to resolve
“pull” to an IP address), launch a browser and navigate to the pull server URL:
1 http://pull:8080/PSDSCPullServer.svc
If the endpoint is reachable and functioning, you will receive an XML document in response.
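If you’d rather check from the console than a browser, a quick sketch:
1 Invoke-WebRequest -Uri 'http://pull:8080/PSDSCPullServer.svc' -UseBasicParsing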
Testing Pull Mode
This chapter picks up from the previous one. Here, we will:
This section assumes that you are authoring your configurations from a machine that is neither the
pull server nor the target node.
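The example that follows relies on the xTimeZone module from the PowerShell Gallery, so it needs to be installed on the authoring machine first - a sketch, pinned to the version used later in this chapter:
1 Install-Module xTimeZone -RequiredVersion '1.6.0.0'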
Once the module is installed, you can see what resources it contains - and the syntax for each - by
using Get-DscResource.
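For example - just a sketch:
1 Get-DscResource -Module xTimeZone
2 Get-DscResource -Name xTimeZone -Syntax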
The output of this command shows that there is one resource in this module (xTimeZone), and it
also shows you the valid properties for this resource and the syntax for using the resource in the
configuration.
You may need to do additional investigation in the resource’s documentation to find out exactly
what needs to be set. For example:
To obtain further information, check out the module itself, typically located in %systemdrive%\Program Files\WindowsPowerShell\Modules. In the xTimeZone\1.6.0.0 folder, there is a readme.md file and
upon reading it, you discover that not only is there information about how to find the valid strings
for timezone values, there is also a sample configuration included. Not all downloaded resources are
this helpful, so your mileage may vary.
Continuing with the example, Paris seems like a great vacation destination, so we’re going to
configure the client computer to be in the timezone for Paris (Central European Time). Using the
command provided in the readme, we find a valid string (Central European Standard Time) to use
in the configuration.
1 Configuration ParisTZ {
2
3 Import-DscResource -ModuleName "xTimeZone" -ModuleVersion "1.6.0.0"
4
5 Node TimeZoneConfig {
6
7 xTimeZone Paris {
8 IsSingleInstance = 'Yes'
9 TimeZone = 'Central European Standard Time'
10 }
11 }
12 }
13
14 ParisTZ
Notice that the node definition uses “TimeZoneConfig” as the node “name”, and not the actual
node name CLI1. This is because you will use configuration names on the LCM definition, and the
configuration name for this config will be “TimeZoneConfig”.
If we refer to the Pull Server configuration for a moment, we note that we specified the Configuration
Path (where the configurations go) and Module Path (where the modules go) in the pull server
configuration:
1 ModulePath = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules"
2 ConfigurationPath = `
3 "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration"
You will need these paths to know where to deploy the MOFs and modules on the pull server. You’ll
also need the proper rights on the pull server in order to copy the files to it. For this example, because
we’re using the administrative share (C$) to connect, that means you need to have administrative
rights. This is not a best practice, and is only for use in this isolated lab. Configurations provide
a blueprint for your entire environment, so access to the pull server and its configurations and
modules should be locked down properly using a least-privilege security model.
Copy the MOF/checksum files to the Configuration folder.
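As a sketch - assuming the ParisTZ script above dropped TimeZoneConfig.mof into a .\ParisTZ folder - that’s two commands:
1 New-DscChecksum -Path .\ParisTZ\TimeZoneConfig.mof
2 Copy-Item -Path .\ParisTZ\TimeZoneConfig.mof* `
3 -Destination "\\pull\C$\Program Files\WindowsPowerShell\DscService\Configuration"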
Then zip up the xTimeZone module and copy it to the pull server (all using the compress-archive
command below) and create a checksum file for the zipped module. The naming convention for the
zipped module on the pull server MUST be ModuleName_VersionNumber.zip.
1 Compress-Archive -Path `
2 "$Env:ProgramFiles\WindowsPowershell\Modules\xTimezone\1.6.0.0\*" `
3 -DestinationPath `
4 "\\pull\C$\Program Files\WindowsPowershell\DscService\Modules\xTimeZone_1.6.0.0.zip" `
5 -Force
6
7 New-DscChecksum -Path `
8 "\\pull\C$\Program Files\WindowsPowershell\DscService\Modules\xTimeZone_1.6.0.0.zip"
Creating a Meta-Configuration
In order to configure the client’s LCM to get its configuration from a pull server, we’ll revisit a few
settings from the Configuring the LCM chapter.
1 [DSCLocalConfigurationManager()]
2
3 Configuration LCM_Pull {
4
5 Node CLI1 {
6
7 Settings {
8 ConfigurationMode = 'ApplyAndAutoCorrect'
9 RefreshMode = 'Pull'
10 }
11
12 ConfigurationRepositoryWeb PullServer {
13 ServerURL = 'http://pull:8080/PsDscPullserver.svc'
14 AllowUnsecureConnection = $True
15 RegistrationKey = '0f9ae841-785d-4a2d-8cdf-ecae01f44cdb'
16 ConfigurationNames = @('TimeZoneConfig')
17 }
18
19 ResourceRepositoryWeb PullServerModules {
20 ServerURL = 'http://pull:8080/PsDscPullserver.svc'
21 AllowUnsecureConnection = $True
22 RegistrationKey = '0f9ae841-785d-4a2d-8cdf-ecae01f44cdb'
23 }
24 }
25 }
26
27 LCM_Pull
We’re using the AllowUnsecureConnection = $True setting since the pull server is only configured
for HTTP. The RegistrationKey setting must match a registration key that exists in the Registra-
tionKeys.txt file on the pull server. This is the same GUID that we used when creating the pull server.
This setting is checked at registration time to ensure that the node is authorized to connect to the
pull server, and later on subsequent communication with the pull server to retrieve configurations
and modules.
As with all previous examples, execute this configuration in order to create Cli1.meta.mof in the
LCM_Pull subdirectory.
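Then apply the meta-configuration and trigger an immediate pull - a sketch, run from the authoring machine:
1 Set-DscLocalConfigurationManager -Path .\LCM_Pull -ComputerName CLI1 -Verbose
2 Update-DscConfiguration -ComputerName CLI1 -Wait -Verbose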
This will apply the configuration, and if all the above instructions were completed successfully, the
configuration will deploy successfully, and your client node will be in the Central European Standard
Time timezone.
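You can verify that with Get-DscConfigurationStatus - a sketch, querying the node remotely:
1 Get-DscConfigurationStatus -CimSession CLI1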
This command will show the current status of the configuration (success if it was successful),
along with the Refresh Mode (Pull or Push) and type of configuration update (Initial, Consistency,
LocalConfigurationManager).
Part 4: Authoring
Configurations
Configurations wind up being the biggest unit of work in DSC, meaning they’re where you’ll spend
most of your time. We’ll be spending a lot of time, in this book, covering them and the various ways
in which you can author and deliver them. Let’s get started.
Basic Configuration Authoring
In this chapter, we’ll start digging into configuration authoring. There’ll be a more advanced chapter
on this later in the book, so we’ll stick with the basics for now, and then build on those as we go.
If you’re looking to try some of this stuff on your own, it’s not hard to set up a lab environment. I’m
using a Windows 10 virtual machine, running the latest version of Windows PowerShell (and I can’t
stress enough how important it is to make sure you’ve updated to the latest and greatest). I’m also
running a Windows Server 2012 R2 server, also with the latest version of PowerShell. At the time of
this chapter’s writing, that’s PowerShell v5.1 (which you can check by running $PSVersionTable in
a console window). Neither machine belongs to a domain, and for this chapter I’m not even going
to be using the server. I’ll author, compile, and run my configurations entirely locally on the Win10
computer.
1 configuration MyConfig {
2 }
While it’s technically possible to include multiple configuration blocks in a single .ps1 file, I avoid
doing so. For one, Azure DSC doesn’t like it. For another, I like to have my script filenames match
the configuration names - so the above would be in MyConfig.ps1.
Adding Nodes
The next part of the configuration identifies the node or nodes that you plan to create MOFs for. At
the simplest level, it can look like this:
1 configuration MyConfig {
2 node SERVER2 {
3 }
4 }
When I run MyConfig, I’ll get SERVER2.mof as the output. Of course, this isn’t the most efficient
way to create configurations, because you’ll often have multiple computers that need the same
configuration settings.
1 configuration MyConfig {
2 param(
3 [string[]]$ComputerName
4 )
5
6 node $ComputerName {
7 }
8 }
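Run that and pass in one or more names, and you’ll get one MOF per name - for example:
1 MyConfig -ComputerName 'SERVER2','SERVER3'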
Adding Settings
Within the Node section, you identify the configuration settings that you want to go into the MOF.
Let’s start with something really simple:
1 configuration MyConfig {
2 param(
3 [string[]]$ComputerName
4 )
5
6 node $ComputerName {
7
8 WindowsFeature Backups {
9 Ensure = 'Present'
10 Name = 'Windows-Backup'
11 }
12
13 }
14 }
• I’ve specified a configuration setting that I’ve named Backups. This is an arbitrary name,
meaning it’s only going to have meaning to me. I could have named it Fred, or Rock, or
whatever.
• The setting uses the WindowsFeature DSC resource. This happens to be a native resource on
Windows Server computers that have WMF 4 or later installed, so I won’t need to worry about
deploying the resource.
• Within the setting, I’ve specified two properties. The properties of a setting will vary based on
the DSC resource it uses. This particular DSC resource offers Ensure and Name as properties;
other resources will be different.
• I’ve specified values - Present and Windows-Backup - for my properties. Again, the valid
values will vary depending on the DSC resource that you’re using. I know that, for the
WindowsFeature resource, Ensure will either be Absent or Present, and I know that Name
wants the feature name as shown by running Get-WindowsFeature. The chapter on “Finding
and Using Resources” will explain how I found that out.
At the most basic level, this is all a configuration is: a bunch of settings, in a Node block, in a
Configuration block.
1 configuration MyConfig {
2 param(
3 [string[]]$ComputerName
4 )
5
6 node $ComputerName {
7
8 $features = @('windows-backup','dns-server','dhcp-server')
9 ForEach ($feature in $features) {
10 WindowsFeature $feature {
11 Ensure = 'Present'
12 Name = $feature
13 }
14 }
15 }
16 }
When this is run, it’ll produce a MOF with three Windows features. Notice that I’ve used $feature
for the name of the setting as well as using it to identify the feature name. That’s because the name
of every setting must be unique within the configuration. This probably isn’t an incredibly practical
example, but it does show you how you can use logic in a configuration.
But here’s the important bit: This logic only executes when the configuration is run on your
authoring machine. The final MOF won’t contain the ForEach loop; the MOF will contain three
static settings. MOFs aren’t scripts - they don’t contain logic. MOFs don’t ask questions and make decisions, for the most part; they just do what they contain. The big exception is the Script resource, which lets you embed code that runs on the target node:
1 configuration MyConfig {
2 param(
3 [string[]]$ComputerName
4 )
5
6 node $ComputerName {
7
8 Script MyScript {
9 GetScript {}
10 TestScript {}
11 SetScript {}
12 }
13
14 }
15 }
Within the TestScript script block, you can put whatever you want, provided it returns $True or
$False as its only output. If it returns $False, then the target node’s LCM will run whatever’s in
the SetScript script block. The GetScript script block should return a hash table that describes the
current configuration. GetScript can either return an empty hashtable, or a hashtable with a single
key-value pair. The key must be called “Result”, and the value must be a string that represents the
state of the configuration.
The fun thing here is that these script blocks don’t run when you run the configuration and create
MOFs. Instead, the PowerShell code in those three script blocks goes right into the MOF as-is. When
the MOF is applied to a target node, it’s the LCM on the target node that will run these. So this is a
sort of way to make “dynamic” MOFs. Here’s a quick example:
1 configuration MyConfig {
2 param(
3 [string[]]$ComputerName
4 )
5
6 node $ComputerName {
7
8 Script MyScript {
9 GetScript { write @{} }
10 TestScript { Test-Path 'c:\Program Files\Example Folder' }
11 SetScript { Mkdir 'c:\Program Files\Example Folder' }
12 }
13
14 }
15 }
My GetScript returns an empty hash table; that’s not uncommon. My TestScript will return $True
if a folder exists, and $False if it doesn’t (because that’s what Test-Path itself returns). If the folder
doesn’t exist, my SetScript block creates the folder. Again, a very simple example, but shows you
the idea.
However…
You need to be a little careful with Script resources. They can be harder to maintain and, if they’re
complex, really difficult to troubleshoot. For all but the simplest purposes, I usually prefer creating
my own custom DSC resource. Honestly, doing so requires like four extra lines of code on top of
just writing a Script resource, and you end up with something that’s easier to maintain and debug.
We’ll obviously be covering that in an upcoming chapter.
Documenting Dependencies
I’ve touched on this before, but you can help the LCM understand the order in which settings must
be applied by using the DependsOn property. This is a common property, available for all DSC
resources.
1 configuration MyConfig {
2 param(
3 [string[]]$ComputerName
4 )
5
6 node $ComputerName {
7
8 WindowsFeature Backups {
9 Ensure = 'Present'
10 Name = 'Windows-Backup'
11 }
12
13 WindowsFeature DHCP {
14 Ensure = 'Present'
15 Name = 'DHCP-Server'
16 }
17
18 CustomDHCPLoader LoadDHCP {
19 SourceLocation = '\\companyroot\admin\dhcp\sources.xml'
20 DependsOn = @('[WindowsFeature]DHCP','[WindowsFeature]Backups')
21 }
22
23 }
24 }
In this example, the CustomDHCPLoader setting (which is something I made up; it’s not native to
Windows) won’t run until after the two WindowsFeature settings are confirmed to be correct.
But in addition to any parameters you define, you also get a handful of free parameters that you can
use:
• -InstanceName isn’t something you’ll probably ever use. It lets you override the ID that DSC uses to manage composite configurations, and it’s honestly best not to mess with it.
• -OutputPath accepts a folder path in which your MOF files will be created. By default,
PowerShell will create a folder named after your configuration (/MyConfig, in the example
I’ve used), and put the MOFs in there.
• -ConfigurationData accepts a hash table of configuration data, which you can use to drive
logic decisions within the configuration. This is a big topic, and there’s a whole chapter on it
later.
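So a fuller run of the earlier example might look something like this (a sketch; the output folder is an arbitrary choice):
1 MyConfig -ComputerName 'SERVER2','SERVER3' -OutputPath 'C:\MOFs\MyConfig'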
Wrapping Up
This should give you a basic idea of what configurations look like. You can do a lot more with them,
and we’ll have a whole chapter on more advanced options later in the book. For now, I just wanted
to set the stage for this portion of DSC, so that we can cover some of the other major players.
Going Further with Configurations
To this point, we’ve basically treated configurations and nodes as a 1:1 thing. One configuration
script gets you one MOF, which goes to one node. For some environments, that’ll be totally fine. For
others, you’ll want a bit more modularization and re-use. After all, many nodes will be similar, so
why copy-and-paste all that script code?
Understanding ConfigurationData
One of the first ways that you can have a single configuration script produce multiple, unique MOFs,
is to supply the script with configuration data. This is something that lives apart from the script itself,
and is fed to the script when the script is run. So, just to set the stage broadly, a script file might look
like this:
1 configuration MyMasterConfig {
2 # a bunch of stuff goes here
3 }
4
5 $configData = @{}
6
7 MyMasterConfig -ConfigurationData $configData
I’ve created a configuration named MyMasterConfig. Separately - and this could have been in a
totally different file that I dot-sourced, for example - I created a $configData hash table. I’m not
showing you the structure of that yet, but I will in a bit. Then, I run the config, and pass in the hash
table using a built-in parameter. Now, my configuration data is available “inside” MyMasterConfig.
Let’s start digging into the actual details.
1 $MyData =
2 @{
3 AllNodes =
4 @(
5 @{
6 NodeName = 'NODE1'
7 Role = 'WebServer'
8 },
9
10 @{
11 NodeName = 'SERVER2'
12 Role = 'WordPress'
13 },
14
15 @{
16 NodeName = 'MACHINE3'
17 Role = 'WebServer'
18 }
19 );
20 NonNodeData = ''
21 }
Here, I’ve defined three nodes, named NODE1, SERVER2, and MACHINE3. For each, I’ve defined a
“Role” property. That’s a property I made up; DSC doesn’t care what other properties I provide.
I’ve specified a value for each Role, and again, this is data that’s only meaningful to me. This
configuration data isn’t “connected” to any configuration script at this point. In order to create
that connection, I would run the configuration, and use its built-in -ConfigurationData parameter
to pass in $MyData. You do not need to define -ConfigurationData in a Param() block; it’s a magic
parameter that is automatically available for all configuration scripts.
• $AllNodes automatically gets you the AllNodes = @() portion of your ConfigurationData block.
In the next section, I’ll share some of its super powers. One of those powers is the ability to
filter down the nodes - for example, only grabbing nodes that have a certain property set to a
certain value.
• Once you’ve filtered the $AllNodes collection, you can use $Node to refer to just the current
node.
• $ConfigurationData represents the entire block of data.
So let’s see a bit of that in action. Remember that, in a configuration script, you start out with a Node
block. That’s where you can use $AllNodes, and it does a pretty impressive bit of magic. Pretend
that the above three-node data block has been defined in the same script as this example:
1 configuration AllMyServers {
2 Node $AllNodes.Where({$_.Role -eq "WebServer"}).NodeName {
3 }
4 }
5
6 AllMyServers -ConfigurationData $MyData
Microsoft uses this example a lot, but rarely explains the magic (there are also a lot of older examples
that have the wrong syntax). The Node{} construct wants you to provide a list of node names.
That’s it - just computer names, and no other data. So we’re taking $AllNodes, which contains
everything. We’re using its special Where method to grab only those nodes whose Role property is
set to WebServer. Remember that I made up the Role property, right? So I really could have filtered
on any property that I’d put into the data block. Having done that filtering, I’m left with two nodes.
Each node has a NodeName and a Role property - but the Node{} block here in the script only wants
the names. So that’s why the “.NodeName” is there - it takes the filtered nodes and grabs just the
contents of the NodeName property.
Now let’s add a bit more:
1 configuration AllMyServers {
2 Import-DscResource -ModuleName xWebAdministration
3
4 Node $AllNodes.Where({$_.Role -eq "WebServer"}).NodeName {
5 xWebSite TheWeb {
6 Name = $Node.NodeName
7 PhysicalPath = 'c:\inetpub\wwwroot'
8 Ensure = 'Present'
9 }
10 }
11 }
12
13 AllMyServers -ConfigurationData $MyData
Within the Node{} construct, PowerShell will automatically run the contents once for each node
you’ve given it. It’s kind of like an invisible ForEach wrapped around the whole thing. Inside the
Node{} construct, I can use $Node to refer to “the current node.” So on the first pass, $Node will refer
to NODE1. On the second pass, it’ll refer to MACHINE3. Go back and look at my data block if you
need to remind yourself why.
You can see that this is going to produce two MOF files, one named NODE1.mof and another
named MACHINE3.mof, both (by default) in a folder named /AllMyServers. Each MOF will define
a website, which will have the same name as the server itself. That’s actually not realistic, so let’s
revisit my data block and add some stuff to it:
1 $MyData =
2 @{
3 AllNodes =
4 @(
5 @{
6 NodeName = 'NODE1'
7 Role = 'WebServer'
8 Site = 'CustomApp'
9 SitePath = 'c:\inetpub\approot'
10 },
11
12 @{
13 NodeName = 'SERVER2'
14 Role = 'WordPress'
15 },
16
17 @{
18 NodeName = 'MACHINE3'
19 Role = 'WebServer'
20 Site = 'App2'
21 SitePath = 'c:\inetpub\app2root'
22 }
23 );
24 NonNodeData = ''
25 }
Notice that I’ve added Site and SitePath properties only to the nodes that have a WebServer value for
Role. That’s totally legal - you do not need to provide consistent properties for each node. You simply
have to make sure you’re providing whatever properties your configuration script is expecting. So
now I can use those two new properties:
1 configuration AllMyServers {
2 Node $AllNodes.Where({$_.Role -eq "WebServer"}).NodeName {
3 xWebSite TheWeb {
4 Name = $Node.Site
5 PhysicalPath = $Node.SitePath
6 Ensure = 'Present'
7 }
8 }
9 }
10
11 AllMyServers -ConfigurationData $MyData
You can see how the data block allows me to use a single configuration script to produce multiple,
unique MOFs for multiple nodes.
All-Nodes Data
You can also specify properties to apply to all nodes:
1 $MyData =
2 @{
3 AllNodes =
4 @(
5 @{ NodeName = '*'
6 SystemRoot = 'c:\windows\system32'
7 },
8
9 @{
10 NodeName = 'NODE1'
11 Role = 'WebServer'
12 Site = 'CustomApp'
13 SitePath = 'c:\inetpub\approot'
14 },
15
16 @{
17 NodeName = 'SERVER2'
18 Role = 'WordPress'
19 },
20
21 @{
22 NodeName = 'MACHINE3'
23 Role = 'WebServer'
24 Site = 'App2'
25 SitePath = 'c:\inetpub\app2root'
26 }
27 );
28 NonNodeData = ''
29 }
Now, all three of my nodes also have a SystemRoot property. This is exactly the same as if I’d
manually listed SystemRoot for each node individually, but it saves some room. The * is a special
character in this case; don’t think of it as a usual wildcard, because other wildcard expressions aren’t
allowed.
I also mentioned the ForEach() method of $AllNodes. Frankly, I don’t find myself using that for
nodes, because the Node{} construct is already an implicit ForEach. However, there are certainly
times when it can be useful. Imagine this configuration data block:
1 $MyData = @{
2 AllNodes = @(
3 @{
4 NodeName = '*'
5 FilesToCopy = @(
6 @{
7 SourcePath = 'C:\SampleConfig.xml'
8 TargetPath = 'C:\SampleCode\SampleConfig.xml'
9 },
10 @{
11 SourcePath = 'C:\SampleConfig2.xml'
12 TargetPath = 'C:\SampleCode\SampleConfig2.xml'
13 }
14 )
15 }
16 )
17 }
1 configuration CopyStuff {
2
3 Node $AllNodes.NodeName {
4
5 $filenumber = 1
6 ForEach ($file in $Node.FilesToCopy) {
7 File "File$filenumber" {
8 Ensure = 'Present'
9 Type = 'File'
10 SourcePath = $file.SourcePath
11 DestinationPath = $file.TargetPath
12 }
13 $filenumber++
14 }
15
16 }
17
18 }
Except I didn’t actually use $AllNodes.ForEach(), right (grin)? That’s just a plain ForEach loop, but
it does show how it can sometimes be beneficial to do that. Honestly… I really do struggle to find
use cases for $AllNodes.ForEach(). I’ve always found it simpler to filter using Where(), and to just
use a normal ForEach loop - as above - when I need to enumerate something from the data block.
1 $MyData = @{
2 AllNodes = @(... data goes here ...)
3 NonNodeData = ''
4 }
5
6 . ./Config1.ps1 # Contains "MyFirstConfig" configuration
7 . ./Config2.ps1 # Contains "MySecondConfig" configuration
8
9 MyFirstConfig -ConfigurationData $MyData
10 MySecondConfig -ConfigurationData $MyData
So I’ve got two configurations, which I’ve put into separate script files for my convenience. I’ve
dot-sourced those into the same script that contains my data block, and then passed the same data
block to each. Presumably, each configuration is filtering the nodes in some fashion, so they’re not
necessarily duplicating each other’s efforts. Obviously, I haven’t completely filled in the $AllNodes
section with actual node properties - I was just trying to lay out a concise framework to illustrate
the approach.
1 configuration UberConfig {
2 Node $AllNodes.Where({ $_.Role -eq 'WebServer' }).NodeName {
3 }
4
5 Node $AllNodes.Where({ $_.Role -eq 'WordPress' }).NodeName {
6 }
7 }
That’s totally legal, and running UberConfig with my $MyData block would produce three MOFs -
one for NODE1, one for SERVER2, and one for MACHINE3.
1 configuration LogicalConfig {
2 Node $AllNodes.NodeName {
3
4 if ($Node.Role -eq "WebServer") {
5 # add settings here
6 }
7
8 if ($Node.Role -eq "WordPress") {
9 # add settings here
10 }
11
12 }
13 }
This gives you a more declarative kind of control over what goes into each MOF, but it can obviously
get harder to read and maintain if you go nuts with it.
Using NonNodeData
There’s a couple of important things to know about this sometimes-misused section. First, you do not
have to call the section “NonNodeData”, and second, you can have multiple NonNodeData sections,
which makes a NonNodeData section a good option for role-specific settings that do not pertain to
all nodes. The snippet below shows two NonNodeData sections, one for domain controller settings
(DCData) and the other for DHCP server settings (DHCPData).
1 $MyData = @{
2 AllNodes = @( ... )
3
4 DCData = @{
5 DomainName = "Test.Pri"
6 DomainDN = "DC=Test,DC=Pri"
7 DCDatabasePath = "C:\NTDS"
8 DCLogPath = "C:\NTDS"
9 SysvolPath = "C:\Sysvol"
10 }
11
12 DHCPData = @{
13 DHCPName = 'DHCP1'
14 DHCPIPStartRange = '192.168.3.200'
15 DHCPIPEndRange = '192.168.3.250'
16 DHCPSubnetMask = '255.255.255.0'
17 DHCPState = 'Active'
18 DHCPAddressFamily = 'IPv4'
19 DHCPLeaseDuration = '00:08:00'
20 DHCPScopeID = '192.168.3.0'
21 DHCPDnsServerIPAddress = '192.168.3.10'
22 DHCPRouter = '192.168.3.1'
23 }
24 }
Keep in mind that you have the special $ConfigurationData variable, and $ConfigurationData.<SectionName> will access these “global” sections. Previously, we’d just set NonNodeData to an empty string (''), but in fact you can use one or more hash tables instead. So now, the configuration can use $ConfigurationData.DCData.DomainName to access the domain name property, or $ConfigurationData.DHCPData.DHCPDnsServerIPAddress to access the DHCP server’s DNS IP address.
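Here’s a minimal sketch of that in use; it assumes the elided AllNodes section above includes at least one node whose Role property is 'DC':
1 configuration InfraServers {
2 Node $AllNodes.Where({ $_.Role -eq 'DC' }).NodeName {
3 File NtdsFolder {
4 Ensure = 'Present'
5 Type = 'Directory'
6 DestinationPath = $ConfigurationData.DCData.DCDatabasePath
7 }
8 }
9 }
10
11 InfraServers -ConfigurationData $MyData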
Poor Man’s Configuration
Modularization
Unless you’re in the very smallest of environments, or only using DSC for a very small subset
of nodes, you’re going to end up with more than one configuration script. Most of the time, you’ll
probably use some gut-instinct measurement of the configuration’s difference, or “delta,” from other
configurations to decide when to make a new one.
For example, suppose you have two kinds of application web server. For each kind, you have a half-
dozen actual machines. The two are very similar: they have the same firewall settings, basic IIS
install, and so on. The main difference is perhaps some application pool settings, and the source files
they use. So let’s say they’re about 90% the same, with a 10% delta. You might decide, “hey, 10% ain’t
so much, I’ll just make one configuration, and use some Configuration Data filtering logic stuff to
handle that 10%.”
The percentage of delta that falls within your “comfort level” for having one configuration, versus
two different ones will be very much a personal thing. But at some point, the “delta” between the
two configurations will be large enough that you’ll want to make two separate configuration scripts
- while still modularizing, as much as possible, the common elements of them.
This chapter will present the easiest approach, at least conceptually. The next two chapters, which
cover composite and partial configurations, will present different approaches. All of these have their
own pros and cons, and you’ll simply have to decide which works best for your organization.
Dot Sourcing
Suppose you have a configuration that’s something like the following. Notice that I’m not using real
DSC resources here - this isn’t meant to be an exhaustive syntactic example, but rather a structural
one.
1 configuration MyConfig {
2
3 Node $AllNodes.Where({ $_.ServerType -eq 'AppServer' }) {
4
5 WindowsFeature IIS {
6 Name = 'Web-Server'
7 Ensure = 'Present'
8 }
9
10 SourceFiles AppFiles {
11 Source = '\\master\distribution\appserver'
12 Destination = 'c:\inetpub\approot'
13 SyncMode = 'checksum'
14 }
15
16 }
17
18 Node $AllNodes.Where({ $_.ServerType -eq 'SalesServer' }) {
19
20 WindowsFeature IIS {
21 Name = 'Web-Server'
22 Ensure = 'Present'
23 }
24
25 SourceFiles AppFiles {
26 Source = '\\master\distribution\salesserver'
27 Destination = 'c:\inetpub\salesroot'
28 SyncMode = 'checksum'
29 }
30
31 }
32
33 }
So I’m configuring two kinds of nodes. They both have a common element - the IIS bit - and an
element that differs. Because this is such a simple example, the delta between the two is around 50%
- meaning half the configuration is different. So I could modularize this a bit.
I’ll start by creating a file called ./IISStuff.ps1. I’m going to put it in the same folder as the actual
configuration script, and it will look like this:
1 WindowsFeature IIS {
2 Name = 'Web-Server'
3 Ensure = 'Present'
4 }
That’s not a valid configuration script! The ISE won’t even color-code it or error-check it correctly.
It’s just a fragment. Now, in my main file, which I’ll call .\AppServers.ps1, I’ll do this:
1 configuration MyConfig {
2
3 Node $AllNodes.Where({ $_.ServerType -eq 'AppServer' }) {
4
5 . .\IISStuff.ps1
6
7 SourceFiles AppFiles {
8 Source = '\\master\distribution\appserver'
9 Destination = 'c:\inetpub\approot'
10 SyncMode = 'checksum'
11 }
12
13 }
14
15 Node $AllNodes.Where({ $_.ServerType -eq 'SalesServer' }) {
16
17 . .\IISStuff.ps1
18
19 SourceFiles AppFiles {
20 Source = '\\master\distribution\salesserver'
21 Destination = 'c:\inetpub\salesroot'
22 SyncMode = 'checksum'
23 }
24
25 }
26
27 }
I’ve taken a first step at “modularizing” the common code. I’ve moved it into an external file, which
is then dot-sourced into the configurations. Now from here, you could take the approach much
further. You could have an entirely different configuration script file for each node type, each using
IISStuff.ps1 as needed. You could obviously dot-source multiple files containing commonly used
fragments.
Approach Analysis
I’m presenting this approach not because I see it used very often or because I think it’s a great idea.
I’m presenting it mainly because it’s conceptually easy for most PowerShell scripters to understand.
But we need to do some analysis of its pros and cons.
• PRO: Unlike a Composite configuration, dot-sourcing doesn’t require you to deploy anything
to your nodes.
• PRO: Unlike a Partial configuration, you’ll be producing a single MOF-per-node that’s complete
and ready to deploy; you’ll reduce errors like duplicate keys.
• CON: You can’t ever look at the entire configuration “in one place, in front of your face,”
because it’s spread across multiple files. That’s also true with Partial configurations and, to a
point, with Composite configurations.
• CON: Dot-sourcing just feels a little… messy. It’s not as structured as approaches like Composite
configurations or, to a point, Partial configurations.
• CON: You can’t test the individual fragments in a standalone fashion, so you’re not truly
modularizing, here. That’s not the case with either Composite or Partial configurations, which
can be tested independently. Sort of. You certainly could work up “test configurations” that
dot-sourced and tested individual fragments, but it can get messy.
But with this dot-sourcing “modularization” in mind, we can start exploring the more “official” ways
of modularizing DSC configurations.
Composite Configurations
Composite configurations are one way to modularize commonly used groups of configuration
settings. You start by building one or more “sub-configurations.” That isn’t an official term, but
rather one I made up and find useful. A sub-configuration contains that group of commonly used
configuration settings. It can be parameterized, which means it can slightly alter its output based
on the input values you feed it. You then save that sub-configuration in a special way that makes it
look like a DSC resource.
Next, you write the actual composite configuration. This is just a normal configuration that uses
both regular DSC resources, and the looks-like-a-resource sub-configurations.
The official term for sub-resources is a composite resource, because it can take multiple DSC
resources and merge, or “composite”, them together into a single looks-like-a-resource. Technically,
I don’t think Microsoft is using the term “composite configurations” anymore, and “composite
resource” is more accurate. But “composite configuration” has been thrown around since DSC was
invented, so I’m gonna run with it.
I’m going to run through an example that uses code from this link, and add my own explanations
and ideas as I go.
https://msdn.microsoft.com/en-us/powershell/dsc/authoringresourcecomposite
1 Configuration xVirtualMachine
2 {
3 param
4 (
5 # Name of VMs
6 [Parameter(Mandatory)]
7 [ValidateNotNullOrEmpty()]
8 [String[]] $VMName,
9
10 # Name of Switch to create
11 [Parameter(Mandatory)]
12 [ValidateNotNullOrEmpty()]
13 [String] $SwitchName,
14
15 # Type of Switch to create
16 [Parameter(Mandatory)]
17 [ValidateNotNullOrEmpty()]
18 [String] $SwitchType,
19
20 # Source Path for VHD
21 [Parameter(Mandatory)]
22 [ValidateNotNullOrEmpty()]
23 [String] $VHDParentPath,
24
25 # Destination path for diff VHD
26 [Parameter(Mandatory)]
27 [ValidateNotNullOrEmpty()]
28 [String] $VHDPath,
29
30 # Startup Memory for VM
31 [Parameter(Mandatory)]
32 [ValidateNotNullOrEmpty()]
33 [String] $VMStartupMemory,
34
35 # State of the VM
36 [Parameter(Mandatory)]
37 [ValidateNotNullOrEmpty()]
38 [String] $VMState
39 )
40
41 # Import the module that defines custom resources
42 Import-DscResource -Module xComputerManagement,xHyper-V
43
44 # Install the Hyper-V role
45 WindowsFeature HyperV
46 {
47 Ensure = "Present"
48 Name = "Hyper-V"
49 }
50
51 # Create the virtual switch
52 xVMSwitch $SwitchName
53 {
54 Ensure = "Present"
55 Name = $SwitchName
56 Type = $SwitchType
57 DependsOn = "[WindowsFeature]HyperV"
58 }
59
60 # Check for Parent VHD file
61 File ParentVHDFile
62 {
63 Ensure = "Present"
64 DestinationPath = $VHDParentPath
65 Type = "File"
66 DependsOn = "[WindowsFeature]HyperV"
67 }
68
69 # Check the destination VHD folder
70 File VHDFolder
71 {
72 Ensure = "Present"
73 DestinationPath = $VHDPath
74 Type = "Directory"
75 DependsOn = "[File]ParentVHDFile"
76 }
77
78 # Create VM specific diff VHD
79 foreach ($Name in $VMName)
80 {
81 xVHD "VHD$Name"
82 {
83 Ensure = "Present"
84 Name = $Name
85 Path = $VHDPath
86 ParentPath = $VHDParentPath
87 DependsOn = @("[WindowsFeature]HyperV",
88 "[File]VHDFolder")
89 }
90 }
91
92 # Create VM using the above VHD
93 foreach($Name in $VMName)
94 {
95 xVMHyperV "VMachine$Name"
96 {
97 Ensure = "Present"
98 Name = $Name
Notice the huge Param() block right at the beginning. It wants the name of a virtual machine (VM),
the name of a virtual switch, the switch type, a couple of paths, startup memory size, and the state
of a VM. Basically, all the core things you need to define a new virtual machine. You’ll notice those
parameters - $SwitchName, $Name, and so on - being used throughout the rest of the configuration.
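To bridge the gap to the listings that follow: the composite resource has to be saved inside a module, using a folder layout something like this sketch (the MyVMStuff and MyVMMaker names simply match the listings below; they’re otherwise arbitrary):
1 # $env:ProgramFiles\WindowsPowerShell\Modules\
2 # MyVMStuff\ # the module that "owns" the composite resource
3 # MyVMStuff.psd1 # manifest created with New-ModuleManifest below
4 # DSCResources\
5 # MyVMMaker\
6 # MyVMMaker.psd1 # manifest whose RootModule line appears below
7 # MyVMMaker.schema.psm1 # the configuration shown above, saved under this name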
1 RootModule = 'MyVMMaker.schema.psm1'
1 New-ModuleManifest `
2 -Path "$env:ProgramFiles\WindowsPowerShell\Modules\MyVMStuff\MyVMStuff.psd1"
1 configuration RenameVM
2 {
3
4 Import-DscResource -Module MyVMStuff
5 Node SERVER1
6 {
7 MyVMMaker NewVM
8 {
9 VMName = "Test"
10 SwitchName = "Internal"
11 SwitchType = "Internal"
12 VhdParentPath = "C:\Demo\VHD\RTM.vhd"
13 VHDPath = "C:\Demo\VHD"
14 VMStartupMemory = 1024MB
15 VMState = "Running"
16 }
17 }
18
19 }
You can see where I’ve explicitly imported the DSC resource module MyVMStuff, and referred to
the MyVMMaker resource, passing in all the settings defined in the composite resource’s Param()
block.
Both the ZIP file and the checksum file would go on the pull server, in its designated Resources path.
If you plan to manually deploy the resource, you don’t need to do any of this - you can just get the
correct folder hierarchy, and the files, in place in whatever fashion you desire.
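For the pull server route, the packaging is the same dance we did for xTimeZone earlier - a sketch, run on the pull server itself, with the version number assumed:
1 Compress-Archive -Path "$env:ProgramFiles\WindowsPowerShell\Modules\MyVMStuff\*" `
2 -DestinationPath "$env:ProgramFiles\WindowsPowerShell\DscService\Modules\MyVMStuff_1.0.zip"
3 New-DscChecksum -Path "$env:ProgramFiles\WindowsPowerShell\DscService\Modules\MyVMStuff_1.0.zip"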
Approach Analysis
As I wrote in the previous chapter, every modularization approach has pros and cons.
• CON: Like any other DSC resource you use that doesn’t ship with the operating system, you
have to deploy your composite resource. That can mean ZIPping it up, creating a checksum for
it, and deploying it to a pull server. Or, it might mean manually deploying it to nodes, if you’re
not using a pull server.
• PRO: The “contents” of the composite resource are distinct from the configuration that calls
it. So things like duplicate keys aren’t a problem in the way that they can be for a Partial
configuration.
• PRO: It’s a little easier to see your entire configuration “in one place,” because while the
composite resource itself is separate, your “main” configuration shows what the composite
resource is doing. That is, in my example, you can see that it’s creating a VM, what the VM
name is, and so on. Now, it’s certainly possible to have a composite resource with a bunch
of hardcoded information that won’t be “seen” in the main configuration - but that’s your
decision to take that approach.
• PRO: Composite configurations are very structured.
• CON: There are some limitations on using DependsOn within composite resources. From the
main configuration, any setting can have a DependsOn for any other setting in the main
configuration. But you can’t take a DependsOn to an individual setting within the composite
resource. If you’re designing composite resources well, you can minimize this as a problem.
Simply keep each composite resource as “atomic” as possible, meaning whatever it’s doing
should be tightly coupled and not designed to be separated.
• PRO: It’s relatively easy to test a composite resource, by simply building a “main” configuration
that uses the composite resource and nothing else. You can also just “run” the composite
resource standalone, because, after all, it’s just a configuration script. Supply whatever
parameters it wants, produce a MOF, and deploy that MOF to a test machine to validate it.
Design Considerations
As I noted above, the key to designing composite resources is to have them do one thing, and
one thing only. Whatever’s happening inside any given resource should all be intended to happen
together, or not at all. For example, Microsoft’s sample - which I used - creates a VM. You’d never
create a VM without assigning memory, setting up a virtual disk, and so on, and so everything in
that resource goes together.
You might have a set of security configurations - firewall stuff, maybe some anti-malware packages,
and Windows Update settings - that you always want applied as a collection to your nodes. That’s
fine - that might make a good resource. What you want to avoid, however, are monolithic composite
resources that perform several unrelated tasks, like making a machine a domain controller and
installing DHCP and installing something else. Those might all be common elements of your
organization’s “infrastructure servers,” and each DC and DHCP server might be configured exactly
the same, but here’s how I’d handle it: one composite resource for the domain controller pieces, another for DHCP, and another for whatever else - with a “main” configuration per node type that calls just the composites it needs. The idea is to keep each composite resource as tightly scoped as possible, and let the “main” configuration bring them together into a complete node config.
Partial Configurations
Honestly, we’ve already covered almost everything you need to know about partial configurations.
In the chapter on Configuring the LCM, you saw how to tell the LCM to pull partials from a pull
server, or to accept them as a push, or a mix of the two. You saw how to set up dependencies to
control the order of partial configuration application. There’s not a whole lot more to cover, really.
The actual creation of a partial MOF is the same as creating any ordinary configuration; it’s only a
“partial” because a node is being told to use more than one of them. So, in this chapter, we’ll kind
of summarize the capabilities and cover the last couple of details.
The code samples in this chapter aren’t meant to be real-world, runnable code. Partials can get a
little complex, and so we’ve taken an oversimplified approach here so that we can get the point
across.
• Partials allow you to modularize chunks of your configuration, making it easier to have
multiple contributors.
• Partials get combined and evaluated by the LCM, which means it’s very difficult to detect
potential conflicts ahead of time. This can make troubleshooting harder.
• Partials only work with v5 nodes, and if a Pull server is involved, it must also be v5.
1 ...
2 PartialConfiguration Bob {
3 RefreshMode = "Pull"
4 ConfigurationSource = @("[ConfigurationRepositoryWeb]MyPullServer")
5 Description = "Something"
6 ExclusiveResources = @("Script","WindowsFeature")
7 }
This would allow only the Bob partial configuration to use the Script or WindowsFeature resources.
It’s important not to treat this as a security thing. It’s more of a “help our team not do the wrong
thing, but don’t stop us.” Keep in mind that this “rejecting” of “unallowed” resources happens at
the LCM, and the LCM’s configuration can in fact be changed. Also keep in mind that, when
you’ve got multiple people working on partial configurations and not talking to each other, you’re
basically just asking for trouble. Some kind of hard-to-troubleshoot conflict is eventually going to
enter the situation. Partials shouldn’t be used to rectify DSC’s lack of native tooling, and this whole
“authoritative resources” thing is essentially trying to do just that.
Also note that class-based resources can’t be used as ExclusiveResources before WMF5.1.
Remember, too, that you can mix refresh modes: some partial configurations (being pushed) might be
changed only rarely, while others (being pulled) may be updated more often. There’s a good look at
how to configure the LCM this way on PowerShellMagazine.com⁶.
One last detail worth covering is how partial configuration MOFs get named on the pull server.
Consider an LCM setting like this:
1 PartialConfiguration Bob {
2 RefreshMode = "Pull"
3 ConfigurationSource = @("[ConfigurationRepositoryWeb]MyPullServer")
4 Description = "Something"
5 ExclusiveResources = @("Script","WindowsFeature")
6 }
In this case, the LCM will ask the pull server for a configuration named “Bob.” The pull server - again,
in WMF 5.0 - would look on disk for “Bob.mof” and “Bob.mof.checksum.” The problem is that the pull
server in Azure Automation stored files differently. It would use ConfigurationName.NodeName.mof
instead. Chucking that nodename in there meant that the LCM couldn’t request partial configurations
from Azure Automation! So in WMF 5.1, the protocol was changed slightly, and either file
naming convention is acceptable. Microsoft provides this example of an LCM configuration which
uses both a local pull server and Azure Automation:
1 [DscLocalConfigurationManager()]
2 Configuration RegistrationMetaConfig
3 {
4 Settings
5 {
6 RefreshFrequencyMins = 30
7 RefreshMode = "PULL"
8 }
9
10 ConfigurationRepositoryWeb web
11 {
12 ServerURL = $endPoint
13 RegistrationKey = $registrationKey
14 ConfigurationNames = $configurationName
15 }
16
17 # Partial configuration managed by Azure Automation service.
18 PartialConfiguration PartialConfigurationManagedByAzureAutomation
⁶http://www.powershellmagazine.com/2015/02/20/mixed-refresh-mode-support-for-dsc-partial-configurations/
19 {
20 ConfigurationSource = "[ConfigurationRepositoryWeb]Web"
21 }
22
23 # This partial configuration is managed locally.
24 PartialConfiguration OnPremisesConfig
25 {
26 RefreshMode = "PUSH"
27 ExclusiveResources = @("Script")
28 }
29
30 }
Deploying MOFs to a Pull Server
Once you’ve created a MOF, you can obviously deploy it in Push mode by running Start-DscConfiguration
- we’ve covered that. But you may also wish to deploy it to either an HTTP(S) or an SMB Pull
server. That’s straightforward: when you configured your pull server, you specified the folder in
which configurations would live (see, “Setting Up a Pull Server” in this book). Simply copy the
configuration MOF files to that location. You’ll also need to create a checksum file for each MOF;
the New-DscChecksum command can do that for you.
MOF naming depends on how you’ve configured your LCMs. If you’re using ConfigurationIds
(which is the only option in v4), then each MOF needs to have the GUID - that is, the
Configuration ID - that you’ve assigned the node (or nodes) that will pull the configuration.
Checksum files have the same filename, with the addition of a “.mof.checksum” filename extension.
However, if you’re using Configuration Names in v5 or later, then MOF filenames take the
configurations’ names - like “Bob.mof,” with “Bob.mof.checksum” being the checksum.
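Putting that together, deploying a Configuration Names-style MOF to an HTTP(S) pull server might look like this (a sketch; the destination path assumes the default DSC service location on the pull server):

# Copy the MOF into the pull server's configuration folder
$dest = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"
Copy-Item -Path .\Bob.mof -Destination $dest

# Generate Bob.mof.checksum alongside it
New-DscChecksum -Path (Join-Path $dest 'Bob.mof') -Force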
Part 5: Using and Authoring
Resources
To this point, we’ve tried to stick with the simple, built-in resources that Microsoft provides.
Eventually, you’ll need to figure out what all of those are and how to use them, so we’ll cover
that next. And eventually, you’ll probably run into some situation where the resource you need
doesn’t exist, so we’ll also cover how to create your own.
Finding and Using Resources
At this point in the book, I feel that we’ve gotten all the basics out of the way. You should know
who the major players are in the DSC process, and you should have all the core terminology in your
head. So now we can get serious.
I think the first thing people tend to ask about DSC is, “What can I do with it?” Legit question. And
although I wanted to make sure you knew about configurations, MOFs, the LCM, and all that, for
me, “What can I do with it?” is the right starting point for DSC. Figuring out what DSC resources
are available, and figuring out how to use them, is pretty much where DSC really begins.
One major upside to DSC is that so much is available as open source projects. Nearly everything
Microsoft has created for DSC is open source. Tons of open source projects from community groups
and enthusiasts are out there. And many of those projects have dozens (or more!) of contributors, all
improving and debugging everything, all the time. Things move and improve fast - which is great.
But the corresponding downside is that documentation can be slim or nonexistent. So figuring out
how to use what’s out there - that’s critical.
⁷http://powershellgallery.com
I’ve truncated those results, by the way - you can run the command yourself to see them all.
Note - You may still see online references to the “DSC Resource Kit.” Some of these references may
include downloads that existed before Microsoft moved to PowerShell Gallery; you should download
resources from the Gallery and not from DSC Resource Kit download pages to ensure you’re getting
the latest and greatest resources.
Many of the DSC resource authors in the community publish their modules to PowerShell Gallery.
If you happen to run across their source code repositories - often GitHub - you should use what’s
in Gallery rather than downloading the code directly. Oftentimes, what’s in Gallery represents a
“stable” release, while what’s in the source code repo may be a “development branch.”
And aside from resources authored by Microsoft (the full output of Find-Module can tell you that),
you should treat any code in Gallery as untrusted. That is, you should review it and be comfortable
with it before deploying it in your production environment.
Not everyone publishes to the Gallery, though, and so Google remains one of your best friends when
it comes to finding new DSC resources.
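When a module is in the Gallery, though, getting it onto your authoring machine is a one-liner. As a sketch (the module and resource names here are just examples):

# Discover DSC resources published to the PowerShell Gallery
Find-DscResource -Name xTimeZone | Select-Object Name, ModuleName, Version

# Install the module that contains the resource
Install-Module -Name xTimeZone -Scope AllUsers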
For right now, we’re just getting the module onto your authoring machine. We’ll worry about deploying
it to other nodes later.
If the module you need isn’t in a Gallery that Install-Module can get to, then you’re likely just
downloading a ZIP file. That’s fine. Unzip it to your /Program Files/WindowsPowerShell/Modules
folder. Now, be a little careful, here. Some folks will ZIP up that entire path, meaning you’ll
end up with /Program Files/WindowsPowerShell/Modules/Program Files/WindowsPowerShell/-
Modules/WhateverTheModuleNameIs. That won’t work. Supposing the module is named Fred (for
some reason), you ultimately want /Program Files/WindowsPowerShell/Modules/Fred. Inside the
/Fred folder, all of the module’s files should exist, and that may include one or more subfolders, too.
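Once a module is in place - whether it came from Install-Module or a ZIP file - run Get-DscResource to see what it contains. The output looks roughly like this (a truncated sketch):

PS C:\> Get-DscResource

ImplementedAs   Name          ModuleName                    Version   Properties
-------------   ----          ----------                    -------   ----------
PowerShell      Archive       PSDesiredStateConfiguration   1.1       {Destination, Path, Checksum,...
PowerShell      Environment   PSDesiredStateConfiguration   1.1       {Name, DependsOn, Ensure, Pat...
PowerShell      Group         PSDesiredStateConfiguration   1.1       {GroupName, Credential, Depen...
...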
As you can see, I have several resources - Archive, Environment, and so on - in a resource module
called PSDesiredStateConfiguration, and its version is 1.1. After running Get-DscResource, you
should see any resource modules you’ve installed; if you don’t, get that sorted out before you go
any further.
The main problem I’ve seen is simply having the resource module in the wrong location. Have a
look at the PSModulePath environment variable on your computer - it should include the /Program
Files/WindowsPowerShell/Modules folder as one of its locations. If it doesn’t, fix it (which needs to
be done outside PowerShell, like from the System Properties Control Panel applet). Or, locate your
modules in one of the locations that PSModulePath does contain (ideally not the System32 location;
that’s meant for Microsoft only).
1 ResourceName UniqueName {
2 Property = Value
3 Property = Value
4 }
The ResourceName is easy to figure out, because it’s shown right in the output of Get-DscResource.
But what properties does the resource support? What values do they accept? Hopefully there’s
some documentation out there that can clue you in! Microsoft has gotten better about publishing
documentation for a lot of their resources, although as I write this they’re still working on adding
more docs. But don’t rely on the Get-Help command in PowerShell - at this time, it’s not wired up
to provide help on DSC resource modules, unfortunately. Well, not directly.
Have a look at C:/Windows/System32/WindowsPowerShell/v1.0/Modules/PSDesiredStateConfiguration/en-
US on your computer. You should see a whole bunch of help files. Some are for commands like
Disable-DscDebug, and you can use the ordinary Help command to view that documentation. Some
of the rest of it is a little harder, because you can’t just run “Help PSDesiredStateConfiguration” and
get that documentation. What you can run, however, is this:
1 PS C:\Windows\System32\WindowsPowerShell\v1.0\Modules\PSDesiredStateConfiguration\en\
2 -US> help -path .\PSDesiredStateConfiguration.psm1-help.xml
That is, by using the -Path parameter and providing the path of the XML help file, you can read
the documentation right in the shell. So that’s definitely a starting point. Now, let’s use that to get
resource-specific. Keeping with the same PSDesiredStateConfiguration module, if I look at its root
directory (that is, a level up from en-US), I see - amongst other entries - a module manifest, a script
module, and a DSCResources subfolder. The resource module can contain one or more resources, and
each actual resource lives in its own folder under DSCResources, with folder names like
MSFT_GroupResource and MSFT_ServiceResource.
Those are not the actual resource names that you’d use inside a configuration script, but they’re
obviously similar. Here’s what’s in MSFT_GroupResource:
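Roughly speaking (a sketch; the exact contents vary a bit by PowerShell version), the folder contains:

    Directory: ...\PSDesiredStateConfiguration\DSCResources\MSFT_GroupResource

Mode    Name
----    ----
d----   en-US
-a---   MSFT_GroupResource.psm1
-a---   MSFT_GroupResource.schema.mof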
The .psm1 file is the actual code that makes the resource work, while the .schema.mof file tells DSC
how to use the resource. The en-US folder…
Er, doesn’t contain an XML help file. It looks like it contains English versions of whatever text strings
the resource uses. In theory, other folders could provide translations. So, no help there. Only… there
is.
1 PS C:\Windows\System32\WindowsPowerShell\v1.0\Modules\PSDesiredStateConfiguration\DS\
2 CResources\MSFT_GroupResource> Get-Content .\MSFT_GroupResource.schema.mof
3
4 [ClassVersion("1.0.0"), FriendlyName("Group")]
5 class MSFT_GroupResource : OMI_BaseResource
6 {
7 [Key] string GroupName;
8 [write,ValueMap{"Present", "Absent"},Values{"Present", "Absent"}] string Ensure;
9 [write] string Description;
10 [write] string Members[];
11 [write] string MembersToInclude[];
12 [write] string MembersToExclude[];
13 [write,EmbeddedInstance("MSFT_Credential")] string Credential;
14 };
That’s what’s in the .schema.mof file. As I wrote, this tells DSC how to use the resource, and
that includes listing the properties that the resource requires. The resource’s name - its “friendly
name,” which you’d use in a configuration script - is “Group.” It accepts several properties, including
GroupName, Ensure, Description, Members, and so on. The Ensure property accepts “Present” or
“Absent” as legal values. The GroupName property is the key, meaning each instance of this resource
that you use within a configuration script must have a unique GroupName.
WAIT! Does that mean you can only reference a given user group one time per configuration? Yup.
That restriction prevents two different settings within the configuration from “fighting” with each
other. This is the thing I mentioned being troublesome for partial configurations, because you could
easily send two partials that each attempted to configure the same group. Once the target node LCM
merged those partials, it’d realize there was a duplicate key, and fail the entire configuration.
But actually… there’s no need to dig into the schema MOF. I mean, feel free, and it’s always good to
know what’s going on under the hood, but PowerShell will parse this for you a bit.
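For example, Get-DscResource has a -Syntax switch that dumps the same information in configuration-script form. A sketch of the output:

PS C:\> Get-DscResource -Name Group -Syntax

Group [String] #ResourceName
{
    GroupName = [string]
    [Credential = [PSCredential]]
    [DependsOn = [string[]]]
    [Description = [string]]
    [Ensure = [string]{ Absent | Present }]
    [Members = [string[]]]
    [MembersToExclude = [string[]]]
    [MembersToInclude = [string[]]]
    [PsDscRunAsCredential = [PSCredential]]
}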
See? Same info, only here you also see the “magic” properties that PowerShell adds, like
PsDscRunAsCredential. The Group resource predates the existence of that, which is why the resource
provides its own Credential property.
OK - now you know what properties exist, and maybe some of the values they take. But what about
the rest? What about Members? What format should that be in?
Well… er… the thing is, Microsoft is really developing resources at a fast clip. They’re very “agile”
these days, which translates to “lots of code, not so much documentation.” In Microsoft’s specific
case, they’re trying hard, though. Browsing their GitHub repo at https://github.com/PowerShell/PowerShell-
Docs/blob/staging/dsc/groupResource.md, I found some documentation and examples. And so… you
kinda hope that, for whatever resource you’re using, whoever wrote it has done something similar.
Look, let’s just be honest about something for a minute. We’re on the bleeding edge of computing
with this DSC stuff. Things are in constant flux, and in a continuous state of improvement. If you’re
going to play in this space, you’re going to have to accept that it’s a bit more like being an open
source coder, and a lot less like being a Wizard-clicking administrator. Expect to hunt for docs in
GitHub. Expect to “figure things out” on your own, or by asking for help on community sites. If
you’re cool with that, you’ll have a lot of fun with DSC. If you’re not so cool with that… well… it’s
kinda gonna suck a little. We’re in the hack-y side of Microsoft now, the side that’s embraced open
source and agile development. This isn’t like Windows 95, where everything you needed came in
a box or floated down through Windows Update. Hopefully, you’re excited to be in the brave new
world of Microsoft.
Custom Resources
The time will come - probably quickly - when you need to do something with DSC that isn’t
supported by a resource that Microsoft or someone else has conveniently provided you. That’s when
you’ll need to write a custom resource.
I’ve always used snippets, for example, to start my function-based resources, and so I didn’t care much
about being “forced” to use the right naming conventions. My DSC resources (as you’ll see) are
extremely simplistic from a code perspective, and so classes just didn’t seem to offer an advantage
to me. But, whatever. I’m easygoing, and if you want to use classes, have fun.
But there’s a downside in the current iteration of WMF v5, and it’s a downside specifically for
classes. You see, for a function-based resource, there’s a special file naming convention when you
ZIP up the resource for distribution to nodes. That convention lets the ZIP filename include the
module’s version number. Due to that convention, you can actually store multiple versions of the
same module on the same pull server. Configuration scripts can specify which version of a module
they want, and the nodes will grab the right version from the pull server. It’s really nifty - it means
you can deploy a revised module to a small number of nodes for testing, and then gradually roll
it out simply by changing the configuration MOF that your nodes are pulling. Classes, however,
don’t support that versioning in 5.0. It wasn’t so much an oversight on Microsoft’s part as it was the
company’s desire to “figure out versioning” in a much broader sense. The good news is that proper
versioning for side-by-side deployment of classes is supported in 5.1. It’s a good thing, too, because
doing any kind of continuous integration with an automated build pipeline would be impossible
without versioning. Now with this change in 5.1, I can roll out a new version on a schedule, like I
can with function-based resources.
1 Function Test-FileContent {
2 [CmdletBinding()]
3 Param(
4 [Parameter(Mandatory=$True)]
5 [string]$Path,
6
7 [Parameter(Mandatory=$True)]
8 [string]$DesiredContent
9 )
10
11 if (Test-Path $Path) {
12 # file exists; get it as a single string
13 $ExistingContent = Get-Content -Path $Path | Out-String
14
15 # compare
16 if ($ExistingContent -ceq $DesiredContent) {
17 Write $true
18 } else {
19 Write $false
20 }
21
22 } else {
23 # file does not exist
24 Write $false
25 }
26
27 }
28
29 Function Set-FileContent {
30 [CmdletBinding()]
31 Param(
32 [Parameter(Mandatory=$True)]
33 [string]$Path,
34
35 [Parameter(Mandatory=$True)]
36 [string]$DesiredContent
37 )
38
39 Set-Content -Path $Path -Value $DesiredContent
40
41 }
So, as you can see, I have a “Test” function that’s comparing a file’s contents to some desired set
of content (yes, I realize the native File resource can do this, but the idea here is to make a simple
example). I also have a “Set” function - and no “Get” function. I’m doing this deliberately to make
a point, which will be explored in more detail in the “best practices” chapter coming up. Briefly,
though, here’s my philosophy: I want my actual resources to contain as little code as possible, because
resources are harder to test and troubleshoot on their own. Ideally, I think of resources as just running
one or two commands, and maybe applying a tiny bit of logic. Anything more complex than that,
I separate into a “functional” module, which is just a normal PowerShell script module that I can
code, test, and troubleshoot on its own.
In this case, I didn’t include a Get-FileContent command because, frankly, it’d just be a thin wrapper
around Get-Content. I know Get-Content already works, so I don’t need to put a wrapper around it.
In fact, my Set-FileContent function is illustrating that point, because it’s nothing but a thin, thin,
thin wrapper around Set-Content. Really, there’s no need for my Set-FileContent - it isn’t doing
anything more than running a command that’s already known to work. I included it here so that
you could see how minimal it is.
I would perhaps put this script module in my usual Modules folder. That way, when the LCM needs
it, PowerShell will be able to find it. However, it’s also possible to deploy the script module along
with the resource module. I’ll discuss that later in this chapter.
1 function Get-TargetResource {
2 [CmdletBinding()]
3 [OutputType([System.Collections.Hashtable])]
4 Param(
5 [Parameter(Mandatory=$True)]
6 [string]$Path
7 )
8
9 If (Test-Path $Path) {
10 $content = Get-Content -Path $path | Out-String
11 } else {
12 $content = $null
13 }
14
15 $output = @{
16 Path = $Path
17 Content = $content
18 }
19 Write $output
20 }
21
22 function Test-TargetResource {
23 [CmdletBinding()]
24 [OutputType([System.Boolean])]
25 Param(
26 [Parameter(Mandatory=$True)]
27 [string]$Path,
28
29 [Parameter(Mandatory=$True)]
30 [string]$DesiredContent
31 )
32
33         Write (Test-FileContent -Path $Path -DesiredContent $DesiredContent)
34
35 }
36
37 function Set-TargetResource {
38
39 [CmdletBinding()]
40 Param(
41 [Parameter(Mandatory=$True)]
42 [string]$Path,
43
44 [Parameter(Mandatory=$True)]
45 [string]$DesiredContent
46 )
47
48         Set-FileContent -Path $Path -DesiredContent $DesiredContent
49
50 }
Notice that my script module includes three functions, which have specific names I must use: Test-,
Get-, and Set-TargetResource. The LCM only knows to call these names, and you must use those
names, although the functions can appear in any order within your module.
In the Get- function, notice that I’m returning a hashtable that contains whatever configuration
currently exists. You provide the path, and I’ll get you the file content. This is used when you run
Get-DscConfiguration.
Next is the Test- function, which uses the function I created earlier and returns either True or False.
This is used when you run either Start-DscConfiguration or Test-DscConfiguration, and of course
when running a consistency check. If the LCM is configured to autocorrect, then Test- returning a
False will cause the LCM to immediately run Set-.
Set- illustrated how pointless my Set-FileContent function is, since it isn’t any more complex than
just running the underlying Set-Content directly.
There’s an important procedural point, here: if the LCM runs Test, and if Test returns False, then
the LCM may (if configured) run Set. The LCM assumes the Set to have been successful if the Set
function does not throw an error. That is, the LCM doesn’t run Set, and then run Test again to verify.
It simply runs Set. So it’s important to write a good Set function, one that can return errors if it fails.
In this case, I’m not trapping any errors, so if one occurs - like a permission denied, or “Path does
not exist,” or something, then that error will be thrown, and the LCM will “see” it and know that the
Set didn’t work.
So this is now a minimally functional DSC resource.
• /Program Files/WindowsPowerShell/Modules/MyDSCStuff/DSCResources/AppUser/AppUser.psd1
and AppUser.psm1 are the function-based resource AppUser. You also need AppUser.schema.mof,
which we’ll come to in a moment.
The .psd1 files are always a good idea to include, because they can speed up certain background
operations PowerShell has to perform, especially when the .psd1 file explicitly exports the three *-TargetResource functions. This isn’t as huge a deal as it is for normal PowerShell modules,
but having a manifest - the .psd1 file - is a good habit to be in.
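A minimal resource manifest might look something like this (a sketch; generate your own GUID with New-Guid rather than using the placeholder shown):

@{
    RootModule        = 'AppUser.psm1'
    ModuleVersion     = '1.0.0'
    GUID              = '00000000-0000-0000-0000-000000000000'   # placeholder
    FunctionsToExport = @('Get-TargetResource','Test-TargetResource','Set-TargetResource')
}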
The .schema.mof file is essential for each resource. And… um, you kind of have to build it by hand.
Normally. Fortunately, Microsoft has a Resource Designer that can actually make the schema MOF
for you, in addition to setting up all the right files and folders for your resource. Check out https:
//msdn.microsoft.com/en-us/powershell/dsc/authoringresourcemofdesigner. It’s a set of commands
that can build out the necessary files and folders, as well as the right function names, all for you.
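For instance, scaffolding the FileContent resource from the earlier examples might look like this (a sketch using the xDSCResourceDesigner module from the Gallery; all names and paths are just examples):

Install-Module -Name xDSCResourceDesigner

$path    = New-xDscResourceProperty -Name Path -Type String -Attribute Key
$content = New-xDscResourceProperty -Name DesiredContent -Type String -Attribute Required

# Creates the module folder, DSCResources subfolder, schema MOF, and stub functions
New-xDscResource -Name MSFT_FileContent -FriendlyName FileContent `
                 -Property $path, $content `
                 -Path 'C:\Scratch' -ModuleName MyDSCStuff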
Now… I mentioned an optional exception for dealing with your “functional” module, remember?
So far, I’ve told you that your functional module needs to be a normal PowerShell script module.
It lives apart from the DSC resource, meaning you’ll have to distribute it independently. That
might be fine with you - if you’re smart, and you already have a good module-deployment
mechanism in place (like a private NuGet repository or PowerShell Gallery instance), then it’s
no big deal. But you can also include the functional module with the DSC resource module,
since we’re using function-based resources. Following the above folder structure example, /Program
Files/WindowsPowerShell/Modules/MyDSCStuff/MyDSCStuff.psm1 would be the filename
for your “functional” module. MyDSCStuff.psd1 should define that .psm1 as the root module, and
should export whatever functions the module contains. With this approach, the functional module
will still be an operationally independent entity, but it’ll be “packaged” with the DSC resources.
So… with the right folder structure in place, just ZIP it all up. You should be creating MyDSCStuff.zip
(in this example), and when you unZIP it, it should recreate the correct folder structure. And
finally, change the filename to include a version number, such as MyDSCStuff_1.2.zip. Then, create
a checksum file using New-DscChecksum, so that you also have MyDSCStuff_1.2.zip.checksum.
Those two files go to your pull server’s Resources folder, and your nodes will be able to find them
there.
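In script form, the packaging steps might look like this (a sketch; double-check the ZIP layout your pull server version expects, and the destination path assumes the default DSC service module folder):

# Zip the module's contents, naming the file ModuleName_Version.zip
Compress-Archive -Path 'C:\Program Files\WindowsPowerShell\Modules\MyDSCStuff\*' `
                 -DestinationPath 'C:\Deploy\MyDSCStuff_1.2.zip'

# Create MyDSCStuff_1.2.zip.checksum
New-DscChecksum -Path 'C:\Deploy\MyDSCStuff_1.2.zip' -Force

# Copy both files to the pull server's Modules folder
Copy-Item -Path 'C:\Deploy\MyDSCStuff_1.2.*' `
          -Destination "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"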
Triggering a Reboot
All resources run within a PowerShell scope created by the LCM. When it spins up each new scope,
the LCM creates a global variable named $global:DSCMachineStatus. Setting this variable to 1,
from anywhere in your resource’s Set code, will tell the LCM that a reboot is needed. The LCM
configuration’s RebootNodeIfNeeded setting will determine if the node is rebooted immediately or
not.
It’s important to understand that this behavior only works within your Set code - a Get or Test cannot
trigger a reboot. Further, your Set must complete without error (which is how the LCM knows it
was successful).
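In the function-based FileContent example from earlier, that might look like this (a sketch - you’d only set the flag when your change actually requires a reboot):

function Set-TargetResource {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)]
        [string]$Path,

        [Parameter(Mandatory=$True)]
        [string]$DesiredContent
    )

    Set-FileContent -Path $Path -DesiredContent $DesiredContent

    # Tell the LCM that a reboot is needed; RebootNodeIfNeeded controls what happens next
    $global:DSCMachineStatus = 1
}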
Class-Based Custom Resources
Class-based resources were introduced in WMF5. They differ from function-based resources in a few
core ways: the resource’s properties and its Get, Set, and Test code all live in a single class, there’s
no separate schema MOF to maintain, and the folder structure is simpler (no DSCResources subfolder).
If you’ve put most of your code into a standalone script module, as I recommended in the previous
chapter and will elaborate on in the next chapter, then switching from a function-based resource to
a class-based resource is easy. In fact, all I’m going to do in this chapter is “translate” what I did in
the previous chapter.
However, there’s a downside to class-based modules in WMF 5.0, which is that they don’t support
filename-based versioning. This makes them a little problematic in real-world applications, because
you can’t have two versions of a class-based module living side-by-side on nodes or on a pull server.
Filename-based versioning is supported in WMF 5.1.
1 [DscResource()]
2 Class FileContent {
3 [DscProperty(Key)]
4 [string]$Path
5
6 [DscProperty(Mandatory)]
7 [string]$DesiredContent
8
9 [FileContent] Get() {
10        If (Test-Path $this.Path) {
11            $content = Get-Content -Path $this.Path | Out-String
12 } else {
13 $content = $null
14 }
15
16 $this.DesiredContent = $content
17            # $this.Path is already populated from the MOF by the LCM
18 return $this
19
20 }
21
22 [void] Set() {
23        Set-Content -Path $this.Path -Value $this.DesiredContent
24 }
25
26 [bool] Test() {
27 return Test-FileContent -Path $this.path -DesiredContent $this.DesiredContent
28 }
29 }
The Get method, replacing Get-TargetResource, is the one to look at. Notice that, above it, we’re
defining the parameters for this resource. They don’t go into a Param() block, and I have to define
the key property explicitly (which was done in the schema MOF of the function-based resource,
rather than in the function itself). Rather than constructing a hashtable to output, I use the special
$this object. $this has properties which match the DscProperty items defined at the top of the class
- so, in this case, it has Path and DesiredContent properties. I set those, and then return the $this
object. The Get() method is declared as returning an object of the type FileContent, which is the
name of the class I created.
The Set() method returns nothing (the [void] part), and accesses the $this object to get the
input parameters. Here, I just went with the Set-Content command rather than my pointless Set-
FileContent “wrapper” function.
The Test() method returns $True or $False (that is, Boolean, or [bool]), and is simply running the
Test-FileContent function I wrote previously. Again, it uses $this to access the input parameters.
As you can see, if you’re keeping your resources simple in terms of functionality, and relying on
functionality written elsewhere (either in PowerShell commands or functions that you’ve written in
a normal script module), then deciding on a class-based or function-based resource is easy. They’re
very similar, once you really strip down all the code they contain.
Deployment is a little different, though. A class-based resource module is laid out like any ordinary
PowerShell module: /Modules/MyModuleName/MyModuleName.psm1 plus /Modules/MyModuleName/MyModuleName.psd1.
There’s no “DSCResources” subfolder hierarchy, in this case. That means you can’t distribute
your separate, “functional” script module in quite the same way. In this case, you’d simply
drop it into the same folder, perhaps naming it something like MyModuleName-Support.psm1 or
something. Your class-based module (MyModuleName.psm1) would need to load it explicitly, or
your manifest (the .psd1 file) would need to specify it as an additional module to load along with
the MyModuleName.psm1 root module.
Best Practices for Resource Design
This chapter is intended to put some context around the previous two. While they focused on how
to write resources, this chapter will focus on what to put in a resource. As you’ve seen, the physical
act of creating a resource ain’t hard. What’s hard is deciding what code to put into it.
Now, I should point out that Microsoft’s resources aren’t necessarily a universally great example
of good resources. Many of the original Microsoft “DSC Resource Kit” resources were cranked out
with great rapidity, before “practices” really came into question. And while the practices I’m going
to present here are the ones I’ve developed, I can toss out a litany of names: Murawski. Helmick.
Snover. Gates. Clinton. Eastwood. All of those names have either endorsed my design ideas, or have
no idea whatsoever that this stuff even exists. Heh.
For Example
Suppose you have a Sales Order Entry application, and you want to instrument it for DSC
management. The first thing to do is start thinking about your nouns. What are the elements
of the system that you will need to administer? Perhaps you have users which will need to be
maintained. Perhaps there are servers that need to be configured, with each server having multiple
configuration elements. These nouns, these “things you configure,” become the nouns of your
commands. Now, what do you do with them? You add users. You delete users. You set the properties
of a user. Right there, you’ve got three commands: New-AppUser, Remove-AppUser, Set-AppUser.
So you write those commands. You test them.
When the time comes to use all of that in a DSC resource, it should be easy. Your “AppUser” resource,
for example, simply needs to implement its Test/Set/Get functions or methods, and in those it simply
calls the commands you’ve already written. The resource isn’t doing anything novel, it’s just reusing
commands that you’ve already written and tested. This is exactly how Microsoft actually approaches
most of their resources. The ADUser resource doesn’t contain code to talk to Active Directory; it
just calls New-ADUser or Remove-ADUser or Set-ADUser, commands which already existed.
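As a sketch, the Test function for an AppUser resource might be nothing more than this (Get-AppUser is hypothetical - the same sort of command you’d have written and tested alongside the others):

function Test-TargetResource {
    [CmdletBinding()]
    [OutputType([System.Boolean])]
    Param(
        [Parameter(Mandatory=$True)]
        [string]$UserName,

        [string]$Ensure = 'Present'
    )

    # All the real logic lives in the already-tested Get-AppUser command
    $user = Get-AppUser -UserName $UserName

    if ($Ensure -eq 'Present') {
        Write ($null -ne $user)
    } else {
        Write ($null -eq $user)
    }
}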
Then, your DSC configurations can simply start by ensuring all dependent modules are installed
from your internal repository (a private PowerShell Gallery or NuGet feed, say). Anything relying on
a module should contain a DependsOn, to make sure
it runs after the dependent module is installed. This isn’t difficult, and it’s a more structured and
reliable way to run your infrastructure.
The Script Resource
One of the native resources in DSC is the Script resource. This is intended as a sort of catch-all,
quick-and-dirty way of writing your own custom resource, without all the bother of building out
a class- or function-based resource. It’s worth noting that Script resources can be a great way of
prototyping an idea, but they’re terribly hard to maintain over the long haul. The strong preference
is to use class- or function-based resources rather than Script resources. But… there are a couple of
neat things you can do with them.
The Basics
Take a look at this example (for your convenience, this is available in
https://github.com/concentrateddon/TheDSCBookCode ):
1 Configuration TestUsing {
2
3 param (
4 [Parameter()]
5 [ValidateNotNull()]
6 [PSCredential]$Credential = (Get-Credential -Credential Administrator)
7 )
8
9 node $AllNodes.Where({$_.Role -eq 'DC'}).NodeName {
10
11 $DomainCredential = New-Object `
12 -TypeName System.Management.Automation.PSCredential `
13 -ArgumentList ("$($node.DomainName)\$($Credential.UserName)", `
14 $Credential.Password)
15
16 script CreateNewGpo {
17 Credential = $DomainCredential
18 TestScript = {
19 if ((get-gpo -name "Test GPO" `
20 -domain $Using:Node.DomainName `
21 -ErrorAction SilentlyContinue) -eq $Null)
22 {
23 return $False
24 } else {
25 return $True
26 }
27 } #Test
28
29 SetScript = {
30 new-gpo -name "Test GPO" -Domain $Using:Node.DomainName
31 } #Set
32
33 GetScript = {
34 $GPO= (get-gpo -name "Test GPO" -Domain $Using:Node.DomainName)
35 return @{Result = $($GPO.DisplayName)}
36
37 } #Get
38 } #Script
39 } #Node
40 } #Configuration
41
42 TestUsing -OutputPath "C:\Using Example" -ConfigurationData ".\mj.psd1"
That’ll create a simple MOF that uses a Script resource. Note that the Script setting has three
properties: a GetScript, a TestScript, and a SetScript. These don’t accept any input parameters, and
they’re run by the LCM just like the Get/Test/Set portions of a custom resource. The TestScript is
expected to return $True or $False; the GetScript is expected to return a hash table with “Result” as
the Key and a string value as the Value, and the SetScript is expected to set the desired configuration.
Cool Tricks
Notice the $Using trick in the example shown above? To make sense of that, let’s first look at the
ConfigurationData we fed to the configuration script. It’s included in a separate file, and looks like
this:
1 @{
2 AllNodes = @(
3 @{
4 NodeName = '*'
5
6 # Common networking
7 InterfaceAlias = 'Ethernet'
8 DefaultGateway = '192.168.2.1'
9 SubnetMask = 24
10 AddressFamily = 'IPv4'
11 DnsServerAddress = '192.168.2.11'
12
13 # Domain and Domain Controller information
14 DomainName = "Company.Pri"
15 DomainDN = "DC=Company,DC=Pri"
16 DCDatabasePath = "C:\NTDS"
17 DCLogPath = "C:\NTDS"
18 SysvolPath = "C:\Sysvol"
19 PSDscAllowPlainTextPassword = $true
20 PSDscAllowDomainUser = $true
21
22 # DHCP Server Data
23 DHCPName = 'LabNet'
24 DHCPIPStartRange = '192.168.2.200'
25 DHCPIPEndRange = '192.168.2.250'
26 DHCPSubnetMask = '255.255.255.0'
27 DHCPState = 'Active'
28 DHCPAddressFamily = 'IPv4'
29 DHCPLeaseDuration = '00:08:00'
30 DHCPScopeID = '192.168.2.0'
31 DHCPDnsServerIPAddress = '192.168.2.11'
32 DHCPRouter = '192.168.2.1'
33
34 # ADCS Certificate Services information
35 CACN = 'Company.Pri'
36 CADNSuffix = "C=US,L=Phoenix,S=Arizona,O=Company"
37 CADatabasePath = "C:\windows\system32\CertLog"
38 CALogPath = "C:\CA_Logs"
39 ADCSCAType = 'EnterpriseRootCA'
40 ADCSCryptoProviderName = 'RSA#Microsoft Software Key Storage Provider'
41 ADCSHashAlgorithmName = 'SHA256'
42 ADCSKeyLength = 2048
43 ADCSValidityPeriod = 'Years'
44 ADCSValidityPeriodUnits = 2
45 # Lability default node settings
46 Lability_SwitchName = 'LabNet'
47 Lability_ProcessorCount = 1
48 Lability_StartupMemory = 1GB
49 Lability_Media = '2016TP5_x64_Standard_EN' # Can be Core,Win10,2012R2,na\
50 no
51 # 2016TP5_x64_Standard_Core_EN
52 # WIN10_x64_Enterprise_EN_Eval
53 }
54 @{
55 NodeName = 'DC'
56 IPAddress = '192.168.2.11'
57 Role = @('DC', 'DHCP', 'ADCS')
58 Lability_BootOrder = 10
59 Lability_BootDelay = 60 # Number of seconds to delay before others
60 Lability_timeZone = 'US Mountain Standard Time' #[System.TimeZoneInfo]::\
61 GetSystemTimeZones()
62 }
63 <#
64 @{
65 NodeName = 'S1'
66 IPAddress = '192.168.2.50'
67 Role = @('DomainJoin', 'Web')
68 Lability_BootOrder = 20
69 Lability_timeZone = 'US Mountain Standard Time' #[System.TimeZoneInfo]::\
70 GetSystemTimeZones()
71 }
72 @{
73 NodeName = 'Client'
74 IPAddress = '192.168.2.100'
75 Role = 'DomainJoin'
76 Lability_ProcessorCount = 2
77 Lability_StartupMemory = 2GB
78 Lability_Media = 'WIN10_x64_Enterprise_EN_Eval'
79 Lability_BootOrder = 20
80 Lability_timeZone = 'US Mountain Standard Time' #[System.TimeZoneInfo]::\
81 GetSystemTimeZones()
82 }
83 #>
84
85 );
86 NonNodeData = @{
87 Lability = @{
88 # EnvironmentPrefix = 'PS-GUI-' # this will prefix the VM names \
89
90 Media = @(); # Custom media additions that are different than the suppli\
91 ed defaults (media.json)
92 Network = @( # Virtual switch in Hyper-V
93 @{ Name = 'LabNet'; Type = 'Internal'; NetAdapterName = 'Ethernet'; \
94 AllowManagementOS = $true;}
95 );
96 DSCResource = @(
97 ## Download published version from the PowerShell Gallery or Github
Honestly, the important bit there is the DomainName property under the NodeName = ‘*’ section.
Because DomainName applies to all Nodes (the asterisk acts as a wildcard), each node that the
configuration script processes will have a DomainName property applied.
Now go back and look at the $Using tricks in the configuration script. See how it’s referring
to $Using:Node.DomainName? We’re telling the Script resource to grab the configuration data
(represented by Node) and then get its DomainName property. In theory, this should return
“Company.Pri” each time, because that’s what our ConfigurationData block has for DomainName.
But keep something in mind: When the configuration script runs, we’re passing it the
ConfigurationData block, right? So the computer running the configuration knows what the
ConfigurationData is. The MOF it produces will contain the literal contents of the TestScript, SetScript, and
GetScript blocks. The code within those blocks is not executed or parsed when the MOF is produced!
Those three blocks aren’t executed until the MOF is sent to a node, and that node’s LCM loads
up the MOF and begins a consistency check. So how the heck does the node’s LCM know what the
configuration data is? The answer is to look at the MOF generated by our configuration. This won’t
be pretty, and you’re actually invited to run the code yourself to get the resulting MOF. It’s actually
almost impossible to reproduce the MOF in this book, in fact, because it’s so wonky-looking.
What you’ll find in the MOF, if you look at it in Notepad or something (keep in mind it’s in the
ZIP file, available as a book “extra” in LeanPub), is that each script block - GetScript, TestScript,
and SetScript - has a serialized (that is, translated to XML) version of the node’s ConfigurationData.
Keep in mind that GetScript, TestScript, and SetScript are all run independently, in separate
PowerShell processes. That means the three of them don’t “share” any data between them, which
is why the ConfigurationData gets serialized into each block - meaning, the MOF contains three
copies of the ConfigurationData. So when GetScript is run, for example, the LCM deserializes the
ConfigurationData back into an object, and makes it accessible via $using.Node. In fact, right in the
MOF, you can see this happen:
1 ResourceID = "[Script]CreateNewGpo";
2 GetScript = "$Node = [System.Management.Automation.PSSerializer]::Deserialize(
Basically, when the MOF is created, PowerShell tacks on some extra code to the start of each script
block, recreating the $Node variable. $Using.Node becomes a way to refer to $Node, which turns
out to contain the ConfigurationData that was used to build the MOF. This is a cool way to pass
information into the MOF, and then use that information during a consistency check on the node
itself.
Part 6: Advanced Stuff
This is kind of our catch-all chapter for “extra information” that you may or may not find yourself
needing. We’ll touch on troubleshooting, leading-edge stuff, some under-the-hood stuff, and stuff
stuff. Note that many of these chapters will be under kind of constant expansion and revision, so if
you purchased this book on LeanPub, make sure you’re opted into “new release” notifications via
email!
Reporting
Sitting here writing this in July 2016, I have to make sure you understand that the default Report
Server ain’t much to look at. In fact, at PowerShell + DevOps Global Summit 2016, PowerShell team
leads made it clear that the Report Server was only ever intended as a sample of what you could do.
Azure, as I’ve already pointed out, takes reporting a lot further, and is a strong reason to use Azure
Automation as your pull server. Maybe all that’ll change someday, and maybe not. For now, my goal
is to help you understand what the default server is and does. Note that this chapter only applies to
WMF v5; the “Compliance Server” in v4 doesn’t work the same way, and has been discontinued.
1 $requestUri = "https://my-report-server/PSDSCReportServer.svc/Nodes(AgentId='12345')\
2 /Reports"
3 $request = Invoke-WebRequest -Uri $requestUri -ContentType "application/json;odata=\
4 minimalmetadata;streaming=true;charset=utf-8" `
5 -UseBasicParsing -Headers @{Accept = "application/json";ProtocolVersi\
6 on = "2.0"} `
7 -ErrorAction SilentlyContinue -ErrorVariable ev
That AgentId is the tricky bit. I’ve shown it here as 12345, which it will never be. It’s a GUID, and
it’s configured as part of the LCM. Each LCM makes up its own AgentId. The result of this - stored
in $request in the above example - is a JSON object.
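To actually work with the data, convert the JSON and grab its value array - that’s what the $report variable below refers to (a sketch):

$report = ($request.Content | ConvertFrom-Json).value
$report | Format-List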
In this case, $report will often contain multiple objects, since the Report Server keeps multiple check-
ins for each node. Each of those objects will look something like this:
1 JobId : 019dfbe5-f99f-11e5-80c6-001dd8b8065c
2 OperationType : Consistency
3 RefreshMode : Pull
4 Status : Success
5 ReportFormatVersion : 2.0
6 ConfigurationVersion : 2.0.0
7 StartTime : 04/03/2016 06:21:43
8 EndTime : 04/03/2016 06:22:04
9 RebootRequested : False
10 Errors : {}
11 StatusData : {{"StartDate":"2016-04-03T06:21:43.7220000-07:00","IPV6Addres\
12 ses":["2001:4898:d8:f2f2:852b:b255:b071:283b","fe80::852b:b255:b071
13 :283b%12","::2000:0:0:0","::1","::2000:0:0:0"],"DurationInSec\
14 onds":"21","JobID":"{019DFBE5-F99F-11E5-80C6-001DD8B8065C}","Curren
15 tChecksum":"A7797571CB9C3AF4D74C39A0FDA11DAF33273349E11823855\
16 28FFC1E47151F7F","MetaData":"Author: configAuthor; Name:
17 Sample_ArchiveFirewall; Version: 2.0.0; GenerationDate: 04/01\
18 /2016 15:23:30; GenerationHost: CONTOSO-PullSrv;","RebootRequested":"False
19 ","Status":"Success","IPV4Addresses":["10.240.179.151","127.0\
20 .0.1"],"LCMVersion":"2.0","ResourcesNotInDesiredState":[{"SourceInf
21 o":"C:\\ReportTest\\Sample_xFirewall_AddFirewallRule.ps1::23:\
22 :9::xFirewall","ModuleName":"xNetworking","DurationInSeconds":"8.785",
23 "InstanceName":"Firewall","StartDate":"2016-04-03T06:21:56.46\
24 50000-07:00","ResourceName":"xFirewall","ModuleVersion":"2.7.0.0","
25 RebootRequested":"False","ResourceId":"[xFirewall]Firewall","\
26 ConfigurationName":"Sample_ArchiveFirewall","InDesiredState":"False
27 "}],"NumberOfResources":"2","Type":"Consistency","HostName":"\
28 CONTOSO-PULLCLI","ResourcesInDesiredState":[{"SourceInfo":"C:\\ReportTest\\Sample_xF\
29 irewall_AddFirewallRule.ps1::16::9::Archive","ModuleName":"PSDesiredStateConfigurati\
30 on","DurationInSeconds":"1.848",
31 "InstanceName":"ArchiveExample","StartDate":"2016-04-03T06:21\
32 :56.4650000-07:00","ResourceName":"Archive","ModuleVersion":"1.1","
33 RebootRequested":"False","ResourceId":"[Archive]ArchiveExampl\
34 e","ConfigurationName":"Sample_ArchiveFirewall","InDesiredState":"T
35 rue"}],"MACAddresses":["00-1D-D8-B8-06-5C","00-00-00-00-00-00\
36 -00-E0"],"MetaConfiguration":{"AgentId":"52DA826D-00DE-4166-8ACB-73F2B46A7E00",
37 "ConfigurationDownloadManagers":[{"SourceInfo":"C:\\ReportTes\
38 t\\LCMConfig.ps1::14::9::ConfigurationRepositoryWeb","A
39 llowUnsecureConnection":"True","ServerURL":"http://CONTOSO-Pu\
40 llSrv:8080/PSDSCPullServer.svc","RegistrationKey":"","ResourceId":"[Config
41 urationRepositoryWeb]CONTOSO-PullSrv","ConfigurationNames":["\
42 ClientConfig"]}],"ActionAfterReboot":"ContinueConfiguration","LCMCo
43 mpatibleVersions":["1.0","2.0"],"LCMState":"Idle","ResourceMo\
44 duleManagers":[],"ReportManagers":[{"AllowUnsecureConnection":"True
45 ","RegistrationKey":"","ServerURL":"http://CONTOSO-PullSrv:80\
46 80/PSDSCPullServer.svc","ResourceId":"[ReportServerWeb]CONTOSO-PullSrv","S
47 ourceInfo":"C:\\ReportTest\\LCMConfig.ps1::24::9::ReportServe\
48 rWeb"}],"StatusRetentionTimeInDays":"10","LCMVersion":"2.0","Config
49 urationMode":"ApplyAndMonitor","RefreshFrequencyMins":"30","R\
50 ebootNodeIfNeeded":"True","RefreshMode":"Pull","DebugMode":["NONE"]
51 ,"LCMStateDetail":"","AllowModuleOverwrite":"False","Configur\
52 ationModeFrequencyMins":"15"},"Locale":"en-US","Mode":"Pull"}}
53 AdditionalData : {}
So you can see, there’s some basic information before you get to the monster StatusData property,
which includes a lot of detail about the node. It also contains a dump of all the configuration items
that the LCM processed, so with a large configuration, this will be a LOT of data. The StatusData is
itself a JSON object, so you could run it through ConvertFrom-Json to break that data out and make
it a bit more readable. You’ll also notice a HostName property buried in there, which is how you can
start to map host names to AgentId GUIDs. Some things to note:
• You get a StartDate property that tells you when the consistency check ran. This allows you to
find the most recent one.
• You get a DurationInSeconds property that tells you how long the check ran.
• You can tell if there’s a reboot pending.
• You can see the Status, which will be Success if the LCM finished without error; a Success status
does not necessarily mean the node is compliant with its configuration.
• ResourcesNotInDesiredState tells you which configuration settings aren’t compliant. In this
example, the Firewall resource isn’t compliant. This can’t tell you why, but it does tell you
where to start investigating. ResourcesNotInDesiredState (and the companion ResourcesInDe-
siredState) are also JSON objects, and you can convert them to read them more easily.
The AgentId
The AgentId is perhaps the most vexing part of this setup. You’ll notice, in the chapter on Configuring
the LCM, that my LCM didn’t display an AgentId. That’s because the AgentId is dynamically
generated the first time the LCM registers with a pull server. From then on, that AgentId is used to
identify the node to the pull server and any Report Server that you’ve configured.
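If you just want to see a node’s AgentId, it’s right on the LCM once the node has registered (a sketch, run on the node itself):

(Get-DscLocalConfigurationManager).AgentId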
The problem is that most of us want to know the node’s host name. I’ve seen a few novel approaches
to solving this. One was a custom DSC resource which took the local AgentID, and then stored that
in a “spare” attribute of the computer object in Active Directory. A similar approach registered DNS
records, enabling you to query a host name to get its AgentId, or vice-versa. A third did basically
the same thing in a custom database set up for that purpose.
I think a broader weakness just comes from the fact that the Report Server is a “sample.” For example,
you can’t easily query it for just non-compliant nodes, or for nodes that need a reboot. You can’t
query for host names. Again, the Report Server included “in the box” just wasn’t designed to be a
robust tool. Until someone writes something better, though, it’s what we’ve got to work with.
Security and DSC
There are a lot of security concerns with DSC. Perhaps the first that jumps to mind is, “our
configurations contain a lot of important and proprietary information!” True. Also true is, “if
someone can modify our configurations, then they can control our environment.” So we’ll try and
address both of these concerns.
A Word on Certificates
Certificates play a strong role in DSC security. If you work in an organization that doesn’t yet have a
Public Key Infrastructure (PKI) set up, then you need to start there. Seriously, you can’t do this stuff
without PKI. Period. Do not even think about using self-signed certificates, because they are Satan’s
handmaidens. Self-signed certificates will get you a false sense of security, but they will not, in fact,
provide you with any protection. They are evil, misleading, wrong, and probably cause cancer.
DSC uses, for the most part, three types of certificates. Now, while all certificates are technologically
the same in terms of the math and whatnot, certificates have to be “marked” for specific uses. These
“use markings” indicate how much diligence was put into the certificate’s issuance. A certificate
used for encrypting email, for example, doesn’t usually require much diligence in terms of verifying
someone’s identity. You just send the certificate to the mailbox it’s intended to secure, and assume
that whoever has access to that mailbox should have access. A code-signing certificate, on the other
hand, requires a lot more diligence. You want the certificate indisputably to identify the developer
(usually a company, not a person), so that if the code turns out to be malicious, you know who to
go kill.
DSC uses:
• SSL certificates. These require moderately high diligence not because they provide encryption,
but because they identify a server that is handing out configurations. You do not want your
nodes connecting to some random web server to obtain their configurations! The SSL certificate
ensures that the node is connecting to the right machine. For this reason, you should also avoid
“wildcard” certificates - you don’t want nodes connecting to “just any old” server, but rather
to the specific server specified in their configurations.
• Client certificates. This isn’t actually used by DSC per se, but you can certainly configure
your web server to require client certificate authentication. This prevents unknown nodes
from trying to grab your configuration data. Again - this isn’t something you set up as part of
DSC; client certificates would be issued to the node (perhaps through Active Directory auto-
enrollment), and IIS (or whatever Web server you’re using) would be configured to require the
certificate.
• Document certificates. These are used to encrypt sensitive information like passwords in a
MOF file. As of WMF5 and later, this is basically just a normal email encryption certificate, but
it explicitly needs to be “marked” as usable for Document Encryption. This can be tough: in
Windows Server 2012 R2, for example, ADCS doesn’t provide a built-in template for this kind
of certificate, so you’ll have to make your own.
So: SSL certificates on the pull server(s). Perhaps client certificates on the nodes, if you’re doing that.
Definitely document certificates for encrypting MOF data. No self-signed certificates. Got it?
So when your configuration needs to include a credential, you have two options:
1. Flip the switch that tells DSC to allow plain-text passwords. This is blindingly stupid, and I’m
not even going to show you how. It’s on the Internet, if you’re bound and determined to do the
wrong thing.
2. Encrypt the password.
Option 2 is what we’ll be pursuing. This requires, as I’ve outlined above, a certificate - and again,
from WMF5 onward, the certificate must be “marked” for the “Document Encryption” purpose.
There are a bunch of ways to generate such certificates, and in ADCS in Windows Server 2012 R2
onward, you can pretty easily create a certificate template that’ll do this. There’s a good overview
of this encryption stuff⁸ online; I’m going to walk through the basics and hit on some of the fine
points.
⁸https://msdn.microsoft.com/en-us/powershell/dsc/securemof
1 function Get-LocalEncryptionCertificateThumbprint
2 {
3 (dir Cert:\LocalMachine\my) | %{
4 # Verify the certificate is for Encryption and valid
5 if ($_.PrivateKey.KeyExchangeAlgorithm -and $_.Verify())
6 {
7 return $_.Thumbprint
8 }
9 }
10 }
1 $ConfigData= @{
2 AllNodes = @(
3 @{
4 NodeName = "targetNode"
5 CertificateFile = "C:\publicKeys\targetNode.cer"
6 Thumbprint = "AC23EA3A9E291A75757A556D0B71CBBF8C4F6FD8"
7 };
8 );
9 }
This Configuration Data block defines the node (“targetNode” is the computer name, here), the
location of the certificate file (public key only) on the authoring computer, and the certificate
thumbprint. Obviously, you can have a ton of these, one for each node. And yes, it really should be
one for each node - sharing certificates is like sharing contraceptives. Ew.
1 configuration CredentialEncryptionExample
2 {
3 param(
4 [Parameter(Mandatory=$true)]
5 [ValidateNotNullorEmpty()]
6 [PsCredential] $credential
7 )
8
9
10 Node $AllNodes.NodeName
11 {
12 File exampleFile
13 {
14 SourcePath = "\\Server\share\path\file.ext"
15 DestinationPath = "C:\destinationPath"
16 Credential = $credential
17 }
18 }
19 }
See where we created the credential? PowerShell will prompt for it when you run this, and you
can see where it’s used ($credential) in the configuration. You need to run this and pass in the
Configuration Data block:
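That looks something like this (a sketch; the output path is arbitrary):

CredentialEncryptionExample -Credential (Get-Credential) `
                            -ConfigurationData $ConfigData `
                            -OutputPath .\CredentialEncryptionExample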
So, we ran the configuration (I’d normally have this at the end of the script containing the
configuration), and passed in the $ConfigData block I created earlier (which you might read from
a separate file). This will generate a MOF for each node in $ConfigData. Because the $ConfigData
has the local (on the authoring system) path for the certificate public key, and because it also has
the thumbprint, it’ll all just magically work.
Sharing Certificates
We made a fairly strong statement earlier about not sharing certificates, and we hold to this
statement, but there are some good reasons for considering sharing certificates. Maybe you have
a web server farm, and all of those servers are to be configured identically, and you’re using
Configuration Names for the configuration. You have one MOF file to configure all the servers
in the web farm. You can’t deploy that configuration to multiple nodes that each have a different
certificate. In that case, it may make sense to use one certificate for the web server configuration
and deploy that certificate to all the web servers.
Keep in mind that you could choose to go all the way to the opposite extreme and just use a
single certificate for all servers in your organization, and that’s a design decision with some serious
tradeoffs. On one hand, it makes certificate management really easy, but on the other hand, if your
private key gets compromised or the certificate expires without you knowing, you’re probably going
to have a really bad day.
Certificate Management
So… how do you manage all these deployed certificates? Well… that’s still an evolving story.
Honestly. A bunch of us at DevOps Camp (an event I run in August in Las Vegas for about 20 folks;
keep an eye on DonJones.com for dates) have been fooling with ideas for automating installation,
dealing with expirations and renewals, and so on, especially in non-domain environments. As
practices emerge and are refined for this, I’ll loop back and include that information in this book.
PSDSCRunAsCredential
This is a universal property that, as of WMF 5.0, can be used with any resource. When a configuration
is run, the LCM will launch the resource using the provided credential. You should follow the
guidelines above for encrypting those credentials in the MOF!
Note that in WMF 5.0, you can only use PSDscRunAsCredential with “real” resources - that is,
ones that are an actual compiled, class-based, or function-based resource module. You cannot use
PSDscRunAsCredential with a composite resource. However, that was changed in WMF 5.1, and you can
pass a PSDSCRunAsCredential to a composite resource as well. The entire composite resource will
run using the credential you provide. However, if the composite resource itself specifies alternate
credentials, then the LCM will explode and your configuration won’t run.
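Using it looks like any other property on a resource (a sketch; the registry key is just an example, and $credential would come from a configuration parameter - encrypted in the MOF as described above):

Registry HKCUExample {
    Key                  = 'HKEY_CURRENT_USER\Software\Example'
    ValueName            = 'Configured'
    ValueData            = '1'
    Ensure               = 'Present'
    PsDscRunAsCredential = $credential
}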
Digital Signing
This feature was added in WMF 5.1, and allows you to digitally sign configuration (MOF) files. If
a node pulls a configuration from a pull server, and the signature isn’t valid, the LCM will refuse
to run the configuration. This is a very good idea to implement, because it helps protect against a
compromised pull server. I know, you think, “we have firewalls, nobody can get to our pull server,”
to which I say, “defense in depth, man,” and, “that’s what the US Office of Personnel Management
thought until June 2016.” Sign MOFs using Set-AuthenticodeSignature on a machine running WMF
5.1 (the cmdlet in previous versions can’t sign MOFs).
Modules can also be signed. You have to start by creating a catalog for the module, which is
accomplished by the New-FileCatalog cmdlet. Sign the catalog file using Set-AuthenticodeSignature
- again, this all works only in WMF 5.1 and later - and put the catalog file inside the module folder.
If the module is named OurTools, then the catalog filename would be OurTools.cat, and it would
live in the OurTools module folder.
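As a sketch (assuming you already have a code-signing certificate in the local store), the signing steps look like this:

# Grab a code-signing certificate
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1

# Sign a configuration MOF (WMF 5.1 or later)
Set-AuthenticodeSignature -FilePath .\localhost.mof -Certificate $cert

# Catalog and sign a module named OurTools
$module = 'C:\Program Files\WindowsPowerShell\Modules\OurTools'
New-FileCatalog -Path $module -CatalogFilePath "$module\OurTools.cat" -CatalogVersion 2.0
Set-AuthenticodeSignature -FilePath "$module\OurTools.cat" -Certificate $cert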
Then you have to tell the LCM to expect signatures - see the chapter on configuring the LCM for
that. Signatures are validated on a per-pull-server basis, and only when the LCM downloads files;
it does not re-validate signatures when performing a consistency check. You can also configure the
LCM to expect signatures on pushed configurations.
DSC in Azure
In the Introduction, and in the previous chapter, I’ve made a big deal out of the fact that DSC is a
technology, and that using it can require a lot of manual effort because, right now, we don’t have
a lot of tools built atop the technology. But lurking inside Azure Automation is a preview of how
those tools might look.
By the way, this isn’t a sales pitch for Azure; it’s a sales pitch for tools, and how they can make using
a complex technology a lot easier.
In essence, what Azure Automation provides (for the purposes of DSC) is a Pull Server in the cloud.
It isn’t the standard Pull Server that you can install on-prem; with access to the Pull Server source
code, the Azure team has been able to customize their offering quite a bit. But, at the end of the day,
it’s a Pull Server. You can copy your configuration MOFs to it, and you can drop your ZIPped-up DSC
resource modules on it. Your nodes - whether they’re running in Azure, AWS, on-prem, or someplace
else - can be pointed to this Pull Server. They provide some nice authentication options to make sure
only your machines are pulling configurations, too.
You can also author your configurations right in the cloud, run them, and generate MOFs right
there. And Azure’s Pull Server supports the Reporting Server function - only they’ve enhanced it
with actual reports and dashboards, so you can use an Azure-based console to track the compliance
of your nodes (in-cloud and on-prem), generate reports, and so on.
Azure’s Pull Server runs only on HTTPS, which means your nodes can be assured that they’re
connecting to your pull server, and not some imposter.
Azure provides a lot of integration points, especially for Azure-based VMs, to make node enrollment
easier. For example, the Azure Automation DSC console lets you configure an Azure-based VM’s
LCM from a graphical interface, enabling you to get machines pointed to the Pull Server more easily.
That configuration can also be “injected” into new VMs, enrolling them automatically upon creation.
Nodes are onboarded securely, which means Azure equips them with a client certificate. This helps
ensure that only your nodes can request configurations from your Pull Server.
Azure also provides tooling for some of the more complex and difficult aspects of DSC, such as
managing credentials in MOFs. And keep in mind, you can use Azure DSC even if you don’t have
any VMs in Azure. It’s just a fancy Pull Server, although you do have different onboarding steps for
nodes that aren’t running in Azure VMs.
DSC on Linux
Microsoft offers a free DSC client - that is, a Local Configuration Manager, or LCM - for Linux.
It’s important to know that this is not the work of the PowerShell team, but rather of Microsoft’s
excellent Unix Tools team. Here’s a great getting started guide⁹, and you can access the open source
code on GitHub¹⁰.
Authoring configurations is just the same, although the available resources are obviously different.
Bundled in the LCM are the following resources:
• nxArchive
• nxAvailableUpdates
• nxComputer
• nxDNSServerAddress
• nxEnvironment
• nxFile
• nxFileInventory
• nxFileLine
• nxFirewall
• nxGroup
• nxIPAddress
• nxLog
• nxMySqlDatabase
• nxMySqlGrant
• nxMySqlUser
• nxOMSAgent
• nxOMSCustomLog
• nxOMSKeyMgmt
• nxOMSPlugin
• nxOMSSyslog
• nxPackage
• nxScript
• nxService
• nxSshAuthorizedKeys
• nxUser
⁹https://msdn.microsoft.com/en-us/powershell/dsc/lnxgettingstarted
¹⁰https://github.com/Microsoft/PowerShell-DSC-for-Linux
Most are written in C. It’s worth noting that DSC on Azure - that is, Azure Automation - also
supports Linux by means of this LCM client. Writing your own modules for the Linux LCM is a bit
more complex, as you can’t necessarily just write a PowerShell script (PowerShell didn’t exist on
Linux when this LCM was written, so they assumed you’d be coding in C).
The team also provides MOFs for their resources that can be installed on Windows, so you can author
(getting IntelliSense and whatnot in the ISE) Linux-targeted configurations on Windows machines.
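For example, here's a minimal sketch of a Linux-targeted configuration authored on Windows (the node name and file contents are invented; it assumes the nx resource module is installed on the authoring machine):

Configuration LinuxMotd {
    Import-DscResource -ModuleName nx

    Node 'linuxbox01' {
        nxFile Motd {
            DestinationPath = '/etc/motd'
            Contents        = "This box is managed by DSC`n"
            Ensure          = 'Present'
            Type            = 'File'
        }
    }
}

LinuxMotd -OutputPath C:\DSC\LinuxMotd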
Troubleshooting and Debugging
Getting Eyes-On
The key thing with troubleshooting and debugging is simply figuring out what’s happening. There
are a number of ways in which you can do that.
Interactive Push
Get a copy of a node's MOF, switch the node's LCM to Push mode, and push the MOF using
Start-DscConfiguration -Wait -Verbose (adding the -Path parameter to point at the folder containing the MOF, of course). You'll
get some of the best diagnostic information that way, if you can take this temporary troubleshooting
step.
You’d run this right on the node, and you’ll get a dump of every resource that’s reporting non-
compliance, along with error information.
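Another window into what's happening is the DSC event log. Here's a quick sketch of pulling recent entries with Get-WinEvent (the log name shown is the one shipped with WMF 5; run it in an elevated session):

Get-WinEvent -LogName 'Microsoft-Windows-DSC/Operational' -MaxEvents 20 |
    Format-Table TimeCreated, Id, Message -Wrap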
Substitute “Analytic” or “Debug” for “Operational” to see those logs; the Debug log tends to hold
the most detailed information. You can also access these using the Event Viewer GUI tool, although
you’ll need to right-click to make the Debug and Analytic logs visible.
Each log includes a Job ID, which corresponds to a single DSC operation. Once you query the
Operational log, for example, you can start to see the Job ID from the most recent consistency
check. You can then limit further queries to just that job ID, if you want. You’ll find some helpful
commands in the xDscDiagnostics module (which is available in PowerShell Gallery, so use Install-
Module to get it). One command is Get-xDscOperation, which includes a -Newest parameter that
can grab just the most recent number of operations. Another command is Trace-xDscOperation,
which can consolidate events for a given operation from all three logs.
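A sketch of using them (the SequenceId value is whatever Get-xDscOperation reported for the run you care about):

Install-Module xDscDiagnostics -Scope CurrentUser

# The five most recent DSC operations, with success/failure status
Get-xDscOperation -Newest 5

# Consolidated events (Operational, Analytic, Debug) for one specific operation
Trace-xDscOperation -SequenceId 3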
Resource Debugging
This’ll become a full chapter at some point, but keep in mind the Enable-DscDebug and Disable-
DscDebug commands. These enable debugging of DSC resource modules, which enables you to
force resources to “break” into debug mode when the consistency check runs them. This is a neat
way to “step through” the resource module’s code and see what’s happening. You’ll usually add the
-BreakAll parameter when enabling; see the command’s documentation for more information.
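In rough terms, the workflow looks like this (a sketch only; the exact attach instructions appear in the verbose output):

Enable-DscDebug -BreakAll

# Kick off a consistency check; the verbose output tells you which
# Enter-PSHostProcess and Debug-Runspace commands to run to attach a debugger
Start-DscConfiguration -UseExisting -Wait -Verbose

# When you're done stepping through resource code:
Disable-DscDebug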
Stopping a Hung LCM
Sometimes the LCM gets stuck mid-operation. Your first move is to ask it to stop, using Stop-DscConfiguration -Force;
if it's gone off the deep end - usually the result of a resource module going off and doing some
long-running task, improperly expecting interactive input, or something else - then the LCM might
not respond to your stop request. In that case, try this on the offending node:
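Something along these lines (a sketch; the MOF store path and service names are the defaults I'd expect on a standard install):

# Remove the MOF the LCM would process on its next consistency check
Remove-Item C:\Windows\System32\Configuration\pending.mof -Force

# Second bit: stop the WMI service, then restart WinRM
Stop-Service Winmgmt -Force
Restart-Service WinRM -Force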
This will remove the MOF that the LCM would have processed on its next run; you may also need
to delete current.mof and backup.mof from the same location. You can also try running Remove-
DscConfiguration, which does essentially the same thing.
The second bit - stopping the WMI service and restarting Windows Remote Management (WinRM)
- can be useful, especially on older systems. Everything in DSC actually runs out of Windows
Management Instrumentation (WMI), and in cases where it’s the underlying WMI service holding
things up, killing the process can get the LCM to unstick. You may need to restart the WMI service
manually after doing this (or reboot the node). Similarly, because DSC uses WinRM to communicate,
sometimes it’s those communications that hang things up. Restarting WinRM (or killing the process
and then starting the service) can unstick things. Thanks to Peter Burkholder for suggesting this
section!
Self-Modifying Configurations
This whole chapter grew out of a “Stupid DSC Tricks” session I did at a couple of conferences,
including PowerShell + DevOps Global Summit 2016 (http://powershellsummit.org). The idea was to
try and push the boundary on DSC a little bit. One goal in particular was to create a configuration
that could download pre-requisites for the actual, intended configuration.
Here’s the scenario: you want to push a configuration down to a node. Fine. But perhaps the
configuration will contain credentials, which you’ll be encrypting of course, and so you first need
to get the certificate installed locally. Or perhaps you need to pre-stage some pre-requisite modules
or resources, and can’t use a pull server for whatever reason. Problem is, if the LCM sees that you’re
using resources it doesn’t have, it won’t even run the configuration. So you want to run a small, short,
“bootstrap” configuration first, to get everything all set up, and then run the “real” configuration.
Now, look: I’m not positioning all of this as a Good and Just Idea. This chapter is more about pushing
the edges of DSC to see what we can do. Think of it as a thought experiment, of sorts. But over the
course of thinking through this, I’ve come up with what I think are some clever ideas. So this chapter
will be less a how-to, and more a freewheeling stream-of-consciousness. Oh, you’ll see some actual
tech, too, but it’s possible that the tangential ideas will end up being the most useful.
To make sense of the trick, it helps to know how the LCM stores configurations on disk (in C:\Windows\System32\Configuration):
• Pending.mof is the newest configuration the LCM has received, and is the one the LCM will
use on its next consistency check.
• Current.mof is the one the LCM is running right now.
• Previous.mof is the one the LCM used on its last consistency check.
• Backup.mof is usually the same as Current.mof, and unless a new configuration was JUST
received, will also be the same as Pending.mof.
Therefore, the LCM gets a configuration and saves it as Pending.mof. When the next consistency
check runs, this is copied to Current.mof and Backup.mof, and the consistency check starts. When the
check finishes, Current.mof is copied to Previous.mof. If there’s a failure of some kind, Pending.mof
won’t exist, thus preventing the “failed” configuration from being picked up again.
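If you want to see those files for yourself on a node, a quick peek will do it:

Get-ChildItem C:\Windows\System32\Configuration\*.mof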
So we can “inject” a configuration by simply writing a MOF to Pending.mof and waiting for the
LCM to run its next consistency check.
Additionally, Metaconfig.mof can contain the LCM’s meta-configuration. Replacing this file is a way
to "inject" a new meta-configuration, thereby reconfiguring the LCM. Note that the Metaconfig.mof file
can only contain the MSFT_DSCMetaConfiguration and MSFT_KeyValuePair sections. But
MOF files produced by running a PowerShell configuration script may contain an
OMI_ConfigurationDocument section, which must be removed.
1 Configuration Bootstrap {
2
3 Node whatever {
4
5 Prerequisite One {
6 Property = Value
7 Property = Value
8 }
9
10 Prerequisite Two {
11 Property = Value
12 Property = Value
13 }
14
15 File InjectRealMof
16 {
17 DestinationPath = 'C:\Windows\System32\Configuration\pending.mof'
18 Ensure = 'Present'
19 SourcePath = '\\server\folder\source.mof'
20 DependsOn = '[Prerequisite]One','[Prerequisite]Two'
21 }
22
23 }
24
25 }
So in broad strokes, the idea is to have one or more resources - “Prerequisite” resources, in this
example - set up whatever you need on the node. A final resource injects the “real” MOF. In this
case, I’m using the File resource to copy the MOF from a file server, so the presumption is that the
MOF has already been created and staged there.
Options
So, the basic theme is to create a “bootstrap” configuration, which will set up any prerequisites, and
then somehow engage the “real” configuration. You’ve got a lot of options for getting there.
1. Set up a pull server where the bootstrap configurations reside. Configure the node LCM to
pull from that server. The bootstrap then modifies the LCM to pull from a different pull server,
where the “real” configuration resides.
2. Push the bootstrap to the node, and the bootstrap configures the LCM to pull the “real”
configuration from a pull server.
3. Have the node pull the bootstrap configuration by GUID. The bootstrap then reconfigures the
LCM to use configuration names, allowing it to pull the “real” configuration from the same
pull server.
4. Push the bootstrap to the node, and have the bootstrap directly inject a new, “real” configura-
tion. This is basically what my above example does.
5. Push the bootstrap to the node, or have the node pull it. This does whatever it needs to do,
and then runs a PowerShell configuration script, producing a MOF. This MOF is the “real”
configuration, and could be written locally (i.e. injected), or written to a pull server.
All of this “reconfigure the LCM” stuff can be done by having the bootstrap write a new meta-
configuration MOF directly on the node. The LCM will pick up the “injected” meta-configuration
MOF pretty quickly, thereby reconfiguring the LCM.
My fifth suggestion in the list above offers some interesting possibilities. Because, in that situation,
the configuration script is running on the target node, the script can include logic that dynamically
adjusts the final MOF based on local conditions. This means that a single complex configuration
script could, when run on different nodes, produce entirely different MOFs. And as I noted in the
list, the MOF could be written right back to a pull server, overwriting the “bootstrap” configuration
and allowing the node to pull the new, dynamically produced MOF.
The possibilities really are endless, which is why I’ve found this to be such an interesting line of
experimentation.
A Problem to Consider
One problem with “injecting” a MOF into the local node is the encryption introduced in v5. The
deal is that the various MOF files stored by the LCM - including pending.mof, which is the MOF
it’ll process on its next consistency check - are encrypted on disk. Both Start-DscConfiguration and
Publish-DscConfiguration know how to encrypt the file correctly (technically, the encryption is
done on the target node, not before the file is sent to the target node), but I'm not currently clear on
how you might manually encrypt the MOF. If pending.mof isn’t encrypted correctly, then the LCM
won’t read it, so your “injected” configuration won’t work.
I’ve toyed with writing the MOF locally, but not in pending.mof, and then using Publish-DscConfiguration
to send the MOF to the local host. So, the bootstrap configuration would create a local MOF (copy it
from someplace else, create it locally, or whatever), and then basically “publish” the MOF to itself.
I’ve run into some credential problems with this, though. The bootstrap is being run by the LCM,
which runs as SYSTEM, which doesn’t have permission to remote to itself. My only solution has been
to provide a credential for the publish operation - but that necessitates encrypting the credential in
the MOF. In order for that to work, you have to have the decryption certificate already staged - and
getting the certificate there is one of the things I’d want to use the bootstrap for. So it’s a bit of a
Catch-22. I’m still playing with it, and I believe it should be possible to encrypt the MOF directly on
the node, without having to “publish” it.
And in addition to deploying prerequisites, the ability - as I outlined already - to have a PowerShell
configuration script run on the node opens up a world of interesting possibilities for producing more
“locally aware” MOFs that adjust themselves based on local conditions.
Anyway… it’s all an ongoing experiment, which is why I included this so near the end of the book,
but I think it’s a useful way of thinking about how DSC can be twisted to solve different needs. And
certainly, it’s a demonstration of how important it is to understand how DSC works under the hood!
The Scaling Question
“How can we make DSC scale?” is a question that I’ve struggled to help people answer. It wasn’t
until an early-adopter of this book emailed me, suggesting I answer the question in this book, that
I started to wrap my head around what I think is a good narrative. He asked:
Don, Something I know we would love to see is how to scale DSC to hundreds of
nodes. What keeps coming up for us is where/how to keep our “raw” configuration
data. Something that manages node information (name, IP, domain, standardRoleA,
standardRoleB, roleC, roleCparam1, roleCparam2). Put it in a database (but we think it
might not be flexible enough) or json files? Is it really just Puppet or Chef? Seems like
there is a space out there for what we think we’re looking for - and maybe an advanced
chapter with your thoughts down the road. Cheers, Joe
This winds up being something I think I touched on at the beginning of the book, but it’s an important
narrative. It’s also complex - so I hope you’ll bear with me. Also, and this is important, it’s still
evolving. What I’m going to try and share is the long-term vision, which is steadily coming to
fruition, and I hope you’ll understand that the current “missing bits” won’t stay missing for long.
Answer 1: Tooling
What we need is tools. And, as Joe suggests in his question, those tools might already exist in
the form of Chef, Puppet, and so on. Or, you could roll your own, creating a database and some
kind of management front-end for it. That latter suggestion isn’t so far-fetched, either; back in the
PowerShell 1.0 days, MySpace.com essentially rolled their own configuration management system
that accomplished a lot of what DSC aims to do.
Microsoft hasn’t yet produced their own tooling for this, although Azure Automation - more so than
System Center - is starting to take steps in that direction. But I’d argue that Answer 1 isn’t the right
answer. In fact…
Enter Containers
With the rise of containers, this story becomes even more important. We stop loading multiple roles
on a VM, and instead run everything in a container. Granted, we can’t run everything that way
today, but this story is the reason to move in that direction. DHCP server quit responding? Screw
it, kill the thing and spin up a new container - it’ll take 30 seconds. “High availability” becomes
something you worry about in the storage and networking layers, not in the “client interface” layer
- because you just spin up new containers as needed. And with containers spinning up and down all
the time, you especially don’t care about unique information like IP addresses and names. Do cows
have names? No. Well, maybe they think they do, but I don’t care.
Rock-Solid Infrastructure
This means your infrastructure - DHCP, DNS, and so on - needs to be solid. And those are easy to
make solid, especially DNS. And frankly, DNS is really the key. With IPv6 in particular, computers
are more than capable of making up their own IP addresses, and that’s fine. This obsessive need
to micro-manage IP addresses is outdated and insane. Want to remove a failure point from your
environment? Start using IPv6 and ditch DHCP. Let computers make up their own addresses and
register them with DNS. That’s how we’ll find them. Give computers a DSC resource that “registers”
each computer’s “role” in appropriate groups or pools or whatever, so we can manage them by role.
Yeah, I’m getting a teeny bit ahead of the state of the art, here - but you see where I’m going. We
don’t need to know more about the details of our IT estate. We need to know less, and we need to
have processes that work with less detail. Let the computers keep track of the details, and we’ll just
paint the big picture for them.
You don't fix cows - you eat cows, and then buy new cows. So DSC plays an important role in defining
“what a machine should look like for x role.” But you don’t need a big database keeping track of all
those configuration fragments and documents; you just need them in a repository or gallery that
your monitoring system can access.
Orchestration and monitoring is where our industry is going. And it isn’t so that you can better
automate the processes you have. It’s so you can delete the processes you have, and adopt new
processes that require less input, less granular human interaction, and less intervention. The reason
people have trouble fitting DSC into their existing processes is because those processes are outdated,
and DSC (and technologies like it) are trying to elevate the conversation to a different level.
Yes, it’s going to take a lot of work and evolution to get to this end-game vision - but this is where
the industry (not just Microsoft) is going.
When a misbehaving VM gets killed and replaced, the load balancer is simply updated to point at
the new one - nobody's the wiser. And nobody cares about those VMs, because there is
nothing unique about them that any human being knows. They're cattle.
And you can absolutely run almost every computer in your environment this way. Yes, I know
you’ve got some odd legacy stuff that might not be able to hit this exact model, but you can
certainly implement a lot of this model, even with old applications running in VMs. And you can
certainly move toward this model going forward. You actually have most of the tools you need, and
PowerShell can help you glue them together to achieve exactly this.
And that, ultimately, is what DSC is meant to help build.
LCM and Pull Server Communications
I find that a lot of troubleshooting is easier once you know what’s supposed to be happening
under the hood. So, I set up an unencrypted pull server and a node, and packet-captured their
communications. You can do the same in a troubleshooting scenario. Note that I focused on Pull
Server Model v2 communications, meaning this isn’t applicable to a WMF4 client. Incidentally,
you can check out https://github.com/powershellorg/tug, in the References folder, for copies of the
WireShark traces I used to figure all this out.
By the way - doing this kind of sniffing is the only reason I will switch a Pull server to HTTP, and
I do so only on a closed-off virtual network.
Node Registration
Whenever a node first contacts a pull server, it needs to register, using the RegistrationKey data you
provide in its LCM configuration. This happens per pull server and per reporting server, even if you
have them all running on the same server machine. To register, the node
sends an HTTP PUT request to the pull server (the only PUT in this protocol, as noted below). The URL will be in the format:
1 /PSDSCPullServer.svc/Nodes(AgentId='91E51A37-B59F-11E5-9C04-14109FD663AE')
Note that the AgentId is the one configured in the LCM; a brand-new LCM might not show one
when you run Get-DscLocalConfigurationManager, but after applying a meta-configuration to the LCM, an
AgentId will always exist and be visible. The body will contain a JSON (JavaScript Object Notation)
object:
1 { "AgentInformation":
2 {
3 "LCMVersion": "2.0",
4 "NodeName": "hostname",
5 "IPAddress": "ip_address(1)"
6 },
7
8 "ConfigurationNames":
9 [ (array of strings) ],
10
11 "RegistrationInformation":
12 {
13 "CertificateInformation":
14 {
15 "FriendlyName": "name",
16 "Issuer": "issuer",
17 "NotAfter": "date",
18 "NotBefore": "date",
19 "Subject": "subject",
20 "PublicKey": "key",
21 "Thumbprint": "thumbprint",
22 "Version": "int"
23 },
24 "RegistrationMessageType": "ConfigurationRepository(2)"
25 }
26 }
(1) IP Address can be a semicolon-delimited list of IP addresses, including both IPv4 and IPv6.
(2) Will be “ReportServer” when registering with a Reporting Server; will not include Configura-
tionNames in that case.
The certificate information included in the above is a self-generated certificate created by the LCM.
By providing the pull server with a public key, the pull server can now encrypt things that the
LCM, and only the LCM, will be able to decrypt. The native Windows pull server stores this certificate
information but doesn't use it; the Azure Automation pull server, on the other hand, uses the
certificate to authenticate clients after their initial registration.
It’s really important to know that this is the one request sent via HTTP PUT, not by POST. The
node expects an HTTP 204 reply, with no content in the body of the response; that simply indicates a
successful registration. A status of 400 (Bad Request) indicates a malformed request. It's possible for
a Web server to block PUT requests, while allowing POST requests. Such a configuration would
prevent new registrations.
The LCM actually performs this registration each and every time it checks in with the pull server.
However, given that the LCM has created an AgentId now stored on the Pull Server (which Get-
DscLocalConfigurationManager will now show), the node inserts authorization information into an
Authorization header in the HTTP request.
That value isn't just the RegistrationKey
that you configured for the node. It consists of:
1. The body of the HTTP request, digitally signed using the node’s self-generated private key.
Bear in mind that the body of the request contains the public key - the pull server can therefore
validate the signature. The current date - included in another HTTP header - is added into the
signed information.
2. The signature itself is then run through an HMAC hash process, using the RegistrationKey.
So in order to validate the "Authorization:" header, the pull server must repeat the signature process,
and then create the HMAC using each RegistrationKey that the pull server knows about. If one of
the results matches what the node sent, then it is considered authorized.
This Authorization header is re-sent with every registration packet, which happens every time the
node runs a consistency check. However, remember that the node doesn’t actually “remember” that
it successfully registered in the past, and the pull server isn’t obligated to re-check the Authorization
header. My understanding is that, for the native pull server, the Authorization header is ignored if the
AgentId is known to the pull server. Given that the AgentId - a GUID - is pretty hard to guess, it seems
that on the second-and-subsequent registrations, the AgentId alone is sufficient to re-authorize the
node. That isn’t the case on Azure Automation’s pull server, which as I’ve mentioned relies on a sort
of client certificate authentication, based on the last certificate the node sent.
The node’s self-generated certificate is good for one year, is automatically renewed or regenerated,
and is sent to the pull server on each registration. After authorizing the node, the pull server should
take whatever certificate information is in the registration request as canonical, and store it.
Node Check-In
This occurs each time the node runs a consistency check.
The protocol specification states that the server can respond with "Ok" (you're fine, don't do
anything), "Retry", "GetConfiguration" (your config is out of date, grab it again), or
"UpdateMetaConfiguration". However, that last one, in my tests, generates an error on the LCM, stating that it's
not a valid response.
The URL looks like this: Nodes(AgentId='91E51A37-B59F-11E5-9C04-14109FD663AE')/GetDscAction
The body of the request will contain a JSON object like this, when the server is configured to use a
single configuration (e.g., not partials):
1 {
2 "ClientStatus": [
3 {
4 "Checksum": "checksum",
5 "ChecksumAlgorithm": "SHA-256"
6 }
7 ]
8 }
For nodes configured to use partials, this will look a bit more complex:
1 {
2 "ClientStatus": [
3 {
4 "Checksum": "checksum",
5 "ConfigurationName": "name",
6 "ChecksumAlgorithm": "SHA-256"
7 },
8 {
9 "Checksum": "checksum",
10 "ConfigurationName": "name",
11 "ChecksumAlgorithm": "SHA-256"
12 }
13 ]
14 }
The difference is that, with a single configuration, the server is expected to know what the
configuration name is, because the node supplied it at registration. So the server must store that
in its database. For partials, the node must supply the configuration names, since each will have a
unique checksum. There’s actually intent behind the single-configuration check-in not including the
configuration name, and it’s so that the server, if it wants to, can assign a different configuration than
the one the client expected. The native pull server feature in Windows doesn’t do this, but Azure
Automation’s pull server can. The idea here is to be able to supply a new, central configuration
without having to tell the node. After all, the node shouldn’t care - it should regard any MOF on the
pull server as “authoritative” and just do what it says.
The server responds with an HTTP 200, and a JSON response:
1 {
2 "odata.metadata": "http://server-address:port/api-endpoint/$\
3 metadata#MSFT.DSCNodeStatus",
4 "NodeStatus": "Ok",
5 "Details": {
6 [
7 {
8 "ConfigurationName": "config-name",
9 "Status": "Ok"
10 }
11 ]
12 }
13 }
As near as I can tell, only the top-level NodeStatus matters. When testing with partial configurations,
Details was often blank.
Requesting a Configuration
Once a node knows that it needs a new MOF, it requests one. For nodes using partial configurations,
a separate request is sent for each. URLs take this form:
1 /Nodes(AgentId='agent-id-guid')/Configurations(ConfigurationName='desired-name')/Con\
2 figurationContent
There’s no special information in the request headers apart from the standard ProtocolVersion (2.0)
and Host headers. The body of the request is empty, as all needed information is in the URL. The
server responds with 404 for a missing resource, and with a 200 for successful queries. In the event
of a successful query, the server response is an application/octet-stream containing the requested
MOF. This means the MOF is transmitted as a binary object, rather than as a stream of plain text.
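As an illustration, you can mimic this request by hand against an HTTP pull server when troubleshooting (the server name, port, and configuration name here are invented, and the AgentId must be one the server already knows about):

$agentId = '91e51a37-b59f-11e5-9c04-14109fd663ae'
$uri = "http://pullserver:8080/PSDSCPullServer.svc/Nodes(AgentId='$agentId')" +
       "/Configurations(ConfigurationName='WebServer')/ConfigurationContent"

Invoke-WebRequest -Uri $uri -Headers @{ ProtocolVersion = '2.0' } -OutFile .\pulled.mof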
Requesting a Module
When a node has been configured to pull modules - which it can do from a “normal” pull server, or
from a pull server that’s only hosting modules - it submits this URL:
1 /Modules(ModuleName='module-name',ModuleVersion='x.y.z')/ModuleContent
You’ll notice that none of the above includes the AgentId, which might make you wonder if it’s
possible to simply grab modules from a pull server by guessing module names. It isn’t; the AgentId
is actually passed in the HTTP headers for these requests:
1 AgentId: xxxxxxxxx
The pull server should validate the AgentId and make sure that the request is coming from a known
(registered) node. As with a MOF request, the body contains no information, and the server responds
with 404 or 200. In the case of a 200, the response is an application/octet-stream with the desired
module’s ZIP file.
Reporting
When configured to use a Report Server, nodes will routinely - often several times in one consistency
check - report in. The URL looks like this:
1 /Nodes(AgentId='agent-id-guid')/SendReport
The body of the request is an extensive JSON object containing the entire report for that node. The
chapter on Reporting contains more details on what this object contains. Note that it’s possible for
a report’s body to exceed the size that the web server permits for requests; in that case, you need to
reconfigure your web server.
One item in the report is the "CurrentChecksum," reported for each configuration
that the node processes; it matches the contents of the MOF checksum file on the pull server. On
its next check-in, the node will send this checksum.
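That checksum file is the one you generate on the pull server when you stage a MOF - for instance (the path shown is the Windows pull server's default configuration folder; adjust to your setup):

New-DscChecksum -Path 'C:\Program Files\WindowsPowerShell\DscService\Configuration\WebServer.mof' -Force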
The server responds with a 200 for successful reports, and actually sends a JSON response in the
body:
1 {
2 "odata.metadata": "http://server-address:port/api-endpoint/$\
3 metadata#MSFT.DSCNodeStatus",
4 "value": "SavedReport"
5 }
So, you know, you at least know the report got through ;). The server can also respond with 400 (bad
request) if it didn’t accept the report, which usually means it didn’t recognize the AgentId as one
registered with it.
The Database
The Pull Server’s Devices.edb database essentially holds two tables. One, RegistrationData, includes:
• AgentId
• LCMVersion
• NodeName
• IPAddresses
• ConfigurationNames
The other table, StatusReport, includes:
• StartTime
• EndTime
• LastModifiedTime
• Id (the AgentId)
• OperationType
• RefreshMode
• LCMVersion
• ReportFormatVersion
• ConfigurationVersion
• IPAddresses
• RebootRequested
• Errors (a text field)
• StatusData (a text field containing JSON data)
Known Problems
Here, I document some of the known problems folks have run across, along with what solutions I
know of.
maxEnvelopeSize Errors
DSC uses WS-MAN to send data between computers - such as when deploying a MOF in push
mode to a node. WS-MAN has some configuration limits (exposed in the node’s wsman:/ drive via
PowerShell), including maxEnvelopeSize. As a rule, you should have this property set to roughly
double the size of the MOF you plan to deploy.
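A sketch of checking and raising the limit on a node (the value is in kilobytes; 2048 is just an example - size it to your MOFs):

# Current limit, in KB
Get-Item WSMan:\localhost\MaxEnvelopeSizeKb

# Raise it to roughly double your largest MOF
Set-Item WSMan:\localhost\MaxEnvelopeSizeKb -Value 2048

Size limits can bite elsewhere, too; this error, for example, is a node's status report being rejected by the web server as too large: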
1 Job {%JobID%} :
2 Http Client {%AgentID%} failed for WebReportManager for configuration The attempt to\
3 send status report to the server https://johntestx02/PSDSCPullServer.svc/Nodes(Agen\
4 tId={%AgentID%}')/SendReport returned unexpected response code RequestEntityTooLarge\
5 ..