AD 2012r2 Sites

Skillsoft Course Transcript https://cdnlibrary.skillport.com/courseware/Content/cca/ws_imin_b02_it...

Course Transcript

Microsoft Windows Server 2012 R2: Server


Infrastructure - AD DS
Active Directory Site and Domain Controller Design
1. Active Directory Site Design: An Overview

2. Active Directory Site Planning and Design

3. Active Directory Site Links

4. Domain Controller and Global Catalog Placement

5. Flexible Single Master Operations

Branch Office Support Design


1. The Read-Only Domain Controller

2. Domain Controller Virtualization

3. Planning an Active Directory Site

1 of 42 3/20/2020, 3:16 PM

Active Directory Site Design: An Overview


Learning Objective
After completing this topic, you should be able to
recognize the characteristics of Active Directory sites

1. Introduction
Hi there, and welcome to Microsoft Windows Server 2012 R2: Server Infrastructure - AD DS.

Jason Yates is your Microsoft Certified instructor for this course. Jason will be joined by Jacob Moran
later in this course.

[Jason Yates is a certified Microsoft instructor holding multiple Microsoft certifications including MCSA
(registered) Windows Server 2012; MCITP: Enterprise Desktop Administrator on Windows 7; MCITP:
Server Administrator on Windows Server 2008; MCITP: Enterprise Desktop Support Technician on
Vista; MCTS: Windows Server 2008 R2, Virtualization; MCTS: Windows 7, Configuration; MCTS:
Windows Server 2008, Active Directory; MCTS: Windows Server 2008, Network Infrastructure;
MCTS: Windows Vista, Configuration; MCSE (registered) (2003, 2000, and NT); and MCSA
(registered) (2003 and 2000). Jacob Moran is a certified Microsoft instructor holding multiple Microsoft
certifications including MCITP: Windows Server 2008 Server Administrator (registered) and Active;
MCTS: Windows Server 2008 Application Infrastructure (registered) and Network Infrastructure;
MCITP: Enterprise Desktop Support Technician on Windows 7 (registered); MCITP: SharePoint
Server 2010 (registered); MCTS: Microsoft SQL Server 2008 Implementation and Maintenance
(registered) and Database Administrator (registered); MCITP: SQL Server 2005 Database
Administrator (registered); MCTS: SQL Server 2005 (registered), MCTS: SharePoint server 2007
(registered) and Windows SharePoint Services 3.0 (registered); MCTS: Windows Vista (registered)
and Windows 7 (registered); MCSE (registered) (2003, 2000, and NT 4.0); MCSA (registered) (2003
and 2000); MCSA (registered) Windows Server 2012; MCDST (registered), MCDBA (registered)
(2000 and 7.0), MCT (registered); CCNA (registered), CCS (registered); and CompTIA A+
(registered), Network+ (registered), Security+ (registered), and CTT+ (registered). The course goal is
to design an AD DS Physical Topology in Microsoft Windows Server 2012 R2]

Hi, my name is Jacob Moran and I'm an MCT and an MCSE in Windows Server 2012. You know, as
administrators managing Active Directory, we are critically concerned with how data replicates. We're
concerned about latency and throughput. Am I able to get the information I need from a logon to
being able to search Active Directory for various different records in a timely fashion? Am I
bottlenecking certain network pipes because of the replication that's going on? How is logon going to
be transmitted? And much of that has to do with sites. Remember, a site is a well-connected,
high-speed network that Active Directory is able to recognize and that holds a domain controller.

Now, as we build out sites, we're faced with fundamental choices with those domain controllers, with
regard to Flexible Single Master Operations roles, global catalogs, maybe Universal Group
Membership Caching, Read-Only Domain Controllers, filtered attribute sets, and BranchCache. All of
these play a role in meeting the needs of low latency and fast responsiveness, balanced with
up-to-date information, and trying to avoid clogging our network with replication of data that isn't
needed for certain locales.


2. Active Directory sites


So our Active Directory network may consist of one domain or multiple domains, but across the
forest, we're going to define our sites and subnets and site links, right, those are forest-wide shared
objects. Now, Jason, as I look at this schematic here, I see that we've got Sydney, London, New York,
Chicago, San Francisco, I've got five different locations, and yet I seem to only have three sites. Now
why would an administrator have defined it that way? Well, there's a lot to take into consideration
when identifying the site boundaries. And again, remember, first of all, a site is an Active Directory
object, so its purpose is to service Active Directory functions. So when you think of a site, you really
want to be thinking: what is the site for? The site is for domain controllers. The site helps optimize
user logon, which Jacob described earlier, so when a user logs on, they can find a domain controller
that's near them. And the other reason that sites are important is for other Active Directory-aware
applications that might be site aware. So when I'm thinking about, "how do I create my site
boundaries?" I need to think about all three of those things, not just one of them, not just replication;
I also want to be thinking about user logon and, maybe, any Active Directory-aware applications. The
second thing I want to be thinking of is how are they connected? What is the network map? So the
first thing I want to do is create a map of my actual network and my locations, and consider the
actual connection speeds, latency, and bandwidth. And for those areas that are well connected,
where I'm not worried about, maybe, users logging on from New York to Chicago because the DCs
in both locations are well connected and users can get to either one, well then, I can make it a site
boundary that includes both Chicago and New York. So it has a lot to do with my
network and the physical locations and my physical network topology and it has a lot to do with the
Active Directory services that are going to utilize those site boundaries. Now, Jacob, when I create
multiple sites, the other thing I'm going to want to do is I want to show how those sites are connected
with site links, but I can do that with, maybe, one link or two links or three links, so what's the benefit
or distinction, if I got three sites, does that mean I have to have three links or can I use one link and
why? Well, as we're building out our links, what we're doing is enabling adjacent replication,
alright. So a link that includes two sites is giving those sites permission to have domain controllers
from one site replicate to the domain controllers in the other site. If I have three sites that are joined
together by one common link, what I'm saying is that any one of these member sites can directly form
an adjacent replication relationship with any of the other sites, and there are services behind the
scenes that will establish those connections for me automatically once the link is in place. Because
there's only one link, what I'm saying, though, is that there is one replication schedule and one
replication interval that is going to govern them all, right; remember, that's the key thing about a site
link. It indicates the ability to replicate according to the timeframe that you allow and the frequency
that you allow. So do they attempt to find out if there's anything new seven days a week, every three
hours like the default, or do I change that one site link so that they will all follow the same rules of
replication? And when they establish connection objects between sites, they say, oh, I am going to
replicate every hour, typically, again, seven days a week. I could establish two if I wanted to, two
site links that connected all three sites (that's not even in this diagram), but if we had one site link
and then another site link that they're joined to, I could have different schedules. I could have a
schedule for the weekends and a schedule for the weekdays. Now, let's take a look at two site links.
What I'm doing if I establish two site links is most likely indicating the specific directional flow that I
would like replication to take. So, in other words, if I want North America to replicate to Europe and
Europe to replicate to Australia, then I should establish a site link between North America and
Europe, and establish the second site link between Europe and Australia. And then, assuming that
we haven't done anything interesting with costs, and if that's the only option in town, that's indicating
the preferred way to establish those connections. And so you know exactly how the replication is
going to flow. If someone in New York makes a change, creates a new user, it will be added in New
York, replicated intrasite at high speed within North America, then replicated according to your rules,
and you get to set up a specific
set of rules between North America and Europe so they will replicate with each other according to
those rules, maybe, it's every three hours. Once that's been received there, there's a separate link
established between Europe and Australia, so Sydney is not going to get those changes until the site
link rules between Europe and Australia that govern that replication object say, okay, you know what,
it's been an hour and a half; now it's time to check to see if there's anything new. It checks, and now
we download that to the Sydney site. So this enables you to construct a very careful flow. Now, we
establish three site links. Once again, every site has the possibility of establishing a replication
adjacency with every other site, but now we allow for the potential of a failover solution, right. So if a
site link isn't available, then connection objects can be built along the other links' paths, and you
get to set up unique rules for every one of those. I mean, we could have done that with one site link
that they were all joined to. If we create three site links, essentially enabling the same
communication between all sites, they can each have a link-dependent set of interval and schedule
that governs their behavior, and I can still control my preferred directionality by using site link
costs to govern which site links are preferred over other site links to replicate data from
one location to another. And so, those are going to be the tools that we use. And this is what allows
administrators to have that ability to put things in the way that they want and have a plan for how
replication is going to occur. It means that we're doing things at the site level, not the domain
controller level. And I think that's the real key thing here is, I don't want to, as an administrator, have
to manually create every connection object between every specific domain controller and every other
specific domain controller. I don't want to have to do that web of interconnections, and the built-in
services and processes that are in Active Directory, which we're going to be talking about next, handle
this automatically, but only because we've entitled them to that capability by
establishing the sites and the site links to govern how they're going to operate. So let's talk about
some of those replication processes next.

[Heading: Active Directory Sites. Active Directory replication occurs along site links that join sites. The
example shows three sites – the North American site, the European site, and the Australian site. The
North American site has three site locations – Chicago, San Francisco, and New York. The European
site has one site location, London and the Australian site has one site location, Sydney. These five
site locations are joined by site links.]
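The cost-driven path preference described above can be sketched in a few lines. This is purely an illustration of the least-cost rule, not how Active Directory is implemented; the site names, link names, and cost values below are invented for the example.

```python
import heapq

# Hypothetical topology: sites joined by site links, each with an
# administrator-assigned cost (lower cost = preferred path).
site_links = {
    "NA-EU":  {"sites": {"NorthAmerica", "Europe"},    "cost": 100},
    "EU-AUS": {"sites": {"Europe", "Australia"},       "cost": 100},
    "NA-AUS": {"sites": {"NorthAmerica", "Australia"}, "cost": 400},
}

def least_cost_path(links, start, goal):
    """Dijkstra over the site-link graph: the KCC/ISTG similarly prefers
    the lowest cumulative site-link cost when building intersite
    connection objects."""
    graph = {}
    for link in links.values():
        a, b = sorted(link["sites"])
        graph.setdefault(a, []).append((b, link["cost"]))
        graph.setdefault(b, []).append((a, link["cost"]))
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, site, path = heapq.heappop(queue)
        if site == goal:
            return cost, path
        if site in seen:
            continue
        seen.add(site)
        for neighbor, link_cost in graph.get(site, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None

cost, path = least_cost_path(site_links, "NorthAmerica", "Australia")
print(cost, path)  # -> 200 ['NorthAmerica', 'Europe', 'Australia']
```

With these invented costs, the two-hop route through Europe (100 + 100 = 200) beats the direct 400-cost link, which is exactly the effect an administrator is after when weighting site links to steer replication flow.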

3. Site planning and design


So Jason, here we're talking about replication, right. We said that there is intersite and intrasite
replication. Now, the intrasite replication happens, say, in a particular location, right. So we're in
Milwaukee, right, and so we've got our Milwaukee servers. They're there. They're ready to replicate.
What are some of the services that I would need to be aware of that are making that ring happen
within that location between those five domain controllers? Active
Directory replication is fascinating. It uses multiple services to make decisions and to build replication
relationships with domain controllers based on the information that the administrator provides. Within
a site, there are some assumptions made. If you have a group of domain controllers that share the
same site, Active Directory assumes that they're on well-connected networks. It assumes that
there is no latency, really, that can affect replication. So there's a component called the Knowledge
Consistency Checker. That's one of those services, the KCC, and its job is to build what are called
connection objects. A connection object is a relationship; it's like a kind of dating-match kind of
thing: this DC goes with this DC and this one goes with this one, and it makes those decisions. What
it does is ensure that changes that occur on any one of the domain controllers are properly replicated
across the site, in a very timely fashion. And Active Directory is multimaster, which means it
can receive changes on any domain controller for the most part. There are a couple of
exceptions to that, but for the most part, you can create a user on any domain controller, and the KCC
will ensure that it gets replicated within the site almost instantaneously now. It used to be up to a
15-minute interval, with replication happening every five minutes or so. Now, it's within seconds,
because networks are so much better than they were back in the Windows 2000 days, and so the
KCC helps take care of that to ensure that, you know, that information is thoroughly propagated, and
it does so quickly. Jason, as we're
talking about replication and again, we've got these objects. They're replicating with each other within
a site, but is the KCC the only game in town? I mean, is that also what is taking care of the
replication, say, you know, between what is going on over here in Maryland and New Jersey, and
what is going on over here in Milwaukee? I mean, is that the same service, or are there other
things at play here as well? Well, the KCC gets some help from a couple of other services when
it needs to reach out beyond its site. So there's also what is known as the ISTG, the Intersite
Topology Generator. And one of the jobs of the ISTG is to identify what are known as bridgehead
servers. Imagine two sites, and each site has ten domain controllers. For replication within each
site, well, we want that to be redundant, just in case one DC is a little slower to
respond, and we want changes to occur really quickly. But if we have two separate sites, we're
talking now about replication between them. Well, the assumption now is that we don't
want all ten domain controllers creating replication relationships with the other ten. We don't want
100 connection objects, especially if they are separate sites, because, well, the assumption is that
it is a separate site because they're not well connected, right. So that's where we have the ISTG. The
ISTG identifies the best server representing each site to replicate changes from that site to the other
site. So you have a change that's introduced within site A. It gets replicated rapidly within site A and
then the bridgehead server will share that change from any domain controller across that site link to a
replication partner on the other side, the corresponding bridgehead server, and that bridgehead
server will then share with the other nine domain controllers in its location. Now, when you have
multiple sites, so you have multiple possible bridgehead servers, what's interesting is that in the newer
versions of AD, the Intersite Topology Generator will load-balance that bridgehead responsibility.
So you might have a site that has more than one bridgehead
server because it's replicating with more than one site. So let's look at some scenarios where we can
see how this plays out, okay. So on tab two, you can see that we've got an Active Directory logical
environment that is based on three different domains. So we've got a forest root; we've got a couple of
child domains in here. Now, physically, they may be located in the same building or within the same
high-speed networks. So, in the map here, it looks like they're regionally distributed, but according to
Microsoft best practices, we're assuming that they're within, say less than 10 millisecond delay, or
they've got a throughput of 10 megabits or greater. That's the kind of throughput that we're assuming
if we have a well-connected site. So we have domain controllers in a well-connected network. They're
all well connected even though they belong to different domains. They're all well connected. That
means that I have three domains, but I have one site to rule them all. There's no need to build a
second site. That would only slow things down. Intrasite replication is going to ensure that all the
domain controllers of the same domain share that information very quickly. Domain controllers
between different domains would only share schema, configuration, and global catalog information,
and because they're in the same site, they'll do it very quickly. Alright, that is totally fine, and that is
the right answer if you're taking a Microsoft exam. There is not a reason to configure a second
site. There is no need to control replication. There is no need to localize logons, because it's all
considered one high-speed network.

[Heading: Replication. The example shows three sites with five domain controllers in each site.
Replication that occurs within each of the location is intrasite replication and is automatic. The
replication that occurs between the locations is intersite and occurs via configurable bridgehead
servers. The example then shows a single forest root with three domains existing in three physical
remote locations with fast connections – Berkeley, San Francisco, and Oakland. The three domains
are in the same site and are connected through automatic intersite AD replication.]
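As a rough illustration of the intrasite behavior described here, the sketch below builds a KCC-style bidirectional ring of connection objects among the domain controllers of one site. It's a deliberate simplification (the real KCC also adds shortcut connections so no DC is more than three hops from any other), and the DC names are hypothetical.

```python
# Toy model of the KCC's intrasite ring: each domain controller gets
# connection objects to and from its ring neighbors, so a change made
# anywhere propagates around the site in a bounded number of hops.

def build_ring_connections(dcs):
    """Return a list of (from_dc, to_dc) connection objects forming a
    bidirectional ring over the given domain controllers."""
    connections = []
    n = len(dcs)
    for i, dc in enumerate(dcs):
        successor = dcs[(i + 1) % n]
        connections.append((dc, successor))   # replicate forward
        connections.append((successor, dc))   # and backward
    return connections

site_dcs = ["DC1", "DC2", "DC3", "DC4", "DC5"]  # hypothetical names
for source, target in build_ring_connections(site_dcs):
    print(f"{source} -> {target}")
```

The point of the ring shape is the trade-off the transcript describes: every DC is redundantly connected without building the full web of every-DC-to-every-DC connection objects.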


Let's look at another example. We take a look at tab three. You can see that we still have one
domain, I'm sorry, one forest with three domains, but now it looks like we didn't pay as much for our
links, alright. We did not get the same high-speed links between all of our locations. Our response
times are greater than 10 milliseconds, or our throughput is less than 10 megabits per second. So, at
this point, we
say, you know what, I need to go ahead and configure three different sites that describe the physical
locations that are present in the forest and, really, you want to think of it that way. I have locations in
the forest where domain controllers are present. I don't really care what domain they were in. Maybe,
they were all in the same domain. Maybe, they belong to different domains. As far as I'm concerned,
that doesn't really matter. My concern is ensuring that the replication follows the rules that I want, and
that user logons, or Active Directory-integrated applications like DFS or Exchange, are localized as
efficiently as possible, right. So again, higher latency, three domains now across three sites, and again, this
isn't a bad model, alright, this says we have the separate administrative boundaries of three domains.
We have the separate replication and logon localization boundaries of three sites. Do they happen to
overlap? Yeah. That means the only things that are going to go across these site links are
schema, configuration, and global catalog information, right, so that tends to be pretty efficient in
terms of how you're managing your forest, recognizing those slow links. Let's take a look at our fourth
tab. Here we've got a single forest and a single domain. Just one domain, but again, within that
domain, some of my domain controllers are located remote to others, right. I've got, maybe, this is
that branch office located in Alameda. It's a branch office with enough users to support a local domain
controller, but I don't want that location to be undergoing the constant replication, and I want the users
in Alameda to always log on to a local Alameda server. How do I ensure that that happens? I put it in
a separate site. So, if either of those is the consideration, localization of the logon and application
experience or the discrete control of the replication mechanism, you build a separate site. It happens
to be for one domain, so this is one domain split into multiple sites, okay. If I build another domain in the
forest, does that mean I'll have to build another site? No. It can integrate into this existing forest site
topology that I built, unless the new domain happens to be across a WAN link or a slow
connection, and so I'll need to define a new site at that time. In our fifth example here, we have a
single forest, three well-connected domains existing in the same location. Three sites are required,
with intersite replication, to accommodate control over replication and manage interdomain security
and services within a single forest. Can you read through that? That makes sense. What we're saying is
we're well connected, but we built separate sites anyway. And the reason why is because we wanted
the control over the replication mechanism. We didn't want to leave it to the KCC to just do whatever
it wanted and establish links with whomever it wanted. We have firewalls in place that only allow this
server to talk to this server, and so when we've got those kind of tight controls in place, it's...we may
be asked to implement replication partitions and boundaries, and that's exactly what we're providing
here. This is not the typical solution. This is a high security segregation solution that helps to put you
in charge of knowing which domain controllers are talking to which other domain controllers instead of
that being a dynamic process. Jason, anything to add to our discussion here of what is going on with
replication? Do you think we've covered that pretty well? Well, I think you've hit every single
scenario, I think, that folks might run into when trying to decide what sites to create or where to put
sites. I think
one of the things to be aware of is that in today's world, it is much different than it used to be in that
we have high-speed networks, we have larger networks. Networks are far more affordable than
they used to be, so there's a temptation to consider, well, I've got these two regional locations, but
they're so well connected, let's just create one site for them. And the only thing that I would step back
and say, well, you still might want to do what we're doing here on tab five and create two separate
sites because it's not only replication that we're concerned about, we're also concerned about user
logon and services. So there are other reasons to create a site here, and that might be because we
want control over user authentication. We want them to locate a domain controller that's in their
region. We want them to locate a DFS server that's in their region, not just controlling replication. So
that's another factor to consider when you're thinking about creating sites, and you're looking at, here
are my well-connected networks. Don't think just strictly about replication; also be thinking about how
the users consume Active Directory services that are site aware. Let me add to that thought.
You know, if you download a single Group Policy object across a wide area network link, that,
typically, with a modern link, will only take a second. There are a lot of networks that are
downloading more than one Group Policy, though, and if I've got, you know, 15 Group Policies,
that means every logon might take 15 seconds. It's only one second each, but there are 15 of them,
right. And that's the weight that has to be brought physically to the client and applied in order to get
the best logon, in order to get the correct logon experience to make sure it's defined within your
security environment. And so, what Jason just described, localizing the logon is just as critical, if not
more critical, than controlling the replication bandwidth with our, again, much higher speed
connections. We're not like that earlier diagram; we're typically not connecting with 56k modems,
right. So, like Jason said, you know, it's good enough, right? We can do that; we don't need to worry
about a separate site, okay. But, especially as the number of users in a particular location goes up,
that can start to become a real concern about trying to log on across that link.
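The arithmetic behind that 15-second figure is worth making explicit. This back-of-envelope sketch uses the transcript's illustrative one-second-per-GPO figure; actual times vary with link speed and policy size.

```python
# Rough logon-delay estimate from the discussion above: each Group Policy
# object costs about a second to pull across a typical WAN link, so the
# delay grows linearly with the number of GPOs applied at logon.
SECONDS_PER_GPO = 1.0  # illustrative figure, not a measured constant

def estimated_logon_delay(gpo_count, seconds_per_gpo=SECONDS_PER_GPO):
    return gpo_count * seconds_per_gpo

print(estimated_logon_delay(1))   # one GPO: about 1 second
print(estimated_logon_delay(15))  # 15 GPOs: about 15 seconds
```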

[Heading: Replication (Continued). The example shows a single forest with three domains existing in
three physically remote locations. Replication occurring within each of these locations is intrasite and
between the locations is intersite. The example then shows a single forest and a single domain with a
remote office. The single domain is divided into two sites. One site consists of Berkeley and San
Francisco and the other site consists of the remote office at Oakland. The example then shows a
single forest with three well-connected domains existing in the same location. In this example, the
three sites are required and they communicate with each other through intersite replication to
accommodate control over replication and to manage interdomain security and services in a single
forest.]
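The logon localization the instructors keep returning to hinges on subnet-to-site mapping: the client's IP address is matched against the subnets the administrator associated with each site, and the client is then steered to a domain controller (or site-aware service like DFS) in that site. Here's a minimal Python sketch of that lookup; the subnets and site names are invented, and real DC location also involves DNS SRV records, which this ignores.

```python
import ipaddress

# Hypothetical subnet-to-site associations, mirroring what an admin
# defines in Active Directory Sites and Services.
subnet_to_site = {
    "10.1.0.0/16": "BerkeleySanFrancisco",
    "10.2.0.0/16": "BerkeleySanFrancisco",
    "10.3.0.0/24": "Oakland",
}

def find_site(client_ip):
    """Return the site whose subnet contains client_ip, preferring the
    most specific (longest-prefix) match, or None if no subnet matches."""
    ip = ipaddress.ip_address(client_ip)
    best = None
    best_prefix = -1
    for subnet, site in subnet_to_site.items():
        net = ipaddress.ip_network(subnet)
        if ip in net and net.prefixlen > best_prefix:
            best, best_prefix = site, net.prefixlen
    return best

print(find_site("10.3.0.42"))  # -> Oakland (the branch-office client)
print(find_site("10.1.5.9"))   # -> BerkeleySanFrancisco
```

This is why a branch office like Alameda in the example above gets its own site and subnet definitions: without them, a branch client could be sent to any domain controller in the forest.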


Active Directory Site Planning and Design


Learning Objectives
After completing this topic, you should be able to
recognize the advantages of site topology design
recognize some of the site topology owner's responsibilities

1. Design a site topology


So here's the process, right, Jacob. You just described the scenarios and the different models
that we might adopt, but then we have those procedures that we go through, the kind of thinking
steps about identifying what model is actually going to fit for us. So one of the first things we want to
do is collect network information. We spoke to that a little bit earlier about the importance of creating
a network map, identifying our network subnets, identifying our network speed so that we can
translate that into Active Directory objects. Remember, it's not one for one. The network map is
the physical dimension, and our job as network designers is to translate that into
Active Directory objects that make sense for Active Directory. In that same course, in that same line
of thinking, we're examining our locations for where should we place our domain controllers. You
might have a branch office where there's no domain controller needed at all. There's no sense then of
making a site over there because there is no Active Directory service there. So the other aspect to
this design is thinking about where I'm going to put my domain controllers, and that could be, you
know, regional DCs, virtual DCs, I'm also thinking in terms of branch offices and where I'm going to
place that physical service. Next, after identifying my network map and where I
want to place my DCs, I've got the initial information to start creating my sites, and so that's where I
step into the next procedure, where I'm looking at those different models that Jacob just described and
I'm identifying what model makes the most sense in regards to replication, user location, and
authentication. And what I mean by user location is site aware applications, so users locating Active
Directory or applications locating Active Directory or users locating applications that are site aware.
So those are going to be those factors in considerations that contribute to my site design. Now, there
are other unique situations which might dictate that I need to consider how I'm going to do site link
bridging or there might be considerations in terms of what kind of replication settings I might enable or
disable. There might be considerations in terms of which servers are going to be global catalog
servers or not, or whether I'm going to turn on Universal Group Membership Caching.
These are all other factors that might alter parts of my site design that we're going to be talking about
next.

[Heading: Design a Site Topology. Designing a site topology consists of collecting network
information, planning domain controller placement, creating a site design, creating a site link design,
and creating a site link bridge design. The steps to be followed when collecting network information
are creating a location map, listing communication links and available bandwidth, listing IP subnets
within each location, and listing domains and number of users for each location. The steps to be
followed when planning domain controller placement are planning the forest root domain controller
placement, planning the regional domain controller placement, planning the global catalog server
placement, and planning the operations master role placement. The steps to be followed when
creating a site design are deciding which locations will become sites, creating a site object design,
creating a subnet object design, and associating subnets with sites. The steps to be followed when
creating a site link design are connecting sites with site links and setting the site link properties. The
steps to be followed when creating a site link bridge design are creating a site link bridge design for
disjointed networks and creating a site link bridge design to control Active Directory replication flow.]
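The "decide which locations will become sites" step lends itself to a simple sketch. Using the rule of thumb quoted earlier in the transcript (a connection is well connected at under 10 ms of latency or a throughput of 10 megabits or greater), well-connected locations collapse into shared sites. The locations and link figures below are invented, and a real design would also weigh logon localization and site-aware applications, as the instructors stress.

```python
# Rule of thumb from the transcript; real designs apply more judgment.
WELL_CONNECTED_MAX_LATENCY_MS = 10
WELL_CONNECTED_MIN_MBPS = 10

def is_well_connected(latency_ms, throughput_mbps):
    return (latency_ms < WELL_CONNECTED_MAX_LATENCY_MS
            or throughput_mbps >= WELL_CONNECTED_MIN_MBPS)

# (location_a, location_b, latency in ms, throughput in Mbps) - invented
links = [
    ("Berkeley", "SanFrancisco", 2, 1000),  # campus fiber
    ("SanFrancisco", "Oakland", 35, 5),     # slow WAN link
]

# Union-find style grouping: well-connected locations merge into one site.
parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        x = parent[x]
    return x

for a, b, latency, mbps in links:
    if is_well_connected(latency, mbps):
        parent[find(a)] = find(b)
    else:
        find(a); find(b)  # register the locations without merging

sites = {}
for loc in parent:
    sites.setdefault(find(loc), []).append(loc)
print(list(sites.values()))  # Berkeley and San Francisco merge; Oakland stands alone
```

This mirrors the examples earlier in the transcript: the well-connected campus pair shares one site, while the location behind the slow WAN link becomes its own site so replication and logons can be controlled separately.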

2. Site topology owner tasks


So we've looked at some of the important considerations around sites. We've described what they
are, some of the important factors around why we would create them, when we would create them,
the different types of site models. But who can create them? So Jacob, my question to you is what
are some of the prerequisites to creating these sites? You know it sometimes surprises people that
creating an Active Directory site is on par with creating a new domain. So let's think about who can
create a new domain in an Active Directory environment? Well that takes an enterprise administrator,
right, because you're changing your forest model. You're adding a new domain. But adding a site has
that same level of importance because you are changing the replication potentially forest wide. You
could take any domain controller from any domain and locate it within a particular site, changing the
way that Active Directory users are going to log on and how replication is going to occur. And let's
remember something else and this is often a surprise to people, we said it like adding a domain, it
takes an enterprise administrator. You know what else it takes? Access to the Domain Naming
Master, that Flexible Single Master Operations Role, that is associated with site creation as well as
domain creation, so that had better be online and available to whoever is going through this process.
Now, procedurally, it's not that difficult if you have the credentials, right, I mean, it's a right-click and
wizard, you're on your way and, you know, it's certainly a very easy process to define a site and
attach it to a site link. But even the wizard will tell you once you're finished that process, you're not
done, right. You need to potentially define other links, site link bridges, the replication interval
schedules, adding subnets, adding domain controllers, controlling transports like IP and SMTP,
managing global catalog servers, managing Universal Group Membership Caching, I mean, these are
all things that are associated with the initial layout and really, you know, it's one of those things that
honestly, an Active Directory administrator should, in the best-case scenario, have gone
through this design and the implementation process already before actually implementing, and
that's what I love about virtualization is you can go through all these steps and be ready to answer
every question that gets brought up in front of you in terms of getting the model working in a
virtualized environment. And if I'm a site topology owner, if you said, you know, you're responsible
forest wide for making sure localization of logons and replication is occurring at the optimum level,
then I think that's part of my responsibility. Now, I'm not necessarily going to be able to control the
bandwidths in the same way, although with some of the new bandwidth metering options that are
available in the new versions of Hyper-V, maybe I can. Maybe, I can actually even emulate wide-area
network links between two virtual machines in a Hyper-V environment. So that goes a little, you know,
off the beaten path there, but as a site topology owner, those are the kind of things I'm going to be
responsible for, and like I said, I got to have the credentials and the access to make that happen in
the first place. There is a reference here in this slide that I think is interesting. Manage site delegation
and security. You know it takes all this to set it up and to be able to manage these sites, but could I
hand over the ability to manage these objects after the fact? Well, if you look in the screenshot, you'll
see there's a Security tab, right. Just like a file has a Security tab that means you can
grant permissions to that file, you can see a Security tab on a site. Sure enough, that means you
could actually go in and change the security parameters. Actually, that screenshot is the site link; you can change the security
parameters of that site link, and sites have that same ability, so I could say, you know what, you are
the IT guy who works with this branch office, and I'm going to delegate to you the ability to manage
this site link, which your site and the headquarters site are associated with, and so I could delegate that to
offload some of that responsibility from myself and I think the bigger your organization is, the more
people that are involved in it, the more useful careful and deliberate delegation of site administration
becomes. But delegation doesn't mean you just put people in the Enterprise Admins group so
that they can do your work for you, right. That's the wrong technique; that's not the least-privilege
way you would go through that. One of the other questions that often comes up though,

and this doesn't take an enterprise admin is "can I trigger replication?" right, and we're going to be
talking about the ability to replicate and what that actually looks like, right, forcing
replication, but Jason, that doesn't take an enterprise admin, right? No, that just requires the domain
administrative credentials. Absolutely, so and again, that ability is then going to be something that we
can manage per connection object even that we're controlling within Active Directory as we need to.
So we're looking again. This is kind of a placeholder. We've already referenced the idea. We want to
be thinking about our site topology. Well, here are some of the elements we want to have in mind and
we're looking forward, in this slide, to some of the things that we're going to be dealing with in terms
of site-level management.
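The prerequisite mentioned above, access to the Domain Naming Master, can be checked up front. Here is a minimal sketch, assuming the Active Directory module for Windows PowerShell on a domain-joined machine; the role holder name it returns will of course vary per forest:

```powershell
# Identify the Domain Naming Master for the forest; per the discussion,
# site and domain creation both expect this FSMO role holder to be online.
$forest = Get-ADForest
$forest.DomainNamingMaster

# Confirm the role holder is reachable before attempting site creation.
Test-Connection -ComputerName $forest.DomainNamingMaster -Count 2
```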

[Heading: Site Topology Owner Tasks. Site topology owners are responsible for managing site
topology architecture; managing network connections and routers; managing domain controller
placement; managing the Active Directory intersite replication schedule; creating and defining the site
links, intersite transport protocols, and site link bridges; managing bridgehead servers and server
specification and capacity; determining replication failover options, determining global catalog
servers, and determining Universal Group Membership Caching (UGMC); managing site delegation
and security; and manually replicating when necessary. The HeadQuarters-Branch Properties dialog
box is displayed. The dialog box has four tabs – General, Object, Security, and Attribute Editor. The
General tab is open. The header in the General tab reads, "HeadQuarters-Branch." Below the header
are the Description text box, the Sites not in this site link section, the Sites in this site link section, the
Cost spin box, the Replicate every spin box in minutes, and the Change Schedule button. The OK,
Cancel, Apply, and Help buttons are at the bottom of the dialog box. The two entries in the Sites in
this site link section are EasyNomadTravelSite01 and EasyNomadTravelSite02.]
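The initial layout steps described above, defining the site, adding subnets, and attaching the site to a site link, can be sketched with the ActiveDirectory module cmdlets. The site name and subnet here are hypothetical; DEFAULTIPSITELINK is the link created with the forest:

```powershell
# Create the site object. It lives in the Configuration partition,
# which is why the change is effectively forest wide.
New-ADReplicationSite -Name "Branch01"

# Associate an IP subnet so clients in it localize their logons to the site.
New-ADReplicationSubnet -Name "10.10.20.0/24" -Site "Branch01"

# Attach the new site to an existing site link so it can replicate.
Set-ADReplicationSiteLink -Identity "DEFAULTIPSITELINK" -SitesIncluded @{Add="Branch01"}
```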

3. AD replication components
Sites and services do not exist in a vacuum, right, they're going to be integrated with all of the other
elements. And just like every other aspect of Active Directory, they're going to be dependent upon
other protocols like DNS for being able to look it up and then it's primarily based upon an RPC over
TCP/IP-based connection, though it can also be based upon e-mail driven secure SMTP delivery of
some of the data, but only for very confined purposes. And of course, we know this is
associated with the Net Logon service which will be doing some of our registration of records and
those elements. So the way that sites and DNS fit together and the different objects that you build
and the way that the site links work, again, that's best seen in a, I think, in a demonstration where we
can see these moving parts and how they fit together. So let's take a look at this interactively, and I
think it'll make a lot of sense and the factors that you're seeing here will make sense as well.

[Heading: Replication Components. The different services required to enable site functionality include
DNS service, RPC/IP service, SMTP/IP service, and Net Logon service. In the illustration, a main
domain controller is connected to five domain controllers. These five domain controllers are arranged
in two sets. One set contains three domain controllers and the other set contains two domain
controllers. The set with three domain controllers has two subnets as 172.20.118.0/24 and
172.30.68.0/24. The set with two domain controllers has two subnets as 192.168.0.0/16 and
192.168.23.0/24. WAN links the two sets. Both the sets have a two-way RPC/IP connection between
the domain controllers. Intrasite replication happens inside each of the two sets. The two sets have a
two-way RPC/IP or SMTP/IP connection. Intersite replication happens between the two sets. Intersite
messaging enables SMTP replication and Net Logon site coverage calculations. Net Logon registers
DNS SRV site-specific resource records. A three-column table stating the differences between
intrasite and intersite replication is also given. The table contains four rows. The three column
headers are Attributes, Intrasite, and Intersite. The first row states that replication traffic is not
compressed intrasite but is compressed intersite. The second row states that the interval is automatic and configurable in intrasite
and scheduled and configurable in intersite. The third row states that the connection is all domain

controllers in a dual ring in intrasite and according to site link cost in intersite. And the fourth row states
that the protocol is RPC over IP in intrasite and RPC over IP or SMTP in intersite.]
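The connection objects and manual replication discussed above can be inspected and triggered with a couple of commands. A sketch, where "DC01" is a placeholder domain controller name, and, as noted, this takes domain rather than enterprise credentials:

```powershell
# Show each inbound replication partner for a DC and when it last succeeded.
Get-ADReplicationPartnerMetadata -Target "DC01" |
    Select-Object Server, Partner, LastReplicationSuccess

# Classic repadmin equivalent: synchronize all partitions with all partners
# (/A all partitions, /d identify servers by DN, /e cross site boundaries,
# /P push changes outward).
repadmin /syncall DC01 /AdeP
```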


Active Directory Site Links


Learning Objectives
After completing this topic, you should be able to
describe the key site link settings that define replication traffic
identify the utility that can be used to examine replication hardware

1. Site links and AD


So you just had a look at some of the important replication components of the sites, site links, and the
subnet objects, and we talked about those services that contribute to replication. Now, what we want
to do is we want to talk about some of the problems that you can solve based on careful design
around your site link parameters. In other words, we want to talk about how you control replication
between the sites and some of the factors to be thinking about. So what I want you to do is assume in
this diagram here that in every one of these sites, there's a domain controller. I also want you to
assume that this is just one domain, okay. So we have no redundant paths as far as intersite
replication is concerned because we're using the spanning tree algorithm. So,
Jacob, here is my question for you. We've got a domain controller in the shipping site, so we know
that there is this component called the ISTG who determines who it's going to replicate with. Does the
domain controller in the shipping site replicate with the domain controller in research or does it create
a replication relationship with the DC in headquarters, and of course, the important question is why?
Well we see that shipping has a site link directly tying itself to both of those sites. And so the initial
thought might be, well then it's going to form a bridgehead server that establishes a connection with
the bridgehead server in both of those sites, but got to remember that the Intersite Topology
Generator running at each site coordinating with the other sites is responsible for ensuring that there
is one best path that can deliver traffic throughout the entire Active Directory forest for any
replication partition. That means that although redundancy can be built in, it is not going to be
utilized except in a failover situation. So the shipping site is only going to build an actual connection
object between one domain controller and another in the lowest-cost situation. That's why we ascribe
cost to these links: so that we can indicate to the Intersite Topology Generator our preferences
for how Active Directory information should flow. And shipping has a 200 cost with headquarters and a 1000 cost with the research
site, alright. Which of those costs is going to be cheaper for getting to that headquarters site? It's
going to be the 200 and so that's what the ISTG is going to use to establish that connection. And then
we can see headquarters has the ability to connect to research cheaper than anybody else
with the cost of 300, and headquarters has the ability to establish the cheapest cost with production
with 200. So, essentially, we end with headquarters being a hub site having bridgehead connections with
the other three sites, and the other three sites having only one bridgehead server back to
headquarters. Does all the information have the ability to replicate to everyone? Yes, and because of
the way that we created those site links, this behavior can occur, right. We didn't string this out into
one big horseshoe. Instead, we've designed it with our site links to have this hub-and-spoke topology.
But we do have that additional link going across the top of the cost of a 1000, but that link, although
present, does not necessitate that the connection object is going to be used. It means it is possible to
use it as a point of flow, with a set of instructions, right, an interval and a schedule that's associated
with that cost, to allow for a connection object between shipping and research, but it doesn't need to be
used because there are cheaper costs available, right. So what would happen though if the domain
controller or all the domain controllers in headquarters were unavailable, so that those shipping site

domain controllers cannot directly talk to them anymore? Well that's a great question. I appreciate the
fact that you did specifically say all the domain controllers were down, right, because the ISTG would
just pick an alternate bridgehead server if only the one functioning bridgehead server
went down initially. So, in the case where they're all down and none of the domain controllers are
available, well now, the shipping site's ISTG is going to be
faced with essentially the task of recalculating, well, who can I replicate with, right? Changes are going
to be made. I need to get them out the door to other sites that are online. Now, at this point, we have
to remember that, by default, all sites have something called site link bridging enabled, and site link
bridging is there to support essentially crossing over a site that is dead for whatever reason. It can be
dead in one of two ways, either it's dead because it doesn't hold the partition, right, maybe shipping
and research share a domain, headquarters is a different domain, and I have to replicate essentially
through bridging through headquarters to get from shipping to research, or it's dead like we see right
here and headquarters is just offline for our domain, but site link bridging says it's okay to aggregate
the cost of going through a site, come up with a cumulative value, and to enable replication based
upon that value. So, in other words, we have a 200 link to headquarters, a 300 link to research.
Because headquarters is offline, shipping could form a connection to research or, keep looking,
shipping could establish a connection to production. Let's look at that cost. That's 200 to get to
headquarters, 200 to get to production, so that ends up being, right, the cheaper cost in this case,
and so, shipping is going to try and establish a connection with the production site because that has
the lowest cost of any of the site link bridges, and all the bridges still have a lower cost
than that site link that is directly available between shipping and research. So, although that's been
set up, it has such a high cost and it's still not being called upon in this case. So a connection object
can be built with a nonadjacent site if site link bridging is enabled. Now, site link bridging had been
disabled, but then, the only game in town would be to use that 1000 link and follow the rules and
regulations, the interval and schedule, based upon that 1000 link between shipping and research.
Does that make sense?

[Heading: Site Links. The example shows four sites – the shipping site, the research site, the
headquarters site, and the production site. The cost for replicating the domain controller in the
shipping site with the domain controller in the research site is 1000. The cost for replicating the
domain controller in the shipping site with the domain controller in the headquarters site is 200. The
cost for replicating the domain controller in the research site with the domain controller in the
headquarters site is 300 and the cost for replicating the domain controller in the headquarters site
with the domain controller in the production site is 200.]
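The failover choice just described boils down to comparing aggregate bridge costs; the arithmetic, mirroring the diagram's values, can be sketched as:

```powershell
# Direct (high-cost) site link from shipping to research.
$direct = 1000

# Bridged paths add up the costs of the links they cross.
$viaHQtoResearch   = 200 + 300   # shipping -> HQ -> research  = 500
$viaHQtoProduction = 200 + 200   # shipping -> HQ -> production = 400

# With all the headquarters DCs offline, the cheapest remaining path wins:
# 400 (production) beats 500 (research) and the direct 1000 link.
```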

It does and you...the cost, of course, are going to be an important factor in the ISTG's calculations.
But you also mentioned that a schedule's applied to these links, how does the schedule affect the
determination? Well you know it's possible, maybe, let's imagine all of these hub links, 200, 300, and
200 down there, let's say they only applied Monday through Friday, and the weekends weren't
available for replication with the headquarters site because we do massive deployments of
patches and other things during which we don't want any replication going on. Well, if you
were to do that and the 1000 link up at the top was a seven days a week schedule, was available at
any time. Then, at that point, that link would have the ability to kick in and replication could be
established between shipping and research, and it would be, again, based upon the interval of
that particular site link. Maybe they only replicate once every 12
hours, but they will replicate during that weekend window because it's available in the schedule when
the normal schedule for connection objects was not available. So, when the schedules
overlap, that's when the costs really come into play. If they don't overlap, if it's one
schedule or the other, then I use whatever is available within that window of time. So that's going to
kick in at that point. So site link bridging is the ability to replicate through a site to another site and the
factors that determine those replication partnerships is going to be the cost of those links and the

availability of those links, the schedule of those links. What happens if you disable site link bridging?
If we disabled site link bridging, then we're going to either have to manually create bridges or
explicitly create site link objects describing every connection between every site in order to enable
replication flow. That may be fine, but that would mean that if there are any breaks in that connection
that if there are no site link bridges, right, we've got a chain of links, and that's really what it becomes;
if any link breaks, then you've got two isolated information silos that
cannot reconnect until we get that intermediate site back up and online. Now, with this configuration
that we see right here, you know, if research or production or shipping were to be lost temporarily,
since everything was going through headquarters anyway, it's no big deal. But as we said, if the
hub site of headquarters was lost, then, you know, production way down there in the bottom gets left
out in the cold unless I were to manually create a site link bridge combining the production-to-headquarters
link with either the shipping-to-headquarters or research-to-headquarters link, the top links and
the bottom link, to enable that connection. So we don't typically want to disable site
link bridging unless my goal is to say, you know what, because of firewalls and other things,
or the type of wide area network that I have, I know certain links physically
won't make sense; then I can disable site link bridging and explicitly create only the links that are
physically possible. In other words, if a domain controller in shipping tries to reach a domain controller
in production and we know that's just never going to work because that site has been restricted by
firewalls or maybe that's a research site, then we could disable site link bridging and enable only the
bridges that would be valid according to the firewalls. That's going to end up with a better performing
Active Directory infrastructure. Again it's the exception rather than the rule. I love site link bridging
and the fact that it means links are used, bridges and aggregate costs and combined schedules can
be used, and then the opportunity to replicate is always done according to that spanning tree
algorithm principle where we find one best way for all sites to replicate to all others, but we can
piggyback over dead sites if necessary. And I think that's a good point. And I think what you brought
into your answer that I just want to emphasize is that "why would you ever consider disabling site link
bridging?" And in a nutshell, it's a situation where you have networks are not fully routed or a situation
where you might have DCs behind firewalls as you described in that particular scenario. So Jacob,
one last question, how would multiple domains affect this? Right, we've been kind of assuming a
single domain model for all of this, right. Well, let's say, as we said, that headquarters is in a
different domain: the branch locations, right, production, research, and shipping belong to
domain B, and headquarters belongs to domain A. Well then, you have to consider the schema, the
configuration, and the global catalog, and follow all the rules that we've already described. But now,
we've got a partition of Active Directory that is not going to necessarily be contiguous. It's not shared
throughout everywhere and so why would shipping ever replicate to a headquarters' site domain
controller data that it doesn't need? And the answer is it won't. It won't do that at all, so that's where
the site link bridging comes into play. So, for shipping to replicate Active Directory data, it could use a
link of a 1000 to communicate with research, 500 to communicate with research through the
headquarters site link bridge assuming that's available, or 400 to communicate with the production
site. So again, it becomes as if the headquarters' site was offline with regards to that data partition. It
really is as if it just wasn't there, so site link bridging comes into play. The aggregate costs are
compared against any standard site link. There is no preference for a site link over a site
link bridge; it's a cost, and whatever has the cheapest cost will determine the connections that are formed
by the spanning tree algorithm per partition, alright. Those of you who worked in the
switching world who are familiar with Per-VLAN Spanning Tree Protocol, this is Per-Domain Spanning
Tree Protocol, that is being used across the different sites.

[Heading: Site Links (Continued).]
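The cost and interval parameters discussed throughout this section map onto site link properties. A hedged sketch of creating the two shipping links from the diagram; the names and values mirror the slide, and a custom replication schedule would be set separately:

```powershell
# Preferred, low-cost link to headquarters, replicating every 30 minutes.
New-ADReplicationSiteLink -Name "Shipping-HQ" `
    -SitesIncluded Shipping,HeadQuarters `
    -InterSiteTransportProtocol IP `
    -Cost 200 -ReplicationFrequencyInMinutes 30

# High-cost backup link to research, replicating only every 12 hours.
New-ADReplicationSiteLink -Name "Shipping-Research" `
    -SitesIncluded Shipping,Research `
    -InterSiteTransportProtocol IP `
    -Cost 1000 -ReplicationFrequencyInMinutes 720
```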

2. Demo: PowerShell replication options

Let's take a look at Active Directory replication options via PowerShell. So, here in the PowerShell
environment, what we're going to do is just execute a couple of scripts that'll give us an opportunity to
see what options are available here in the PowerShell environment. So, obviously, a lot of times, you
just want to start off with something safe where you can get some information. And we can Get-
ADReplication information in a lot of different ways. We can get information about sites, site links, site
link bridges, replication failures, there's a ton of query components related to the replication process.
They require filters, but if you just want to enumerate all of them, then we can simply throw in the
asterisk there and you can see it's now enumerating all three of the sites that I have – Portland,
Phoenix, and Boston. When we create connections, remember, between our sites using site links, so
let's take a look at that. Here is our site link. Of course, if I don't put in the filter, then it says, "wait, fill
that property in for me." And we have the cost, the replication schedule, the replication interval, and
the names of those particular site links that I've got established here, okay, these IP site links.

[The PowerShell command prompt window is open. The first line in the command prompt window
reads PS C:\Windows\system32>. In this line, the instructor types Get-ADReplicationSite –filter * and
presses the Enter key. The command prompt is populated with entries about all the three sites –
Portland, Phoenix, and Boston. In the last line of the command prompt entry that reads PS
C:\Windows\system32>, the instructor types cls to clear the command prompt window of all entries.
The first line of the command prompt window again reads PS C:\Windows\system32>. Next to this
line, the instructor types Get-ADReplicationSiteLink and presses the Enter key. The command prompt
window is populated with three new lines of entry – cmdlet Get-ADReplicationSiteLink at command
pipeline position 1, Supply values for the following parameters, and Filter. Next to the line that reads
Filter, the instructor types * and presses the Enter key. The command prompt window is populated
with the Boston-Portland and the Phoenix-Portland site link details.]

Now, remember, if we have disabled site link bridging, then it may be relevant to get the site link
bridge information where you can see, currently, we don't have any. So let's try building a new site
link bridge. That's going to be new-adreplication and you can see we can build site links, site
link bridges, and again, I love the ISE-based PowerShell environment, really great for bringing up the
various different options, which you're going to need to fill out even if you're not using the console, the
script console on the right. So let's go ahead and do the site link bridge. I'll need to give it a name. I'll
call this the PhoenixToBoston site link bridge. And then there is the
–InterSiteTransportProtocol. Is this going to be IP or SMTP? It's going to be IP-based,
the standard type, and then –SiteLinksIncluded. We want to bring in the Boston-
Portland link and the Phoenix-Boston link, okay, those two components.

[The PowerShell interface is displayed. The last line of the command prompt window again reads PS
C:\Windows\system32>. In this line, the instructor types Get-ADReplicationSiteLinkBridge –filter * and
presses the Enter key. The last line of the command prompt again reads PS C:\Windows\system32>.
Next to this line, the instructor types cls to clear the command prompt window of all entries. The first
line in the command prompt window again reads PS C:\Windows\system32>. In this line, the
instructor types New-ADReplicationSiteLinkBridge –Name PhoenixToBoston
–InterSiteTransportProtocol IP –SiteLinksIncluded Boston-Portland, Phoenix-Boston and presses the
Enter key. The command prompt window displays an error.]

That should be Phoenix-Portland. Get these correct. There we go, name it correctly, and
everything works like it's supposed to. So now, if we run that earlier script to enumerate the bridges
here, we have the PhoenixToBoston bridge. Here is its name, it's a bridge, unique ID, and the
site links that are nested within it. Just to show once again with PowerShell, the ability that you have
to work some mass effect, I'm going to get all the bridges that I currently have and remove them. So
get all the bridges because the filter is asterisk and then I'm piping that to remove the site link

bridges. Says, "are you sure?" and I am. Easy enough and so, once again, if we go back to just listing
out all the bridges, we're back to a clean slate.

[The PowerShell command prompt window is displayed. The instructor changes the entry in the
command prompt to rectify the error and the command prompt entry now reads PS C:\Windows
\System32> New-ADReplicationSiteLinkBridge –Name PhoenixToBoston –InterSiteTransportProtocol
IP –SiteLinksIncluded Boston-Portland, Phoenix-Portland. The instructor then presses the Enter key.
The last line in the command window again reads PS C:\Windows\system32>. In this line, the
instructor inputs Get-ADReplicationSiteLinkBridge –filter *, presses the Enter and information about
the Phoenix to Boston site link bridge is displayed. The last line in the command prompt window
again reads PS C:\Windows\system32>. In this line, the instructor types Get-
ADReplicationSiteLinkBridge –filter * | Remove-ADReplicationSiteLinkBridge and presses the Enter
key. The Confirm dialog box appears. The standard text in the Confirm dialog box reads, "Are you
sure you want to perform this action?" The five buttons in the dialog box are Yes, Yes to All, No, No to
All, and Suspend buttons. The instructor clicks the Yes button and the Confirm dialog box disappears.
The command prompt window is again displayed. The last line in the command prompt window again
reads PS C:\Windows\system32>. In this line, the instructor types Get-ADReplicationSiteLinkBridge
–filter * and presses the Enter key. The last line of the command prompt window again reads PS
C:\Windows\system32>.]

So Active Directory, PowerShell management for replication, certainly well within your grasp to be
able to create, if necessary, very complex structures and to rebuild anything that's necessary if
something should go awry in your administration.

[The PowerShell command prompt window is displayed.]


Domain Controller and Global Catalog Placement


Learning Objectives
After completing this topic, you should be able to
recognize the factors that should be considered when planning the deploying of
remote writable domain controller
describe the directory-related tasks supported by the global catalog service

1. Domain controller and service placement


Again we've been focusing on replication and sites, and the idea of placing our domain controllers out
there in the network and enabling support for localized logons and Active Directory integrated
applications, right, that's the beauty. But you see these questions here, we're not done, alright, we
have to also consider, you know, is the site even worthy of a domain controller? If it is worthy of a
domain controller, is it worthy of being a global catalog domain controller, or is it the flip side, a
potentially unsecure environment that necessitates a Read-Only Domain Controller? And then, we've
got these Flexible Single Master Operations roles that we start off with, right, and we need to make
sure that those are accessible and so are they accessible to the administrators who might be using
those functions or to the domain controllers that are using those functions? Have we enabled enough
domain controllers to support all the users in a particular logon or services or applications? And then,
don't forget DNS, you know DNS can piggyback with the Active Directory domain partition, but it
doesn't usually. Usually, it's a part of the domain DNS partition or the forest DNS partition. That's a
whole other replication boundary, and it's going to follow all the same rules as everything we've just
described. It's going to have to utilize sites, site links, site link bridging in order to enable the
replication of your DNS information, especially when we enable that forest wide for the MSDCS
partition or the other Active Directory domains that you choose to propagate for local name resolution
to be available throughout the enterprise environment. So these are important questions that all
drive which domain controllers are located where. Do I have them in the right spot, so that I can
ensure continuity of my business throughout working with Active Directory?

[Heading: DC Placement. There are several questions that should be asked when placing domain
controllers in the network. These questions are as follows: Where are the forest root and regional
DCs needed? How many DCs are at a location? What is the location of the FSMO, the global catalog,
and the RODC? And is the AD integrated with DNS?]
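One way to start answering these placement questions is to inventory what is already deployed. A sketch, assuming the ActiveDirectory module is available on a management box joined to the domain: this lists each DC with its site, global catalog flag, RODC flag, and any FSMO roles it holds.

```powershell
# Survey current DC placement before redesigning it.
Get-ADDomainController -Filter * |
    Select-Object Name, Site, IsGlobalCatalog, IsReadOnly, OperationMasterRoles |
    Format-Table -AutoSize
```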

To help answer the question about domain controller placement, we've got this little diagram here to
ask you some of those key questions and I really like this because this gets to the heart of the matter.
The heart of the matter when asking does this location need a DC or not? Well it has a lot to do with
that WAN link. You've got users in that location, it's a remote location. Can they effectively, frequently,
and reliably authenticate to a domain controller in a remote location in that central office? If the
answer to that question is no, the WAN link is not reliable, it falters, or it's not sufficient to
handle those kinds of frequent authentications, then we're in a situation where we have to
decide to put a DC close to them. By proximity, they'll be able to
authenticate to a local domain controller whether that WAN link is available or not, and they can
continue, of course, with the mission, with their job. Now, if we're going to put a DC in that location,
the next set of questions are really important, and that has to do with whether or not we're going to
put a writable domain controller in that location or we're going to resort to using a Read-Only Domain
Controller. Now, the difference between these two is a writable domain controller can contain


sensitive information. It can be changed, might have administrative passwords, and application
secrets whereas a Read-Only Domain Controller filters out many of those secrets and doesn't store
any passwords by default unless an administrator elects to allow the branch office users to store their
passwords. So that's really effective because the Read-Only Domain Controller can minimize the risk
and the exposure to Active Directory. If, for instance, that DC walks out of the room for some reason,
you know, someone breaks in, steals the server, they steal a writable domain controller. They've got a
lot more information. So what determines if I'm going to place a writable domain controller or a Read-
Only Domain Controller? Well 90% of the time, you want to put a Read-Only Domain Controller, but
the reason why you would consider a writable is if you have an application at that location that
requires a writable domain controller, like Exchange. So, let's say we have an application that
requires writing to a domain controller, then we start looking at the other questions in this diagram.
And here's the problem: the further we go here, the more expensive it gets. In other
words, when we start installing a writable domain controller, other big questions come up like, okay,
so if I'm going to put a writable there, what about that physical security? I mean that's why I would
elect to use a read-only as an alternative. I can't. I have an application that requires it, so what about
physical security, can I secure it? If no, then you've got an important decision to make. Do you
upgrade that WAN link to allow users to authenticate? Do you address the problem with additional
investment in physical security? So these are all going to be things that are going to cost additional
money. Now, if it can be physically secured, the next question is how are you going to administer it?
Can you administer remotely or do you have a local IT staff there that can actually administer that
writable domain controller? So these are all important questions, and they really show that the further
you go with the writable domain controller, the more expensive it gets. But it might be important to
place a writable domain controller if you have application dependencies. It might be especially
important to put a Read-Only Domain Controller so that users at that location can authenticate locally
and that's going to mean a much better logon experience for them, and it means that they're going to
get right back to work.

[Heading: DC Placement Decision Tree. An example of a domain controller placement decision tree is
displayed. The decision tree example begins by asking if the incidence of WAN failure is frequent
enough to warrant a writable domain controller. If no, then we need to find out if the performance over
the WAN link is acceptable. If the performance over the WAN link is acceptable, then no domain
controller is needed. If the performance over the WAN link is not acceptable or if the incidence of
WAN failure is frequent enough to warrant a writable domain controller, then we need to find out if
there is a directory-enabled application that requires the presence of a writable domain controller. If
there is no directory-enabled application that requires the presence of a writable domain controller,
then we can place a Read-Only Domain Controller at the remote location. If there is a directory-
enabled application that requires the presence of a writable domain controller, then we need to know
if the writable domain controller can be physically secured. If the writable domain controller cannot be
physically secured, then we need to remove the application, provide physical security, and upgrade
the WAN quality. If the writeable domain controller can be physically secured, then we need to know if
the remote writable domain controller can be administered remotely or if there are sufficient local IT
skills. If the remote writable domain controller cannot be administered remotely or if there are
insufficient local IT skills, then we need to provide the required local IT skills. If the remote writeable
domain controller can be administered remotely or if there are sufficient local IT skills, then we can
place a writable domain controller at the location.]

2. Global catalog design


Now the Read-Only Domain Controller and a writable domain controller isn't the only domain
controller role that we might be concerned with. Another domain controller we're concerned with, in
terms of placement, is the global catalog server. So I just got done talking about DCs in general.


What about this role called the global catalog server? Jacob, why do we care about the GC? The
global catalog server, right, if you remember this guy, he's your big book of everything, alright. If you
contact a global catalog server, the idea is that this is going to be a server that maintains, of course,
the schema, the configuration, and its own domain partition. They all hold that. But additionally, it will
hold the partial attribute set of all of the objects of remote domains. Now, you can see here that that
means the native domain controller in domain A has every attribute of every object in
domain A, but it will have the partial set of attributes of all the objects in domain B and C. It's not a
single master operation role. This is a searchable indexed catalog to discover objects, right, to be
able to say with authority, this object does or does not exist forest wide. That's why we build one of
these. So, if I build one and take a standard domain controller elevated to being a global catalog
server in domain C, well then, it will have all domain C's objects, of course, and the partial attribute
set of A and B. Now, when does this come into play? Again as we said, when you do an Active
Directory search using administrative tool or searching for printers or something like that and you
search not the name of the domain, right, it's not something dot com, right. Instead, you are searching
Active Directory, the entire directory. What that means is that the focus has been placed on the global
catalog server, and that your results set could come from any domain, right, it's a universal search
and that's great. Also, one of the things that we can sometimes do to simplify a multidomain tree is,
although we have multiple domains for administrative reasons, to still consolidate to a single e-mail
address and sign-in convention, and we can use the tool of a UPN, right, that's a logon
name that looks like an e-mail address and that UPN extension might be universal. We all use the
same extension, typically, the one that matches our e-mail convention. That means that logons are
simpler. Everyone's familiar with it. It works on the web. It worked locally, great. But if we all use the
same extension, the, you know, I'm Jacob.moran@, right, the name of our company, well I don't know
if that's domain A, B, or C. It's our universal e-mail extension. It could be whatever, so the great thing
is, though, when you sign in with a UPN, that is used by the computer's local domain controller, and
then that local domain controller will then see if that's one of the local objects. If it's not owned by a
local domain, then it contacts a global catalog server and says, "hey, I've got this UPN, this User
Principal Name, you tell me what domain this account belongs to, if any?" Then that global
catalog can provide the redirection to the domain controller to the correct domain to get that user
logged on. Then also, the global catalog is used by the Infrastructure Master to make sure that if
there are changes to names in any particular domain, other domains only capture shadow objects
relative to that name, and the job of the Infrastructure Master is to inventory any essentially
cached shadow object references to security identifiers from remote domains and see if they're still
accurate. What is it tested against? The global catalog, right, one stop shopping for all your shadow
names, and probably, most importantly, for day-to-day administrative maintenance. When a user logs
on, it's important to validate that they've got the right name and password, right, that's the
authentication. But then what comes back to that local system is not just that user's security ID, that's
just the first piece. They also get their global group membership, their domain local group
membership, and their universal group membership so that any rights and permissions that are
associated with any of those group types can be made available for that user, right. If you don't have
the associated group identity, you can't use the group. But here's the problem, if I log on to domain A,
a domain A standard domain controller is only going to be aware of the global and domain local
groups within that domain, right. It is not going to be aware of the universal group membership that I
might belong to in domain B or domain C. I mean, I might belong to a universal group in domain B that
has been assigned, in a group policy, the right "deny logon locally" to this set of desktops, right,
that better be figured out before I ever touch a Start button. And so that is done through contacting
the global catalog because the global catalog is aware of all universal groups getting from any
domain in the forest. And so when I log on to a domain controller, if that domain controller is not a
global catalog, it will contact the global catalog, and find out if my user account belongs to any
universal groups. If so, it will inventory them and add them to my access token as it hands that to me and the
client to build; therefore, any rights that I'm given, any permissions that I'm given or denied will be


found out at first login before I ever see anything else. That is so important that if a global catalog
server cannot be found at logon, logon fails. This is why the placement of the global catalog is so
important. If I don't have a local GC, well okay, maybe searching for Active Directory objects takes a
little longer, maybe my logon time might be a little longer in searching for a UPN if it's not in the local
domain because it has to go connect to a remote one. There's a little more weight to what is going back
and forth, potentially, if we use universal groups in our domain environment. And again, if that global
catalog server is simply offline, if I'm in a remote location and the network link is cut, goes down, and I
have no local global catalog server, well then, I can't log on with the UPN. And even if I figure out my
classic logon convention name that I don't use anymore, well then it will still fail because I can't
validate universal group membership. So these guys are absolutely critical to the logon process and
you cannot be an autonomous location unless you support access to a global catalog. Would you say
that's true or am I going too far there, Jason? No, I think that's true. In fact, let's talk about global
catalog placement right now.

[Heading: Global Catalog. Global catalog finds Active Directory objects, hosts user principal names,
validates object references within a forest, and supplies Universal Group Membership information in a
multiple domain environment. In the example, three domains, domain A, domain B, and domain C are
displayed. If domain A is designated to be the global catalog server, then domain A will have all the
object attribute values of domain A and the partial attribute sets of domain B and C. If domain B is
designated to be the global catalog server, then domain B will have all the object attribute values of
domain B and the partial attribute sets of domain A and C. If domain C is designated to be the global
catalog server, then domain C will have all the object attribute values of domain C and the partial
attribute sets of domain A and B.]
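A quick way to see which servers advertise the global catalog role, and which UPN suffixes are in play, is to query the forest object. A sketch assuming the ActiveDirectory module; ug.ad is the example forest name used in this course's diagrams, and clients locate GCs through the forest root's _gc._tcp SRV records as shown in the last line.

```powershell
# Which servers are global catalogs, and what UPN suffixes exist?
Get-ADForest | Select-Object -ExpandProperty GlobalCatalogs
Get-ADForest | Select-Object -ExpandProperty UPNSuffixes
# DNS view of GC discovery; ug.ad is the course's example forest root.
Resolve-DnsName -Type SRV "_gc._tcp.ug.ad"
```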

We have another decision tree here to help you kind of determine how to place or when to place a
global catalog server at the local location, if I can say that. So imagine, again, you got multiple offices.
The question, you know, that comes up is, does this particular location need a GC or not? Not every
domain controller is necessarily going to be a global catalog server. So some of the important factors
here, Jacob talked about the importance of a global catalog server, the role that it plays. It's especially
important in multiple domain environments; especially important if you have a forest that includes
additional domains or objects that can actually be queried from across those trusts. So I want you to look
at this tree and identify the endpoints here. There are basically three possible outcomes. One, you
place a domain controller with a global catalog at that particular location. Two, you don't
place a global catalog server at that location, which could mean there's still a DC there, but you're not
using the global catalog feature. Or three, you place a domain controller that's not a global
catalog server, but to address the issue that Jacob talked about in terms of universal groups, you're
going to enable a feature called Universal Group Membership Caching. It really is kind of an in-between,
having a GC there versus not having any information at all. So what would determine putting
the global catalog server there? Well first of all, do you have an application that really needs to
consume global catalog information, so you got a large forest? You have an application of that
location like Exchange who needs to query the global catalog frequently, well then, you need to have
a GC. Like Jacob said, that's critical. Another important question here that's going to dictate putting a
GC there is just the sheer number of users. Do you have a population that just demands frequent
access because the assumption here is if you've got a lot of users, then you're going to have a lot of
queries. Another important question is that WAN link again, how available is it, how reliable is it, does
it support frequent authentications? In other words, would it allow the users to query a global catalog
server over that location over, excuse me, over that WAN link. You know if it, you know, we don't have
an application that requires a GC, you don't have a lot of users, you have a reliable WAN link, that's
the only time that you would not consider a global catalog server. And then the last one has to do
with, okay, so I don't have a great reliable link. I don't have a lot of users. I don't have an application,
but the users I do have require access to a global catalog server so that they can authenticate. So we


got new people showing up and they need access to that universal group membership information,
where are they going to go to get that? Well if the WAN link is not available, one solution is to put a
global catalog server in that location, but here's where the alternative comes in. You can have a
regular domain controller provide the universal group information without the additional burden that a
global catalog server bears. In other words, global catalog server, remember, indexes information
across the forest. So, in a multidomain environment with lots of information, the global catalog
server's burden is going to go up because all of that information has to be replicated. If you look at
the actual schematics, if you will, of Active Directory, remember AD is partitioned, so Active Directory
basically is made up of a configuration partition, a schema partition, and then the domain
partition. All of those containers contain important object information. In the global catalog server,
there's additional partitions for every domain that it has to keep track of. Now, Jacob was pointing out,
it's not a full copy of all of the other domain controllers' information, otherwise it would just be, you
know, a DC of that other domain, but it contains frequently searched information. Nevertheless,
it's an entirely separate additional partition that has to be replicated. So to minimize replication over
an unreliable link where universal groups are still needed, did you get that? In order to minimize
replication and yet still provide universal group membership, you can enable what's called Universal
Group Membership Caching, that's a mouthful, but it simply means you got a domain controller at that
location. It's not a global catalog server, but universal group information is cached. So when a user
logs on the first time, they send a query across the WAN link and find a GC, retrieve information
about the universal groups that they might be a member of, maybe none. So minimizing the use
of universal groups actually also helps with this, but if they are a member, it comes down and it gets
cached on their local DC and subsequent logons can occur without interruption because that
universal group information at that office has been cached. The only time that's really not a good
option is if you've got users from around your forest, from all the various domains, and they visit this
branch office, and they are roaming users, because that means you have a lot of people going across and
always asking the GC on the remote side of a WAN link for a universal group, and so then, you'll be in
a situation where it's not going to be a good experience for those roaming users. So those are a lot of
factors to think about, but ultimately it comes down to, to GC or not to GC. You have to have a GC
most of the time, right, to support those good authentication experiences, and I think it's worth
mentioning, Jacob, that if I have a single domain, if I have a single domain, and I have a single forest,
then this problem about to GC or not to GC is easily answered for me. Why is that? What does it do,
right, if you make it a GC? It says, "oh, let me take on the extra replication weight and burden of
storing the partial attribute set of other domains in the forest." Well, if there are no other domains in the
forest, then there is no weight. There's just a flag in Active Directory that says, "I'm a GC, if you have
any questions you can ask me and I know it all." It takes care of the responsibility without adding any
extra burden and so in a single domain forest, you make every server a global catalog, end of story,
and, in fact, in Server 2012 for the first time, Microsoft has started the Active Directory installation
process by assuming that you want that checkmark for global catalog enabled not just on the first
domain controller in your forest, but on every domain controller that you install and this is why most of
our Active Directory environments, right, the vast majority are single domain environments where this
is not going to add any burden, and it's going to greatly simplify and enable applications like
Exchange, right, the global address list, looking up these accounts, the logon process,
even the remote locations, to just say, "okay good, you're a global catalog, I don't have to go anywhere
else, end of story." When we have multiple domains within our forest, then we have to weigh that and say if I add the global catalog, I increase
replication, but I decrease logon time, right, because the data is now closer to the user. If I don't have
it be a global catalog, I decrease the amount of replication going on, right, and there's less burden
going on in that particular DC, but now, I've got to go to that remote site to find a global catalog or to
that other server to find that global catalog and get the information I need. That's why I like the
Universal Group Membership Caching because, typically, in many situations, that is going to be one
of the core needs that the global catalog provides. It's easily distributed through that cached model. I


think that's a great solution for a lot of networks, but, you know, the exchange global address book,
being determined and relying on access to the global catalog to get that information, it's a great
example of a Active Directory integrated application. A lot of folks have it, and so I'm ensuring that,
that global catalog is adjacent to the users who are going to be doing that search is going to be
extremely important.

[Heading: GC Placement Decision Tree. The GC placement decision tree is displayed. The decision
tree begins by asking if there is any application that needs a global catalog server running at the
location. If no, then we need to ask if the number of users at the location is greater than 100. If the
number of users at the location is greater than 100 or if there is an application that needs a global
catalog server running at the location, then a global catalog server can be placed at the location. We
also need to ensure that the server does not host the infrastructure master role in a multidomain
forest. If the number of users at the location is less than 100, we need to know if the WAN link is
100% available. If yes, then we need to find out if the logon performance for the roaming users is
acceptable. If it is acceptable, then do not place a global catalog server at the location. If the logon
performance for the roaming users is not acceptable, then we can place a global catalog server at the
location. If the WAN link is not 100% available, then we need to know if many roaming users work at
the location. If there are many roaming users that work at the location, then a global catalog server
can be placed at the location. If not many roaming users work at the location, then place a domain
controller at the location and enable Universal Group Membership Caching.]
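If the tree lands on "place a global catalog server," the checkbox in Active Directory Sites and Services corresponds to bit 0x1 (NTDSDSA_OPT_IS_GC) of the options attribute on that server's NTDS Settings object. A minimal sketch, with hypothetical server (DC03) and site (Site2) names, and the caveat noted in the comments:

```powershell
# Hypothetical DC and site names for illustration only.
$ntds = "CN=NTDS Settings,CN=DC03,CN=Servers,CN=Site2,CN=Sites," +
        (Get-ADRootDSE).configurationNamingContext
# Bit 0x1 advertises the DC as a global catalog. Caveat: -Replace
# overwrites any other bits already set in options; in production,
# read the current value and OR the bit in instead.
Set-ADObject -Identity $ntds -Replace @{options = 1}
```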

Here's a look at a concept we talked a little bit about already. It's called Universal Group Membership
Caching. Remember a universal group is a security group that really is kind of forest wide. It's a group
that can contain members from any domain and it might be useful if you have a forest with multiple
domains and you need to represent users for the entire forest, so like a forest wide sales group or
forest wide marketing group or forest wide research group or something like that. Now, there are
several things to be aware of, however, in regard to universal groups. If you use them, users need to
know about whether or not they're part of a universal group at logon and that means global catalog
access. So, a user during the authentication for the first time, they're going to query the global catalog
server to locate a universal group. Now, if you're in a situation where you've got a lot of remote
offices, the question comes up and that is, well, can I put a global catalog server here? If I put a GC in
these remote offices, well, those universal groups, as well as all of the other information that a global
catalog server has a burden for gets replicated over that slow link to the remote location. To kind of
ease the burden of replication over that link, instead of putting a full-blown global catalog server there,
you can use a standard domain controller, so it will still replicate domain-specific information, but then
you can enable universal group caching, and what that means is that for local users who authenticate to that
domain controller, the first time they do so, they'll find out what universal groups they belong to,
but then it gets cached on that local domain controller. So that simply eases the burden a bit on that
WAN link. And that's all good and well provided you have a small number of users there and you're
using universal groups. If, however, you've got other reasons to put a full-blown catalog server there
like the frequent access to the global address list or you've got roaming users or you've got more than
just a handful of users, well, that might actually change your design a bit.

[Heading: Universal Group Membership Caching (UGMC). In the illustration, the two
domains, ug.ad and USA.ug.ad, trust each other. The ug.ad domain contains three domain controllers,
DC02, DC01-GC, and DC03. Slow link, WAN separates the DC03 domain controller from the other
two domain controllers such that two sites get created, Site 1 and Site 2. DC03 is in Site 2 and DC02
and DC01-GC are in Site 1. Site 2 is a remote site with no local global catalog server but has
Universal Group Membership Caching enabled. In Site 2, the user Al logs on for the first time to
retrieve universal group membership from GC as normal. Following this, the domain controller, DC03
in Site 2 communicates with DC01-GC in Site 1 across WAN and gets a response in return. After


some time, the user Al logs on again to retrieve Universal Group Membership from local cache and
verifies cache after every eight hours.]
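The UGMC behavior described above is switched on per site, not per domain controller; the Sites and Services checkbox sets bit 0x20 (decimal 32) of the options attribute on the site's NTDS Site Settings object. A hedged sketch, using "Site2" to stand in for Site 2 from the illustration:

```powershell
# Enable Universal Group Membership Caching for the example remote site.
# "Site2" is a hypothetical stand-in for the actual site name.
$siteSettings = "CN=NTDS Site Settings,CN=Site2,CN=Sites," +
                (Get-ADRootDSE).configurationNamingContext
# Caveat: -Replace overwrites other option bits; read and OR in production.
Set-ADObject -Identity $siteSettings -Replace @{options = 32}
```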


Flexible Single Master Operations


Learning Objectives
After completing this topic, you should be able to
match each FSMO role to the level at which it operates
match each FSMO role to its correct description

1. FSMO and the Infrastructure Master


So we've talked about placement of a regular writable domain controller versus a Read-Only Domain
Controller. We talked a little bit about the global catalog server and how we might use Universal
Group Membership Caching. Another important question has to do with unique roles and
responsibilities that are called FSMO roles, or Flexible Single Master Operation roles. Now, the
FSMO roles, as many of you might already know, are special operations that are hosted and done by
special domain controllers. In other words, many operations and tasks can be done by any domain
controller. In many ways, Active Directory is multimaster where you can create a user on any domain
controller and it will get replicated within that domain, and so there's no primary server and secondary
servers in most regards. Here are the exceptions. Now, the reason why this is an important factor in
designing an Active Directory is because the roles have special functions. For instance, if
you are in a situation where you're creating batches of users, you need to create a lot of new objects,
and there's a particular role that's responsible for that called the RID master. And so that server
needs to be available during those operations. Maybe, you have a situation where you need to create
a new domain, then you need to have that Domain Naming Master available. And then for time
synchronization and global group policy editing and password updating, there's another very
important server called the Primary Domain Controller Emulator, or the PDC Emulator. So these are
unique functions. Some of them are functions that are per domain. So each domain has one of them.
Each domain has, for instance, a PDC emulator and a RID master and then some of them are per
forest like the Domain Naming Master. You only have one of those per forest. The Schema Master,
the one that guards the schema, the core attributes of the database, you only have one of those per
forest. Now, which domain controllers in your location should play that role? And that's an important
question that you have to ask yourself, what domain controller in my organization would be the ideal
PDC emulator? Which one would be the ideal RID master? Well for that, let's take a deeper look at
what those FSMO roles do, and then talk about which domain controllers should hold them and what considerations we
should have.

[Heading: Flexible Single Master Operations (FSMO). The example shows a forest root operations
master titled ug.ad, a domain operations master titled Japan.ug.ad and another domain operations
master titled USA.ug.ad. At the forest level, the Schema Master defines the object classes and attributes, and the Domain
Naming Master controls the addition and removal of domains. At the domain level, the PDC emulator handles
password changes and the time server, the RID master allocates relative IDs, and the Infrastructure
Master updates object references.]
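To answer "do you know where your FSMOs are," the role holders can be read directly. A sketch assuming the ActiveDirectory module; running netdom query fsmo from a command prompt gives the same answer.

```powershell
# Forest-wide roles (one holder each per forest)
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
# Per-domain roles (one holder each per domain; run once per domain)
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster
```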

2. FSMO role guidelines


Now, one of the questions that an administrator of an Active Directory environment has to be aware of
is where the Flexible Single Master Operation roles are placed throughout their network. Another
way to think of it is, "it's 6 o'clock in the morning, do you know where your FSMOs are?" I mean the
nature of the single master operation roles, the fact that there are great responsibilities held within


single domain controllers that are not available anywhere else means that we have to be careful
about where the placement of those things goes. There are a couple of hard and fast
rules. The main one is that in a multidomain environment in which not all domain
controllers are global catalog servers, you need to ensure that the Infrastructure Master is not on a
global catalog server, so that it can have its own database cross-referenced with the global catalog
database. Now, again, if we have a single domain environment, that's not a concern. And if we have a
multidomain environment where all the domain controllers are global catalogs, then they all have the
up-to-date, accurate information of every domain, right, so there's no concern about name and SID
getting out of sync based on stale phantom information. Now, beyond that, we know that it's important to
ensure that the single master operational roles don't go down. So, even more than standard domain
controllers, it's important to make sure that we're on resilient supportable hardware and, specifically,
remember that PDC emulator, you know how often do we build a new domain or site and need to talk
to the Domain Naming Master? Not that often. How often do we update the schema by installing
Exchange or upgrading to a new version of Active Directory? Not that often. But the PDC emulator
is involved every time an administrator edits group policy, whenever the systems are syncing
their time, every time there's an update to a password, right; all of these are going to be times
when that PDC emulator gets hit, so we want to ensure that it's in a good high-speed location
proximal to, typically, the greatest number of users, not located out on the periphery, right, not in
the wilds. The RID master also needs to be placed typically wherever the most administration is done
because, remember, as we create new objects, each one is going to require a SID, which means
consuming a RID, and so we want that role located where most of that administration of newly
created objects is done. Beyond that, again, we just want to make sure they're online and available
and that we're aware of the recovery process if necessary. So one of the things that we're going to be
looking at demonstrating is how to verify where your FSMOs are located and how to best manage
those down the road.

[Heading: FSMO Placement Design Guidelines. The FSMO placement design guidelines are as
follows: Domain controllers – All DCs hosting FSMO services should be designed with redundant
hardware like RAID and redundant network adapters and with high specification components like
memory and processors. Scripts should be run to validate their availability especially across WAN
links. Infrastructure Master – Do not place the global catalog on a domain controller that hosts the
domain's Infrastructure Master role unless all domain controllers in the domain are global catalog
servers or the forest has only one domain. However, the Infrastructure Master should be hosted on a
server in the same site as a domain controller hosting the global catalog. Domain Naming Master –
Place the Domain Naming Master on the forest root PDC. Unlike Windows 2000, there is no current
requirement that the Domain Naming Master be located on a global catalog server. Schema Master –
The Schema Master should be placed on a domain controller running on highly available hardware.
On smaller networks, the Domain Naming Master and the Schema Master can be placed on a
domain controller which acts as a global catalog server. On larger networks, it should be placed on a
domain controller serving the global catalog and hosted on highly available hardware. PDC emulator
– The PDC emulator should be placed on high availability hardware. To minimize network latency, the
PDC should be placed on the same domain controller hosting the RID Master role. The PDC may be
a heavy consumer of RIDs when supporting down-level applications. The PDC role should be hosted
on a centrally located domain controller. Placing it on a remote site is not good practice. Avoid placing
this role on a domain controller acting as a global catalog server. RID Master – The RID Master
should be placed in a site where most object creation occurs. Placing the RID Master in a physically
inaccessible site is bad practice.]
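The transcript mentions verifying where your FSMOs are located; as a sketch, assuming the ActiveDirectory RSAT module is installed and a domain controller is reachable, the five role holders can be listed like this:

```powershell
# Sketch: list the five FSMO role holders (requires the ActiveDirectory
# module and connectivity to a domain controller).
Import-Module ActiveDirectory

# Per-domain roles: PDC Emulator, RID Master, Infrastructure Master
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster

# Per-forest roles: Schema Master, Domain Naming Master
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster

# Classic command-line alternative that prints all five at once:
netdom query fsmo
```

From there, Move-ADDirectoryServerOperationMasterRole can transfer a role to better-placed hardware; its -Force switch seizes a role and should be reserved for a failed holder.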


The Read-Only Domain Controller


Learning Objectives
After completing this topic, you should be able to
describe the Read-Only Domain Controller (RODC) in Server 2012 R2
recognize the differences between read-only DC and writable DC

1. Branch office systems and RODC


So we've been making sense out of FSMO placement, right. You got these certain domain controllers,
they've got to be in a certain location, but one of the considerations that we have to also bring into
play for any type of domain controller is this idea of branch offices. I mean we dealt with the idea of
sites that you want to control replication, but, Jason, as we are trying to put our best foot forward in
terms of defining Active Directory for branch office locations, what other kinds of considerations
should I be taking into account before I design that network? Well, the branch office presents a unique set of
problems for several reasons. Often you have a situation where you have these extended remote
offices with users who are providing some sort of a key function for the business. However, a lot of
the resources are centralized. They have situations, of course, where we're concerned about security,
so they still need to authenticate even though they're in the remote office. Being a branch office
doesn't make you a second class employee, in some cases, you're integral to the business, you
know, you're a first class employee, so we need to ensure that they authenticate and that they are
secure and they have the resources they need in order to accomplish that business's mission. So
when it comes to the placement of domain controllers, we have several factors that are going to be a
major issue. Some of those factors, of course, relate to that the domain controller, itself, is a machine
that has, well, secrets in it. It has passwords. It performs some really important function. So when we
put that out in a branch office, if we're not thinking about physical security and ensuring the integrity
of the overall infrastructure, well, then we're putting ourselves at risk. For example, Jacob, I know of
an organization and this was a few years ago, but they had a branch office employee who was an
administrator, an up-and-coming Active Directory administrator, but he wasn't trained
well enough to know not to restore backups that were more than 60 days old. He did exactly that
in the branch office, and it caused data corruption throughout the entire infrastructure. So we have
concerns about lower level of expertise in those branch offices in some cases as well. So physical
security is a big issue, but at the same time, we need to provide authentication services in that
infrastructure for those branch office employees, so there's a tension there. And then, of course, there's that
ever present slow link, maybe, it's a link that's not as reliable, one between that branch office and that
central office, so that's also a factor to consider.

[Heading: Branch Office Design Considerations. The branch office design considerations are as
follows: Data security – System data can be either more vulnerable due to the nature of the location
of a branch office or so high level that its presence at a specific location should be prevented.
Hardware security – The physical location is so insecure that important data must be protected on
devices that might be physically compromised or stolen. Expertise – Where the remote location is
such that local expertise is limited, the system must be configured to account for this and system
management should be judiciously delegated. Communications – Where links to remote location are
slow or unreliable, certain system options can be deployed to overcome the potential for the remote
systems and supporting data to become out of date. Replication – In an Active Directory-controlled
environment, the appropriate design of the site's topology is required to maintain the overall
synchronicity of the whole service infrastructure.]


You know, Jason, as you've just described all these challenges that an administrator faces in
designing a branch office, right, protecting it, to trying to ensure that we don't get data corruption that
be through malicious or even inadvertent attempts to rectify Active Directory. It seems like what I'm
seeing here is a list of potential solutions of which some, but probably not all, should be applied to
any particular branch office. Does that sound appropriate? Absolutely. These are great safeguards.
Active Directory has really come a long way in the last ten years, and we have more tools at our
disposal for the branch office than we did before. So, for instance, one good example
of this, Jacob, is the Read-Only Domain Controller. The Read-Only Domain Controller is a huge asset
to the infrastructure because it allows us to place a domain controller in those areas where we may
not have as much physical security or, maybe, we don't have as many trained individuals, but we can
still provide the authentication services for those 100, 50 users or whatever that branch office is
hosting so they can still perform their mission. The Read-Only Domain Controller doesn't contain
administrator passwords and secrets for the other parts of the infrastructure; it's restricted to just
those users in that branch office, so it limits the exposure. We have other ways of reducing the
impact of Active Directory replication traffic over that WAN link too, things like
Universal Group Membership Caching, and all these various features really help us kind of
shape our Active Directory infrastructure so that we can project out to those branch offices, those
authentication services, but we're doing it with more confidence because we're doing it securely.

[Heading: Branch Office Systems. The various features of a branch office system are sites, Read-
Only Domain Controller, RODC DNS, RODC global catalog, RODC administration, Universal Group
Membership Caching, password replication policy, filter attributes set, confidential attributes, and
BranchCache.]

2. RODC versus WDC and BranchCache


I love what you're bringing out here, Jason, and I did just want to add one more thing. There is that
special krbtgt account. Many Active Directory administrators see that account in Active Directory and
just figure, well, I probably shouldn't delete it since it was built in, and are unaware of how important
that particular account is. It's associated with the process of actually receiving Kerberos tickets in the
Active Directory environment. But here's what makes it special on an RODC and I love this. When
you authenticate to an RODC, it is associated with a particular site that, that RODC is a part of. That
means that if I contact an RODC, it's able to give me a session ticket with anything in that local site,
anything where I am at. But let's imagine that, for some reason, the RODC system was
compromised: someone obtained it, reintroduced it later, and attempted to use it in order to
gain access to a resource. The cool thing is that if there already is a ticket created by one of those
RODCs and I try and use it to go get a resource that's up in the central office, it is invalidated in the
sense that just like crossing from one domain to another requires walking the trust path, right, and
contacting another domain controller. When I try and access a resource in the central site, it requires
that I contact my RODC who then contacts a writable DC or another DC in that remote site in order to
get a Kerberos ticket there. So, if there has been a situation where I'm aware that has occurred, well,
once the Active Directory administrators in the central site have removed that RODC account, then
that special krbtgt account which is only associated with that one site set of RODCs will be invalidated
and I will not be able to be authenticated to that remote resource, so the RODCs' tickets are only
locally available, so they only function in that branch office. Normal operation, it's like walking the trust
path and I'm able to get access to the other site's contents, but if it gets compromised, I can pull that
plug very easily so that it's not used against me. Good stuff. So let's make sure that we understand
that role separation and take a look at that in a demonstration so we can see again where the lines
are between domain administrators and the server administrators of an RODC. Another thing we
should talk about in the demonstration is the Password Replication Policy and that filtered attribute set.
Those are both features you're going to want to include in your design because it will improve the
security of your overall infrastructure and improve the management of that RODC. One of the things
that I wanted to bring out in the demonstration, whoever does the demo, is that, from a design point of
view, the default PRP allow group is global. So if you rely on that one group and you've got ten
branch offices with users who don't roam, passwords for all ten branch offices will be on all ten
RODCs. So I think the better practice is to create your own groups for each branch office and use
those to control password replication, so that an RODC at branch office A doesn't have passwords
for branch office B but is restricted to just its own branch users.

[Heading: RODC versus WDC. The differences between a Read-Only Domain Controller and a
writable domain controller are: The Active Directory database – The AD database hosted on a RODC
is read-only and can only be updated after incoming replication from a writeable domain controller.
Interdomain controller data replication – Replication data (the Active Directory data and the SYSVOL
data) is only updated on an RODC from a writable domain controller. Data that is stored in the AD
database – By default, an RODC hosts a copy of the directory database, the SYSVOL folder, but
does not host any credential data. Any RODC-hosted credential data must be specifically configured.
Administration – RODC administration and management can be delegated to standard domain
users. An RODC improves security in the following ways. Replication security – Security is improved
since no malicious updates can originate from a branch office-hosted RODC. RODC filtered attribute
set (FAS) – The replication of application data to RODCs can be controlled within a forest by adding
attributes to the RODC FAS and then marking them as confidential. Password Replication Policy
(PRP) – By default, an RODC has a PRP that prevents passwords being cached on the RODC.
Special krbtgt account – An RODC has a special krbtgt account that also helps to restrict malicious
updates from affecting the rest of the forest. The RODC krbtgt is site specific, so if an RODC is
compromised, a security principal cannot use a ticket that has been maliciously created by a
compromised RODC to access resources in a different site. Administrator Role Separation (ARS)
provides a mechanism for Active Directory domain administrators to delegate both the installation and
the administration of RODCs to any standard domain user without granting them any additional rights
within the domain. RODC installation – An Active Directory administrator can prestage the RODC
account and Password Replication Policy. Then the delegated local administrator can attach to this
account. RODC management – The delegated RODC administrator can log on interactively (locally at
the actual domain controller) and perform periodic and routine maintenance tasks, such as upgrading
a driver or an application, installing other server roles, and performing offline optimization of the hard
disk drives.]
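The per-branch allow-group practice recommended above can be sketched in PowerShell. Every name here (the group, OU, users, and RODC) is hypothetical, and the cmdlets assume the ActiveDirectory module and a writable DC:

```powershell
# Sketch: a dedicated allow group per branch office, instead of the
# domain-wide "Allowed RODC Password Replication Group".
# All names below are examples for illustration only.
Import-Module ActiveDirectory

New-ADGroup -Name "PRP-Allowed-BranchA" -GroupScope Global `
    -Path "OU=BranchA,DC=corp,DC=example"

# Put only Branch A's non-roaming users in the branch group:
Add-ADGroupMember -Identity "PRP-Allowed-BranchA" -Members "asmith", "bjones"

# Allow caching of that group's passwords on Branch A's RODC only:
Add-ADDomainControllerPasswordReplicationPolicy -Identity "RODC-BRANCHA" `
    -AllowedList "PRP-Allowed-BranchA"
```

With one such group per branch, a compromised RODC in branch A never held branch B's secrets.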

3. RODC attributes
Again one of the powerful aspects of an RODC is the fact that you do have the ability to control not
just, which passwords are going to be cached on it, but as we said, do attributes even make their way
to a particular RODC? But before we explain that, let me make sure that everyone understands that
in Active Directory, every attribute definition in the schema carries a value called searchFlags. The
searchFlags attribute defined in the Active Directory schema has a lot of different possibilities in
terms of what it can control. It defines whether or not a particular attribute is
indexed, right, do we build an alphabetical list of this for faster searching later on? It determines, for
example, whether, when you select an object and copy it, the duplicate of the original
object carries over that particular attribute. If we're looking at the Schema editor, right, the schema
management MMC snap-in that you registered, well, you can see a couple of those options defined:
index this attribute, is it copied when duplicating a user? But all of that actually gets meshed
into a single attribute value called searchFlags. You can actually directly view the searchFlags
attribute by using ADSI Edit. Remember that's like Regedit. Be careful with it. You don't want to use
this tool unless you know what you're doing and you've got a specific plan in mind. And what you can see in
this table is that the searchFlags attribute is an aggregate value. So when we see a value of 0x11,
that's hexadecimal 11, that basically means that we have enabled the 1 bit, which says to index it, and
we've added the 16 bit, which in hexadecimal shows up as a 1 in the next position. So 0x11 means yes,
index it, yes, copy it. Again, those are the same two checkmarks that you see there in the schema editor. Now, you'll notice if you
look through this table that we have a lot of different possibilities here, okay. A lot of different
techniques or values that we could ascribe to the searchFlags attribute to get this particular attribute
to perform different actions. Let's take a look here at what we can do with regard to managing
attributes for an RODC.

[Heading: Schema – Checking searchFlags Attribute Status. The searchFlags attribute in the Active
Directory Schema controls the behavior of domain controller indexing and data management. Some
options are available in the Schema editor. Others must be edited manually using ADSI Edit, LDP, or
LDIFDE. A sample schema editor and a sample ADSI Edit are displayed. The sample schema editor
has checkbox options for Index this attribute and Attribute is copied when duplicating a user. The
Attribute Editor tab of the ADSI Edit is open. The Attribute Editor tab has a searchFlags attribute with
the value of 0x11 = (INDEX I COPY). The values of the searchFlags attributes are displayed in a
table. The table has ten rows and three columns. The entries in the first row are 1, (0x00000001), and
(INDEX). The entries in the second row are 2, (0x00000002), and (CONTAINER_INDEX). The entries
in the third row are 4, (0x00000004), and (ANR). The entries in the fourth row are 8, (0x00000008),
and (PRESERVE_ON_DELETE). The entries in the fifth row are 16, (0x00000010), and (COPY). The
entries in the sixth row are 32, (0x00000020), and (TUPLE_INDEX). The entries in the seventh row
are 64, (0x00000040), and (SUBTREE_INDEX). The entries in the eighth row are 128, (0x00000080),
and (CONFIDENTIAL). The entries in the ninth row are 256, (0x00000100), and
(NEVER_AUDIT_VALUE) and the entries in the tenth row are 512, (0x00000200), and
(RODC_FILTERED).]
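Because searchFlags is an aggregate bit field, any value is just the bitwise OR of the flags in the table above. The 0x11 value shown in ADSI Edit decomposes like this:

```powershell
# searchFlags is a bit field; combine flags with -bor and test them with -band.
$INDEX = 0x001   # build an index for faster searches
$COPY  = 0x010   # copy the attribute when duplicating a user

$searchFlags = $INDEX -bor $COPY
"{0} = 0x{0:X}" -f $searchFlags           # 17 = 0x11

# Checking whether a given bit is set:
($searchFlags -band $INDEX) -ne 0         # True: the attribute is indexed
($searchFlags -band 0x200)  -ne 0         # False: not RODC-filtered
```

The same arithmetic explains every value on the slide: the bits simply add up, and ADSI Edit shows you the total.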

One of the powerful ways, then, that we can use this searchFlags attribute is going to be to protect
data that you just don't want pervading your network indiscriminately. And there are two very useful
flags that you should be aware of that we can embed in the searchFlags attribute –
CONFIDENTIAL and RODC_FILTERED. Now, CONFIDENTIAL means that extra
permissions are needed in order to view this particular attribute: instead of just the standard read
access, you also need a special permission available in Active Directory called control access.
So if the user does not have both read and control access permissions, they'll not be able to read the
contents of this attribute, so it's an interesting flag to apply. Typically, what you'll do is you'll allow
application accounts, service accounts, to have this extra control access permission to the specific
attribute that we're locking down with this confidential attribute, might be something like a government
ID that is used to synchronize content between multiple databases because you can't always trust
that the names will be written the same way or that any proprietary ID number can be synchronized
across different database systems. But the government ID will always be the same, but you don't
want to allow users to read it. You don't want that data getting out. In fact, you may have government
regulations that require you to ensure that you are protecting the data to that degree, so confidential
is a great way to do that. But what if you don't even want that attribute stored out on our RODCs? Well
then, in addition to enabling the 128 confidential attribute, we might enable the 512, the RODC
filtered attribute. When you add that value into the searchFlags aggregate number, then this attribute
simply doesn't make it downstream to the RODC because the RODC receives a filtered attribute set.
Now, the global catalogs receive the partial attribute set, right, only the things that are
worth sending to a global catalog for universal search. RODCs receive the filtered attribute set, which
means that certain attributes are screened out. By default, very few things are screened out; they
relate to certificate-related tie-ins and things like that, but we can add our own custom attributes that
we might embed in Active Directory. This RODC_FILTERED trigger sits in searchFlags alongside the
indexing bit, the copy-on-duplicate behavior, and all the other values that we can embed into
searchFlags.

[Heading: searchFlags – Confidential. To protect sensitive custom data integrated into Active
Directory, it is recommended to enable the Confidential and RODC Filter bits in the searchFlags
attribute. The CN=carLicense Properties dialog box is displayed. The dialog box has two tabs –
Attribute Editor and Security. The Attribute Editor tab is open. In the Attribute Editor tab, the
searchFlags attribute is highlighted. A table shows the values of the searchFlags attribute. The table
has four rows and three columns. The entries in the first row are 1, (0x00000001), and (INDEX). The
entries in the second row are +128, (0x00000080), and (CONFIDENTIAL). The entries in the third
row are +512, (0x00000200), and (RODC_FILTERED) and the entries in the fourth row are 641;
(0x00000281); and Indexed, Confidential, FAS. Next the Integer Attribute Editor is displayed. The
standard text in the Integer Attribute Editor reads Attribute: searchFlags. Below the standard text is
the Value text box with the value of 641.]
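The combined value from the slide (1 + 128 + 512 = 641) can be entered through ADSI Edit as shown, or scripted. A hedged sketch follows; the distinguished name is illustrative, and since schema edits are forest-wide and effectively permanent, this belongs in a lab before production:

```powershell
# Sketch: mark a schema attribute as indexed + confidential + RODC-filtered.
# The DN below is an example; locate the real attribute under your forest's
# Schema partition before touching anything.
Import-Module ActiveDirectory

$attr  = "CN=carLicense,CN=Schema,CN=Configuration,DC=corp,DC=example"
$flags = 0x001 -bor 0x080 -bor 0x200   # INDEX + CONFIDENTIAL + RODC_FILTERED = 641

Set-ADObject -Identity $attr -Replace @{ searchFlags = $flags }

# Verify the write took effect:
Get-ADObject -Identity $attr -Properties searchFlags
```

Using -Replace rather than hand-typing the integer keeps the bit math explicit and self-documenting.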

Alright, gang, we've been talking about attributes. And in this case, we're looking at the attribute editor
of an RODC system. And what we're trying to emphasize here is that there's another attribute that
you need to be aware of, a pair of them, that are specific to our RODCs. You have an attribute called
the msDS-RevealOnDemandGroup and the msDS-NeverRevealGroup, and these are the two groups
that control the behavior of Read-Only Domain Controller, credential caching, password cache, right. I
mean, we know that an RODC doesn't keep passwords by default, right. They're not receiving it in
their replicated sets, but they can, in certain circumstances, cache passwords for the users that are
using that particular RODC so that we can, you know, speed up that logon process, get the ball
rolling a little bit faster for the users that are in that site. Again, they're not second-class citizens.
We want to give them the best performance possible. But with that said, Jason, I think we definitely
need to make sure that we're looking at best practices, right, and what are some of the
considerations we're going to want to take into account with regard to how these password replication
policies and groups are managed. Remember, the job of the RODC is to play the role of a domain
controller, so it really has all of the information about Active Directory in it, with the exception of those
attributes that are sensitive. We call those the filtered attribute set; and secrets, like passwords,
aren't included either. So, in terms of a best practice here, Jacob, one of the things we can do is,
well, let me back up for a minute. The basic functionality is such that if the branch user goes to
authenticate to our Read-Only Domain Controller, it can't do it alone. It can't because there are no
secrets or passwords on that Read-Only Domain Controller; it's read-only. So a referral system is built in and
so what will happen is the RODC will contact a writable domain controller and basically proxy for the
user for that authentication attempt. Now, that branch office user needs to continue to function and
continue to do their job and log in the very next day. Well, that same procedure will take place, but
what happens if the WAN link is not available? If the WAN link is down, that user is going to
need to authenticate to that local domain controller, and the proxy authentication is going to
fail; the referral makes no difference. So what we
want to do is we want to preserve and protect Active Directory, at the same time, we want to permit
those local branch office users to authenticate even if the WAN link's not available. And in order to
accommodate that situation, I can use this password replication policy, this password replication
group that allows me to cache passwords just for those local branch office users. So I actually identify
what users I want to permit and what users I don't want to permit. Now, there's an important point
here, Jacob, and that is that allow group. Well it is possible if I have a user who logs into one branch
office and then goes to another branch office that their password might get cached on multiple
RODCs. If I don't like that idea and I want to restrict a user to a very specific RODC, I may not
use the built-in group; I might want to create my own groups to control that
password caching. So that's another thing to consider when it comes to password replication.
You're not limited to the default groups here. You can actually create your own password replication

30 of 42 3/20/2020, 3:16 PM
Skillsoft Course Transcript https://cdnlibrary.skillport.com/courseware/Content/cca/ws_imin_b02_it...

group for each one of your RODCs.

[Heading: RODC Credential Caching. The Password Replication Policy on each RODC server is
based on the msDS-RevealOnDemandGroup and msDS-NeverRevealGroup values that control
password caching by referencing other Active Directory groups. The Attribute Editor tab of the
SERVER2012-V10 Properties dialog box is displayed. The msDS-RevealOnDemandGroup attribute
caches passwords for the Allowed RODC Password Replication Group. The msDS-
NeverRevealGroup caches passwords for the Denied RODC Password Replication Group, account
operators, server operators, backup operators, and administrators. The Denied RODC Password
Replication Group consists of cert publishers, domain admins, domain controllers, enterprise admins,
schema admins, Group Policy creator owners, Read-Only Domain Controllers, and the Kerberos
krbtgt domain-wide account.]
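The allow/deny lists and the cached-secrets state described above can be inspected from PowerShell. A sketch, assuming a reachable domain (the RODC name is hypothetical):

```powershell
# Sketch: review a branch RODC's Password Replication Policy.
Import-Module ActiveDirectory

# Accounts the RODC is allowed or denied to cache:
Get-ADDomainControllerPasswordReplicationPolicy -Identity "RODC-BRANCHA" -Allowed
Get-ADDomainControllerPasswordReplicationPolicy -Identity "RODC-BRANCHA" -Denied

# Accounts whose secrets are actually cached on it right now:
Get-ADDomainControllerPasswordReplicationPolicyUsage -Identity "RODC-BRANCHA" `
    -RevealedAccounts
```

The revealed-accounts list is the one to review after a suspected RODC compromise, since those are the only passwords that would need resetting.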


Domain Controller Virtualization


Learning Objective
After completing this topic, you should be able to
identify characteristics of Domain Controller cloning and virtualization

1. Demo: Clone a virtual DC


A virtual domain controller allows us the flexibility of virtualization with the ability to rapidly deploy this
operating system with this key role. Newly promoted domain controllers initially have to download a
lot of data, so the idea of cloning a domain controller has a lot of advantages for quick deployment, but
there is always a concern about making sure that your domain controllers are going to be safe if we
were, for example, doing something like a rollback to a checkpoint or a snapshot. Well, Server 2012
enables us to safely clone our DCs. You've got to make sure you've got a 2012 Hyper-V host, you've
got to have a 2012 virtual domain controller in place, and we need to make sure that our PDC emulator is
running Server 2012 as well. If all that's the case, then we have a five-step operation that essentially
gets us a clone. The process is going to involve first taking our domain controller and adding it to a
special trusted group, then we're going to create a special XML file on that domain controller in order
to enable the functionality that we need to allow for clones to be created correctly. If we need to, we
can create an exclusions list to enable certain applications to go along for the ride besides Active
Directory and then, it's just going to be a matter of shutting it down, exporting it, and then importing it.
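The middle of that five-step operation can be sketched in PowerShell, run on the source DC (DC2 in the demo). The clone name, site, and IP details below are hypothetical example values, not requirements:

```powershell
# Sketch of the cloning prep, run on the source domain controller.
Import-Module ActiveDirectory

# Check for installed services/programs not known to be clone-safe...
Get-ADDCCloningExcludedApplicationList
# ...and, once reviewed as acceptable, write them to the allow-list XML:
Get-ADDCCloningExcludedApplicationList -GenerateXml

# Generate DCCloneConfig.xml describing the clone's identity
# (all values here are examples):
New-ADDCCloneConfigFile -CloneComputerName "DC3" -SiteName "Portland" -Static `
    -IPv4Address "10.0.0.53" -IPv4SubnetMask "255.255.255.0" `
    -IPv4DefaultGateway "10.0.0.1" -IPv4DNSResolver "10.0.0.51"

# Remaining steps: shut the VM down, export it, and import a copy in Hyper-V.
```

The cmdlet refuses to write the config file if the prerequisites (group membership, PDC emulator on 2012) aren't met, which is a useful sanity check in itself.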

[The Active Directory Users and Computers window is displayed. The File, Action, View, and Help
menus are at the top of the window. Below the menu bar is the toolbar containing buttons such as
Back and Forward buttons. Below the toolbar, the rest of the interface is divided into two areas. The
area to the left consists of a directory structure with nodes and subnodes and the area to the right
shows information about the node or subnode that is selected in the area to the left. In the area to the
left, the main node is the Active Directory Users and Computers node. This node has the Saved
Queries subnode and the earthfarm.LAB subnode. The earthfarm.LAB subnode is expanded and has
several subnodes such as the Domain Controllers subnode and the Users subnode. The Domain
Controllers subnode is selected. The area to the right contains a table with five headers – Name,
Type, DC Type, Site, and Description. Below these headers are four rows of entries. In the second
row, the entry under the Name header is DC2, the entry under the Type header is Computer, the entry
under the DC Type header is GC, and the entry under the Site header is Portland. There is no entry
under the Description header. The second row is highlighted.]

So first, let's take a look here in the Users container. The Users container has a group called
Cloneable Domain Controllers, and we open that up and take a look at the members list. There's
nothing in here, and you really should only add accounts while you're in the midst of cloning. I want to
add DC2, but you'll notice it's not here, and that's just because the Computers object type isn't
included in the search by default. Let's try that again; there is the DC2 account, all right. So DC2 is
now a cloneable domain controller, excellent. What's next? Well, we're on DC2 right now, and on DC2
we're going to need to run a particular PowerShell cmdlet called New-ADDCCloneConfigFile.
We could build this file manually, but by all means use PowerShell; it's going to be a
nice, safe way to do it, and with the ISE, you can see the different ways that we can set this up.

[The Active Directory Users and Computers window is displayed. The instructor clicks the Users
subnode in the area to the left and information about the Users subnode is displayed in the area to
the right. The area to the right consists of a table with three headers – Name, Type, and Description.

32 of 42 3/20/2020, 3:16 PM
Skillsoft Course Transcript https://cdnlibrary.skillport.com/courseware/Content/cca/ws_imin_b02_it...

There are several rows of entries under each header. The instructor clicks the Cloneable Domain
Controllers entry under the Name header to open the Cloneable Domain Controllers Properties dialog
box. The dialog box has the General, Members, Member Of, and Managed By tabs. The General tab
is selected by default. The instructor clicks the Members tab to open the tabbed page. The tabbed
page has the Members section, which consists of a table with two headers Name and Active
Directory Domain Services Folder. There are no entries under these two headers. Below the
Members section are the Add and Remove buttons. The instructor clicks the Add button to open the
Select Users, Contacts, Computers, Service Accounts, or Groups dialog box. The dialog box has the
Select this object type text box with the default entry of Users, Service Accounts, Groups, or Other
objects. Next to the text box is the Object Types button. Below the text box is the From this location
text box with the default entry of earthfarm.LAB. Next to this text box is the Locations button. Below
the text box is the Enter the object names to select (examples) text box. Next to this text box is the
Check Names button. Below the text box are the Advanced, OK, and Cancel buttons. The instructor
clicks the Advanced button to open the Select Users, Contacts, Computers, Service Accounts, or
Groups dialog box. The dialog box has the Select this object type text box with the default entry of
Users, Service Accounts, Groups, or Other objects. Next to this text box is the Object Types button.
Below this text box is the From this location text box with the default entry of earthfarm.LAB. Next to
this text box is the Locations button. Below the From this location text box is the Common Queries
section. Next to the Common Queries section are the Columns, Find Now, and Stop buttons. Below
the Common Queries section is the Search results section. The Search results section consists of a
table with four headers – Name, E-Mail Address, Description, and In Folder. The instructor clicks the
Find Now button and the table in the Search results section is populated with new entries. The
instructor scrolls through the entries in the table and then clicks the Object Types button to open the
Object Types dialog box. The standard text in the Object Types dialog box reads, "Select the types of
objects you want to find." Below the standard text is the Object types section. The Object types
section has the Other objects, Contacts, Service Accounts, Computers, Groups, and Users
checkboxes. The instructor checks the Computers checkbox and clicks the OK button to close the
dialog box. The Select Users, Contacts, Computers, Service Accounts, or Groups dialog box is again
displayed. The instructor clicks the Find Now button again. The DC2 entry is now displayed under the
Name header in the table in the Search results section. The instructor selects the DC2 entry and
clicks the OK button to open the Select Users, Contacts, Computers, Service Accounts, or Groups
dialog box. The Enter the object names to select (examples) text box is now populated with the DC2
entry. The instructor clicks the OK button to close the Select Users, Contacts, Computers, Service
Accounts, or Groups dialog box. The Cloneable Domain Controllers Properties dialog box is now
displayed. On the Members tab, the table in the Members section is now populated with the DC2
entry. The instructor then clicks the OK button to close the Cloneable Domain Controllers Properties
dialog box. The Active Directory Users and Computers interface is displayed. Next the instructor
clicks the PowerShell icon pinned to the taskbar to open the PowerShell window. The window
consists of a command prompt window on the left and a Commands section on the right. The
Commands section has the Modules drop-down list box, the Refresh button, and the Name text box.
Below the Name text box is a scrollable section with various entries. The instructor highlights the
New-ADDCCloneConfigFile entry in the scrollable section. Below this section are the
IPv6DynamicSettings, IPv6StaticSettings, OfflineExecution, IPv4DynamicSettings, and
IPv4StaticSettings tabs. The IPv4DynamicSettings tab is selected by default. This tab has the
CloneComputerName, IPv4DNSResolver, Path, and SiteName text boxes. The Run, Insert, and Copy
buttons are at the bottom of the Commands section.]

Essentially, we're going to be defining as much or as little as we want about the computer
account that's going to be created through the cloning process, so that when it recognizes, "hey, I'm
not the original, I'm a clone," it can then update itself with, maybe, a specific name if you're doing a
one-time clone, the identity of where to find DNS, the path for where this file should be stored, and
the site where Active Directory is going to be resident; what site should it belong to? We can expand
that out and provide the exact IP address, the DNS resolver, subnet mask, WINS server, and default
gateway. We can add all these properties as well, we can do a very similar set of options with
regard to IPv6 if we're using that, and we also have an offline option, if necessary, for later
deployment.

[The PowerShell window is displayed. The instructor highlights the CloneComputerName,


IPv4DNSResolver, Path, and SiteName text boxes. Then the instructor clicks the IPv4StaticSettings
tab to open the IPv4StaticSettings tabbed page. The tabbed page has several text boxes such as the
IPv4Address, IPv4DNSResolver, and IPv4SubnetMask textboxes. The instructor then clicks the
IPv6StaticSettings and the OfflineExecution tabs to open them.]
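For cases where DHCP isn't preferred, the same cmdlet exposes a static parameter set matching the IPv4StaticSettings tab shown here. A minimal sketch with illustrative addresses only:

```powershell
# Static-IP variant of the clone configuration file (-Static selects the
# static parameter set; every address and name below is an example).
New-ADDCCloneConfigFile -Static `
    -IPv4Address "192.168.0.210" `
    -IPv4SubnetMask "255.255.255.0" `
    -IPv4DefaultGateway "192.168.0.1" `
    -IPv4DNSResolver "192.168.0.200" `
    -SiteName "Portland"
```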

So I'm going to keep it simple here because often with virtualization, having dynamic IP addresses
through DHCP is a preferred process. I'm going to let it generate its own computer name, I'll specify
the DNS server so there's no doubt about where it is, and I'll let it use the
default path, but I want to ensure that it's going to join the site called Portland as it becomes a
domain controller. So, when we run this, it's going to run some checks. It validates that the PDC
emulator is running Server 2012, that's green. It validates that this server is a member of the
'Cloneable Domain Controllers' group, that's green, we're looking good. It looks to see "am I running
DHCP or any other services that are not listed as allowed applications?" Essentially, if you have any
other roles besides being a DC setup, you want to strip those out first before you go through this
cloning XML process. Once it's validated that, it goes ahead and creates this XML file stored in the
NTDS directory. It can also be stored at the root of a removable drive or wherever NTDS.DIT is found
and it will be discovered automatically by the process of booting this system after it's been cloned,
booting the clone essentially.

[The PowerShell window is displayed. The instructor clicks the IPv4DynamicSettings tab to open it. In
this tab, the instructor types 192.168.0.200 in the IPv4DNSResolver text box. Then the instructor
types Portland in the SiteName text box and clicks the Run button. The command prompt window on
the left is populated with new entries, some of which are highlighted in green and yellow. The entries
that are highlighted in green are Passed: The domain controller hosting the PDC FSMO role
(DC.earthfarm.LAB) was located and running Windows Server 2012 or later, Pass: The local domain
controller is a member of the 'Cloneable Domain Controllers' group, Pass: No excluded applications
were detected, and All preliminary validation checks passed. The entry highlighted in yellow is No
excluded applications were detected. The instructor then points to the entries highlighted in green
and yellow. Then the instructor highlights the entry that reads C:\Windows\NTDS\DCCloneConfig.xml.]

So the file's there, it's in the right place, my work here is done. Now, since I don't need to create a
custom XML exception list, I'll note that if I did, the command would be
Get-ADDCCloningExcludedApplicationList -GenerateXml. There aren't any applications,
but if there were, it would generate an XML file, and that XML file would automatically be
used to allow those applications to pass through. So, at this point, the last steps really deal with
virtualization. I'm going to Shut down this domain controller, make sure I'm doing it right here,
okay, and so that DC is now shutting down.

[The PowerShell interface is displayed. The last line of the command prompt window reads PS
C:\Users\Administrator.EARTHFARM>. Next to this line, the instructor types Get-
ADDCCloningExcludedApplicationList –GenerateXml and presses the Enter key. The command
prompt window is populated with a yellow highlighted new entry that reads No excluded applications
were detected. The instructor then shuts down the domain controller. The Hyper-V Manager window
is displayed. The window is divided into two areas. The area to the left consists of a directory structure with a main node, the Hyper-V Manager node, and a subnode, the DC subnode. The area to
the right consists of two sections – Virtual Machines and Checkpoints. The Virtual Machines section
contains a table which is not completely visible. Only five columns are shown. The five column
headers are Name, State, CPU Usage, Assigned Memory, and Uptime. There are several rows in the
table. The third row is highlighted and it states Name is DC2, State is Running, CPU Usage is 3%,
Assigned Memory is 1154 MB, and Uptime is 00:15:36. The DC2 row in the Virtual Machines section
is highlighted.]
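The exclusion-list step mentioned a moment ago can be sketched as two calls, assuming the Active Directory module is loaded: run the cmdlet bare to see what would block cloning, then rerun it with -GenerateXml once you've confirmed those applications are safe to carry along.

```powershell
# List applications/services that would fail the cloning pre-checks.
Get-ADDCCloningExcludedApplicationList

# After reviewing, write CustomDCCloneAllowList.xml so those entries are
# treated as allowed during cloning (-Force overwrites an existing file).
Get-ADDCCloningExcludedApplicationList -GenerateXml -Force
```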

Once it is in a shut-down state, all I'm going to need to do is export and import; it's a two-step
process. I'm going to Export it to a particular directory, wherever it is, so I can track it
down. I've already exported this for the benefit of making this a little faster, because exporting takes
a while. And then you'll Import Virtual Machine and browse to the location where you
exported it. There we go, it found the machine. It's important that you choose to Copy the virtual
machine (create a new unique ID), because part of what it's going to do when it boots is recognize
that the external ID of the virtual machine is different than the saved Active Directory internal version
of that virtual machine, and therefore, because there's a discrepancy, it recognizes it should be a new
domain controller.

[The Hyper-V Manager window is displayed. The instructor right-clicks the DC2 entry and selects the
Export shortcut menu option to open the Export Virtual Machine dialog box. The dialog box has the
standard text that reads, "Specify where you want to save the files." Below the standard text are the
Location text box and the Browse, Export, and Cancel buttons. The instructor clicks the Browse
button to open the Select Folder dialog box and navigates to the ClonableDC folder. The ClonableDC
folder contains the DC2 folder. The instructor then clicks the Cancel button to close the Select Folder
dialog box. Then the instructor clicks the Cancel button on the Export Virtual Machine dialog box to
close it. The instructor then right-clicks the DC subnode in the area to the left to open the shortcut
menu and selects the Import Virtual Machine shortcut menu option to open the Import Virtual Machine
wizard. The header of the first page of the wizard reads Before You Begin. The instructor clicks the
Next button on the first page of the wizard to navigate to the second page. The header of the second
page reads, "Locate Folder." The standard text in the second page reads, "Specify the folder
containing the virtual machine to import." Below the standard text are the Folder text box and the
Browse button. The instructor clicks the Browse button to open the Select Folder dialog box and
navigates to the ClonableDC folder. The ClonableDC folder contains the DC2 folder. The instructor
selects the DC2 folder and clicks the Select Folder button in the Select Folder dialog box. The Select
Folder dialog box disappears and the second page of the wizard is again displayed. The Folder text
box is now populated with the entry D:\Exports\ClonableDC\DC2. The instructor then clicks the Next
button to navigate to the third page of the wizard. The header of the third page reads Select Virtual
Machine. The third page of the wizard has the Select the virtual machine to import section. This
section has a table with the Name header and the Date Created header. The entry under the Name
header reads DC2 and the entry under the Date Created header reads 9/27/2013 9:45:54 PM. The
instructor then clicks the Next button to navigate to the fourth page of the wizard. The header of the
fourth page reads Choose Import Type. The fourth page of the wizard has the standard text Choose
the type of import to perform. Below the standard text are three radio buttons – Register the virtual
machine in-place (use the existing unique ID), Restore the virtual machine (use the existing unique
ID), and Copy the virtual machine (create a new unique ID). The Register the virtual machine in-place
(use the existing unique ID) is selected by default. The instructor selects the Copy the virtual machine
(create a new unique ID) radio button and selects the Next button to open the fifth page of the
wizard.]

Whenever we're importing something into the same Hyper-V system, it's important to set up a
directory location that's unique from the original location. That's especially true
with regard to the hard drive files, because the hard drive file will be named exactly the same thing
even though you'll have a unique XML data file for the configuration. So I'm placing mine here in a
separate directory. And again, I'll use that same directory here, okay. At that point, I'll be ready to
import it.

[The fifth page of the Import Virtual Machine wizard is displayed. The header of the fifth page reads
Choose Folders for Virtual Machine Files. This page has the Store the virtual machine in a different
location checkbox. Below this checkbox are the Virtual machine configuration folder text box, the
Checkpoint store text box, and the Smart Paging folder text box. The entry in the Virtual machine
configuration folder text box is D:\HyperV\. The entry in the Checkpoint store text box is D:\HyperV
\SQL2014\. The entry in the Smart Paging folder is D:\HyperV\SQL2014\. Next to each text box is the
Browse button. There are three text boxes and three Browse buttons. The three text boxes and three
Browse buttons are grayed out. The instructor clicks the Store the virtual machine in a different
location checkbox and the text boxes and buttons become selectable. The instructor clicks the
Browse button next to the Virtual machine configuration folder text box to open the Search Folder
dialog box and navigates to the NewClonedDC folder. Then the instructor clicks the Cancel button to
close the Search Folder dialog box. The instructor then clicks the Next button to navigate to the sixth
page of the wizard. The standard text in the sixth page reads Where do you want to store the
imported virtual hard disks for this virtual machine? Below the standard text is the Location text box
with the entry D:\HyperV\. Next to the text box is the Browse button. The instructor clicks the Browse
button to open the Search Folder dialog box and navigates to the NewClonedDC folder. The
instructor then clicks the Select Folder button on the Select Folder dialog box. The sixth page of the
wizard is again displayed. The entry in the Location text box now reads D:\HyperV\NewClonedDC\.
Then the instructor clicks the Next button and the Import wizard pop-up box appears. The standard
text in the pop-up box reads Hyper-V encountered an error while copying virtual hard disks to
destination folder 'D:\HyperV\NewClonedDC.' The file 'D:\HyperV\NewClonedDC
\BrocaderoDC1.vhdx' already exists. The instructor clicks the Close button to close the Import Wizard
pop-up box.]
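The name clash in that error is exactly why a distinct folder matters. With the Hyper-V PowerShell module, the import can be pointed at separate configuration and VHD destinations in one step; the paths and the &lt;vm-guid&gt; placeholder below are illustrative, with -Path aimed at the exported VM's configuration file.

```powershell
# Re-running the import with a dedicated destination avoids the VHDX name
# collision reported in the error above.
Import-VM -Path "D:\Exports\ClonableDC\DC2\Virtual Machines\<vm-guid>.xml" `
    -Copy -GenerateNewId `
    -VirtualMachinePath "D:\HyperV\NewClonedDC" `
    -VhdDestinationPath "D:\HyperV\NewClonedDC"
```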

Again, I've already done the import. When we boot these two virtual machines after the import
process, the clone is going to go through the normal boot process, detect the fact that it is not in
the same state as the original DC, and that will trigger it to change its configuration according to
the XML file properties that are deposited there in the NTDS folder.

[The sixth page of the wizard is displayed. The instructor clicks the Cancel button to close the wizard
and the Hyper-V Manager interface is again displayed. In the Virtual Machines section, the instructor
right-clicks the DC2 entry and selects the Start shortcut menu option. Then the instructor right-clicks
the NewClonedDC2 entry and selects the Resume shortcut menu option. The NewCloneof DC2 on
DC – Virtual Machine Connection window and the Connect to NewCloneof DC2 dialog box appear.
The instructor clicks the Connect button on the Connect to NewCloneof DC2 dialog box to close it.
The screen now shows only the NewCloneof DC2 on DC – Virtual Machine Connection window. The
standard text in the window reads, "Please wait."]

Our original DC is going to boot as normal. By the way, the virtual machine name, when it comes in
as a Hyper-V machine, is going to be exactly the same as the original one. The default
computer name, if you didn't specify one, is going to be the first eight characters of the old computer
name, a dash, and then, essentially, a four-digit number that'll increment up to 9999 as a clone, so it'll
identify that for us.

[The NewCloneof DC2 on DC – Virtual Machine Connection window is displayed. The standard text in
the window reads, "Please wait."]


Planning an Active Directory Site


Learning Objective
After completing this topic, you should be able to
plan an Active Directory site design according to specific requirements

1. Design Active Directory sites


Now that you've seen how to design an Active Directory physical topology in Windows Server 2012
R2, let's try an exercise.

Branch office users for Brocadero Shipping at the Chicago location are experiencing slower than
usual logons.

You need to optimize user logon traffic without overloading the remote site with global catalog
replication traffic.

A new domain controller replica has been installed at the Chicago location. Not all of Brocadero's
sites are fully routable, so you need to prevent the domain controller in Chicago from creating
replication objects with domain controllers in two other sites, Toronto and Phoenix.

Question

You are configuring the replication topology on one of the domain controllers through Active
Directory Sites and Services.

Options:

1. Uncheck Global Catalog


2. Connections tab
3. Query Policy drop-down

Answer

Option 1: Configuring a domain controller to no longer host the global catalog information is
simply a matter of removing the checkmark on the server's NTDS properties page. The
activation of the global catalog service can be carried out on the domain controller's
properties in the Active Directory Sites and Services console, or directly on the computer
account properties in the Active Directory Users and Computers console.

Option 2: Selecting the Connections tab will show the replication partners, including the
Name and the Site replication is occurring from and to.

Option 3: Selecting this drop-down menu will show the list of available query policies that
can be applied. It can also be left blank.


Correct answer(s):

1. Uncheck Global Catalog
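For completeness, the same change can be scripted. This is a hedged sketch that assumes the RSAT Active Directory module, uses a hypothetical DC name (CHI-DC1), and assumes no other NTDS option flags are set on that server:

```powershell
# The Global Catalog checkbox maps to bit 0x1 of the options attribute on
# the server's NTDS Settings object; writing 0 clears it. Read the current
# value first and clear only bit 0x1 if other flags might be set.
$ntdsDN = (Get-ADDomainController -Identity "CHI-DC1").NTDSSettingsObjectDN
Set-ADObject -Identity $ntdsDN -Replace @{ options = 0 }
```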

Question

You have completed designing your sites and IP networks and are continuing with your site
link and site link bridge design.

What characteristics of site link bridges do you consider when building your design?

Options:

1. Allow sets of site links to be treated as a single route


2. AD objects that represent logical paths used for replication
3. Automatically build temporary connections to replication partners
4. Permit DCs that are not directly connected to replicate

Answer

Option 1: Correct. A site link bridge is an Active Directory object that represents a set of
site links, all of whose sites can communicate by using a common transport.

Option 2: Incorrect. This is a feature of site links. Sites are manually linked to other sites by
using configured site links which enable the replication of directory changes between site
domain controllers.

Option 3: Incorrect. This describes replication failover functionality, where sites ensure that
replication is routed around network failures, offline domain controllers, and adjustments in
topology.

Option 4: Correct. Site link bridges are a mechanism to logically represent transitive
physical connectivity between otherwise isolated sites. Site link bridges enable domain
controllers that are not directly connected by means of a communication link to replicate
with each other.

Correct answer(s):

1. Allow sets of site links to be treated as a single route


4. Permit DCs that are not directly connected to replicate
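Both correct options describe what a site link bridge object does, which is easiest to see in how one is created. A hedged sketch with hypothetical link and bridge names:

```powershell
# Bridge two site links so Chicago and Phoenix, which share no direct
# link, can still replicate transitively through Toronto.
New-ADReplicationSiteLinkBridge -Name "Chicago-Phoenix-Bridge" `
    -SiteLinksIncluded "Chicago-Toronto", "Toronto-Phoenix"
```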


Question

Your design recommends that remote offices, with a small number of users and applications
requiring global catalog information, have Universal Group Membership Caching (UGMC)
implemented on the server.

Options:

1. NTDS Site Settings


2. NTDS Settings
3. DC9232

Answer

Option 1: UGMC is enabled on the NTDS Site Settings object for the site in Active
Directory Domain Services (AD DS) from the Active Directory Sites and Services MMC
snap-in.

Option 2: The Properties page of the NTDS Settings object is used to enable or disable the
global catalog but does not provide the option for Universal Group Membership Caching.

Option 3: The Properties page for the server has server-specific information, including the
preferred transport protocol used for intersite replication and bridgehead functionality.
However, it is not used to enable Universal Group Membership Caching.

Correct answer(s):

1. NTDS Site Settings

Question

You are reviewing the five FSMO roles within an Active Directory forest: three that operate
at the domain level within each domain in the forest, and the remaining two that operate at
the forest level serving all domains.

Match the Active Directory FSMO roles to their descriptions.

Options:

A. Schema Master
B. Domain Naming Master
C. Relative Identifier (RID) Master Role
D. Infrastructure Master Role
E. Primary Domain Controller (PDC) Emulator Role

Targets:

1. Replicates schema changes to all other domain controllers in a forest, each of
which hosts a replica of the schema
2. Ensures uniqueness of domain and application partition names within the
forest; must be accessible when a domain is being created or changed
3. Provides a security identifier to all objects created within a domain; used to
grant permissions and rights
4. Maintains the integrity of Active Directory objects and prevents intra-domain
and inter-domain naming conflicts
5. Provides the master time source for the domain

Answer

All domains in a forest share a single schema, and the domain controller hosting the
schema master database is responsible for replicating any schema changes to all other
domain controllers in a forest.

The central role of the domain naming master is to ensure uniqueness of domain and
application partition names within the forest. The domain naming master is employed when
a new domain is being created within a forest, or when a domain is being renamed or
removed within the forest. When installing and naming a new domain, the given name is
checked for uniqueness.

All objects created within a domain are given a security identifier, or SID, and the domain
controller which hosts the RID role is responsible for distributing RID blocks to the domain
controllers within the domain from its pool of unique RID numbers.

The Infrastructure Master Role, or IM, acts as a monitoring tool and is used to maintain the
integrity of Active Directory objects. The IM updates references to objects and also prevents
intra-domain and inter-domain naming conflicts. It updates group-to-user and group-to-group
references when the members of groups, or the groups themselves, are renamed or changed.

The PDC emulator role serves several functions within each domain of an Active Directory
forest, including providing the master time source for the domain. The PDC emulator
distributes its host domain controller's local clock time by using the Windows Time Service.

Correct answer(s):

Target 1 = Option A

Target 2 = Option B

Target 3 = Option C

Target 4 = Option D

Target 5 = Option E
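A quick way to confirm this split of three domain-level and two forest-level roles is to query for the current role holders; this assumes the RSAT Active Directory module is available:

```powershell
# Forest-level role holders (one per forest).
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster

# Domain-level role holders (one set per domain).
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster
```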


Question

You are working on the domain controller design of one of the branch offices and have
various factors to consider. Match the outlined requirements to the implementation of either
a writable domain controller (WDC) or Read-Only Domain Controller (RODC).

More than one option can match to a target.

Options:

A. Uses incoming replication only


B. Caches users' password credentials
C. Administrated by standard domain users
D. Performs outbound replication
E. Restricted from originating changes

Targets:

1. RODC
2. WDC

Answer

The AD database hosted on the RODC is read-only, and can only be updated after
incoming replication from a writable domain controller.

RODC administration and management can be performed by a standard domain user. A
WDC can only be managed by members of the Domain Admins or Enterprise Admins
security groups.

RODCs are restricted from originating changes to the master AD database.

By default, the RODC has a Password Replication Policy (PRP) that prevents passwords
being cached on the RODC. This default configuration means no account passwords can
be obtained from a compromised RODC.

WDC replication is 'pull' in nature, and occurs between any writable domain controller
replication partners. Outbound replication is not permitted from an RODC.

Correct answer(s):

Target 1 = Option A, Option C, Option E

Target 2 = Option B, Option D
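The RODC characteristics above map directly to how one is deployed. A hedged sketch using the Server 2012 R2 deployment cmdlets; the domain, site, and group names are examples, and the command will prompt for a safe mode password:

```powershell
# Install the AD DS binaries, then promote this server as an RODC.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# -ReadOnlyReplica makes this a read-only DC; the delegated account lets
# standard domain users administer it without Domain Admins membership.
Install-ADDSDomainController -DomainName "earthfarm.LAB" `
    -ReadOnlyReplica `
    -SiteName "Portland" `
    -DelegatedAdministratorAccountName "EARTHFARM\BranchAdmins" `
    -Credential (Get-Credential)
```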


© 2018 Skillsoft Ireland Limited

