Step by Step Guide to
Implementing a TWO NODE
Failover Cluster – Part 01
By Nirmal Madhawa Thewarathanthri in Personal on 3 March, 2011
In this blog post I will demonstrate how we can create a two-node failover
cluster using the Windows Server 2008 Failover Clustering capabilities.
The first part of this post covers how to prepare both nodes before
creating the cluster.
First, let's look at the basic requirements for preparing a failover
cluster:
Make storage available to all nodes
Configure Network adapters on all nodes
Cluster components to be “Certified for Windows Server 2008”
I have configured two networks: a Public network and a Private network.
Let's look at the Public and Private network configuration:
Public adapter -> configured with an IP address, subnet mask, default
gateway and DNS
Private adapter -> I will only provide an IP address and subnet mask
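As a sketch, the same adapter configuration can be applied from an elevated command prompt instead of the GUI; the adapter names ("Public", "Private") and addresses below are examples for this lab, not values from the original post.

```powershell
# Public adapter: IP address, subnet mask, default gateway and DNS server
netsh interface ipv4 set address name="Public" static 192.168.1.11 255.255.255.0 192.168.1.1
netsh interface ipv4 set dnsservers name="Public" static 192.168.1.10 primary

# Private (heartbeat) adapter: IP address and subnet mask only,
# no gateway and no DNS, exactly as described above
netsh interface ipv4 set address name="Private" static 10.10.10.11 255.255.255.0
```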
Next, let's proceed and configure storage connectivity. I will be connecting
to my storage using the built-in iSCSI Initiator.
After selecting iSCSI Initiator, it will first prompt you to start the iSCSI
service, which is set to start manually by default on Windows Server 2008 /
Windows Server 2008 R2.
After that we need to allow iSCSI to communicate through Windows
Firewall. This will be configured automatically for you when you select Yes.
After this step, you will find the iSCSI Initiator properties.
Navigate to the Discovery tab and select the "Add Portal" option to connect
to our shared storage.
We need to provide an IP address or DNS name for the iSCSI target.
Once we enter the required information, we can proceed.
Next we need to navigate to the Targets tab, which will allow us to connect
to the storage. Select the Log on option.
We need to enable the following option when connecting to the storage:
– Automatically restore this connection when the computer starts
After completing this step, we have now successfully configured our storage
connectivity.
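The same iSCSI steps can be sketched with the built-in `iscsicli` tool; the portal address and target IQN below are placeholders for whatever your storage presents.

```powershell
# Start the iSCSI service and make it start automatically from now on
Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic

# Add the target portal (equivalent of the "Add Portal" step)
iscsicli QAddTargetPortal 192.168.1.100

# List the targets discovered on that portal, then log on to one of them
# (the "Automatically restore this connection" checkbox corresponds to a
# persistent login, which iscsicli PersistentLoginTarget can also configure)
iscsicli ListTargets
iscsicli QLoginTarget iqn.2011-03.local.lab:target1
```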
Let’s go ahead and prepare disks using DISK Management utility
After accessing the Disk Management console, we can see that there is a new
disk added.
We need to right-click the disk and bring it ONLINE.
After we bring this disk online, we can then go ahead and initialize it.
We need to select which disk to initialize so that it can be accessed
by the Logical Disk Manager. We will configure this disk as an MBR-style
partition disk.
Now that we can access this disk, we will go ahead and create a volume
In this window, we will be specifying the size of the disk
Next we need to select a drive letter
Next, we need to change the volume label, and also select “Perform a quick
format” option
Now that the disk is configured, we can view it from the Disk Management
console.
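A rough `diskpart` equivalent of the Disk Management steps above looks like this; the disk number, label and drive letter are examples for this lab, so adjust them before running.

```powershell
# Feed a diskpart script via stdin: online the new disk, initialize it as
# MBR, create and quick-format an NTFS volume, and assign a drive letter
@"
select disk 1
online disk
attributes disk clear readonly
convert mbr
create partition primary
format fs=ntfs quick label=ClusterDisk
assign letter=Q
"@ | diskpart
```

Note that `convert mbr` will complain if the disk is already MBR-initialized; on a brand-new disk it performs the same initialization the GUI wizard does.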
Now let’s move to NODE 02, on NODE 02 we need to bring DISK 1 online,
which we have already configured on NODE 01
After refreshing the Disk Management console, we can see that it is no
longer in an unallocated state, so we only need to bring it online now.
Since we have completed configuring the disk on both nodes, we will next
run validation using the Failover Cluster Management console. Before
proceeding, we need to go to Server Manager on both servers and install the
Failover Clustering feature, which is located under the Features section.
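The feature installation can also be scripted instead of clicking through Server Manager; this is a sketch using the commands Microsoft ships for each OS version.

```powershell
# On Windows Server 2008 R2: use the ServerManager PowerShell module
Import-Module ServerManager
Add-WindowsFeature Failover-Clustering

# On Windows Server 2008 (non-R2), the older command-line tool works:
# ServerManagerCmd -install Failover-Clustering
```

Run this on both nodes before attempting validation.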
In my next blog post I will publish how we can run Failover Cluster
Validation Tool and validate configuration.
Step by Step Guide to
Implementing a TWO NODE
Failover Cluster – Part 02
By Nirmal Madhawa Thewarathanthri in Personal on 4 March, 2011
Having demonstrated how we prepare the cluster nodes for a two-node
failover cluster, in this post let's have a look at how we can use the
Cluster Validation Tool to generate a report. I will be using Windows Server
2008 for this demonstration, and before moving any further let's have a look
at what tests are performed by the Cluster Validation Tool.
The Cluster Validation Tool is a wizard found in the Failover Cluster
Management console. The following tests are carried out when you run the
Cluster Validation Tool:
· Inventory Validation
· Network Validation
· Storage Validation
· System Configuration
We will look at the exact components and sub-tests that take place
during validation testing in another post.
First we need to go to administrative tools and then open Failover Cluster
Management Console
After selecting Failover Cluster Management option, it will open Failover
Cluster Management Console
We can right click on Failover Cluster Management and then select
“Validate a configuration” option to initiate Cluster validation wizard
As you can see below, the first screen provides a description of the
Cluster Validation Wizard.
Next we need to add the nodes we want to validate.
I will add my cluster NODE01 and NODE02.
After I select both nodes, they are added to the Cluster Validation Wizard.
Next, we need to decide whether we are running all tests or only a
selective set. For this demonstration, I will be running all tests to
validate NODE01 and NODE02.
Next we have a confirmation summary screen, where we can review the tests
that will be performed against NODE01 and NODE02.
After I select Next, the validation wizard will start validating my cluster
configuration.
Once this is completed, it is possible for us to view a report.
We can open this report as a web page and see the results.
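On Windows Server 2008 R2 (though not on the original 2008 release used in this demonstration), the same validation can be scripted with the FailoverClusters PowerShell module; the node names are the ones from this walkthrough.

```powershell
# Run the full set of validation tests against both nodes and
# produce the same .mht report the wizard generates
Import-Module FailoverClusters
Test-Cluster -Node NODE01, NODE02
```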
In my next post, we will create a cluster and see how we can configure it.
Creating a Windows Server 2008 R2 Failover Cluster
I hear you…you want your SQL, DHCP, Hyper-V or other services to be highly available for your
clients or your internal users. They can be if you create a Windows Failover Cluster and
configure those services in the cluster. By doing that, if one of the servers crashes, the other(s)
will take over, and users will never even notice. There are two types of failover clusters:
active/active and active/passive. In the first one (active/active) all the applications or services
running on the cluster can access the same resources at the same time; in the second one
the applications or services running on the cluster can access resources only from one node,
while the other one(s) stay(s) in stand-by in case the active node fails.
For now I just want to show you how to create an active/passive Windows Failover Cluster. As
for the shared storage, I will use iSCSI since I can't get my hands on a SAN here at home. The
iSCSI target is from StarWind, which is more than enough to create and test your Windows
cluster, so if you want to follow along a trial version is available for download at this page. To run
a Windows cluster an Active Directory domain needs to be present. For this guide all servers are
running Windows Server 2008 R2 Enterprise. You need either the Enterprise or Datacenter edition,
because the Standard edition does not support clustering. In the following table I wrote down the
cluster nodes' network configurations.
Node1 Node2
Network 1 (LAN) – 192.168.50.10 Network 1 (LAN) – 192.168.50.11
Network 2 (iSCSI) – 10.0.0.10 Network 2 (iSCSI) – 10.0.0.11
Network 3 (Heartbeat) – 1.1.1.1 Network 3(Heartbeat) – 1.1.1.2
Domain member Domain member
I added a separate network card just for the iSCSI traffic, because I don't want that traffic to get
on my LAN and "hurt" the switches. I recommend you do the same if you put this in a production
environment; if not, your LAN will suffer. Usually the Register this connection's addresses in
DNS box should be disabled on the adapter protocol for the iSCSI and Heartbeat networks, but
since these networks are completely separated from the LAN it's OK to leave the box enabled;
they are not going to register anywhere.
After you configured the IP addresses on every network adapter verify the order in which they
are accessed. Go to Network Connections click Advanced > Advanced Settings and make
sure that your LAN connection is the first one. If not click the up or down arrow to move the
connection on top of the list. This is the network clients will use to connect to the services offered
by the cluster.
Verify connectivity on every network on every cluster node, using PING.
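From Node1, that verification can be done in one go with a small PowerShell loop; the addresses are Node2's, taken from the table above.

```powershell
# Ping each of Node2's interfaces twice and report the result
"192.168.50.11", "10.0.0.11", "1.1.1.2" | ForEach-Object {
    if (Test-Connection $_ -Count 2 -Quiet) { "$_ OK" } else { "$_ FAILED" }
}
```

Repeat the same check from Node2 against Node1's addresses.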
If everything is in order, let's go and configure the iSCSI target; I'm not going to show you here
how to install the software, because it is straightforward: click Next a few times and you're done.
On the StarWind console right-click StarWind Servers and choose Add Host.
Type the IP address or FQDN of your StarWind server. Since my server is on the same box as
the console I will just type the loopback address. Click OK to connect.
Once the StarWind server is added to the console, right-click it and choose Connect.
Type the credentials to connect to the server and hit OK. If you are using the default credentials,
like I am here, on the Login box type root and on the Password box type starwind. You can
modify this later, if you want to.
Once connected you should be able to see the Targets object in console.
Now we need to create the quorum disk so the cluster information sits somewhere and can be
accessed by all nodes in the cluster. Right-click Targets and choose Add Target.
Give the target a name and hit Next.
As storage type, choose Hard Disk.
If you want to use a physical disk attached to your StarWind iSCSI server go with the first option,
but for the simplicity of this example I’m going to create a virtual disk.
Select Image File device and click Next.
Choose the second option Create new virtual disk.
Provide the path where the virtual disk should sit, then type the size of the disk. Microsoft
recommends that quorum disks should be at least 500 MB in size, but I always set it at 1 GB. If
you need more information, read this Microsoft KB.
Here check the box Allow multiple concurrent iSCSI connection (clustering) then click Next.
Leave the default options here and continue the wizard.
On the Summary screen click Next to create the iSCSI virtual disk.
Here click Finish to close the wizard.
Now repeat the same steps to create a data disk, of course in a bigger size; mine has 10 GB.
The disk size depends on the services that will run in the cluster. For example, if you are running
a SQL server in the cluster you will need a disk bigger than 10 GB, and I'm sure you're not going
to need just one. Anyway, at the end you should have all your disks listed in the StarWind
console.
Now let’s take care of the cluster nodes, and we’ll start with the first one, Node1. To install the
Failover Cluster service click Start > Administrative Tools > Server Manager.
Right click Features and choose Add Features.
Check the Failover Clustering box and hit Next then Install.
Repeat this operation on the second node. Back on Node1 we need to connect those iSCSI
drives that we created earlier. For that go to Start > Administrative Tools > iSCSI Initiator.
Click Yes on the message that pops-up to start the iSCSI service. On the target box type the IP
address or FQDN of your iSCSI Target (the one where those virtual disks are sitting) then hit
the Quick Connect button.
On the Quick Connect window select each discovered target and click the Connect button.
Now those drives should be visible in the Disk Management console.
Put those disks online, initialize them, format them using NTFS and assign a drive letter. At the
end they should look like this:
Do the same maneuver on the second node, but DO NOT PUT THE DISKS ONLINE; leave them
off-line, because corruption can occur. Now from Administrative Tools open the Failover
Cluster Manager. From the console right-click Failover Cluster Manager and choose Create a
Cluster. Usually the cluster needs to be validated, but since I’ve done this a few times I know it
will work.
In the Enter server name box type the name of your servers that participate in the cluster and
click the Add button.
Leave the default option to validate the cluster and click Next. In the Validate a Cluster
Wizard go with the defaults.
At the end you should see only green checks. If you have errors or warnings fix them before
continuing.
Back in the Create Cluster Wizard we need to type a name for this cluster (give it a name that
defines the service running on the cluster) and assign it an IP address. I recommend using a
static IP address and not one assigned by DHCP. When the wizard starts creating the cluster it
also tries to create a computer account named after your cluster. If the account you are logged in
with on the server does not have admin rights in Active Directory, you need to contact the AD
team to create the computer account before you create the cluster.
Click Next to start creating the cluster.
When it’s done click Finish to close the wizard.
If you take a look in AD, you should see the cluster computer account created by the Create
Cluster Wizard. Again, this happened because I was logged in with a domain admin account
when I ran the wizard.
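For reference, the whole Create Cluster Wizard sequence can also be scripted on 2008 R2; the cluster name and IP address below are examples, and the account running it needs the same AD rights described above.

```powershell
# Create a two-node cluster with a static client-facing address on the LAN
Import-Module FailoverClusters
New-Cluster -Name Cluster01 -Node Node1, Node2 -StaticAddress 192.168.50.50
```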
We created the cluster, but some things need to be done after. Verify that the wizard assigned
the correct drive to the quorum and only the LAN network adapter listens to clients. To verify the
quorum drive click the Storage object in the console. Looks like the wizard was smart enough to
know which drive to assign for the cluster quorum.
If you right-click the networks and choose Properties, only the LAN network should have the
option Allow clients to connect through this network enabled.
As a tweak, I like to rename my cluster networks based on their purpose.
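Both tweaks can be done from PowerShell as well; this is a sketch where the default "Cluster Network N" names are assumptions about how the wizard numbered your networks, so check yours with `Get-ClusterNetwork` first.

```powershell
Import-Module FailoverClusters

# Rename the cluster networks after their purpose
(Get-ClusterNetwork "Cluster Network 1").Name = "LAN"
(Get-ClusterNetwork "Cluster Network 2").Name = "iSCSI"
(Get-ClusterNetwork "Cluster Network 3").Name = "Heartbeat"

# Role 3 = cluster and client traffic, 1 = cluster traffic only,
# 0 = not used by the cluster (appropriate for the iSCSI network)
(Get-ClusterNetwork "LAN").Role = 3
(Get-ClusterNetwork "Heartbeat").Role = 1
(Get-ClusterNetwork "iSCSI").Role = 0
```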
To verify that it works, shut down Node1 (since this is the active node right now) and see if the
quorum and data disks fail to the second node. Now you can start installing some cluster
applications like SQL, DHCP etc…I’ll show you how in a future guide, ’till then… cheers.
http://www.vkernel.ro/blog/creating-a-windows-server-2008-r2-failover-cluster
Installing Failover Clustering on Server
2008R2
Shabaz March 26, 2014
This is a step-by-step guide on how to set up a failover cluster in Windows Server 2008R2, in a
test lab. A cluster is a group of independent computers that work together to increase the
availability of applications and services, such as file server service or print server service.
Physical cables and software connect the clustered servers (called nodes) so that if one fails
another can take its place.
To setup a failover cluster you need to have certain requirements met:
» The nodes in the cluster must have at least two NICs each. One for the production
environment, and one for heartbeat signal between cluster nodes.
» The nodes in the cluster must have access to shared storage, be it iSCSI, SAN, NAS etc.
» An AD domain, where the nodes must be members of the same domain.
» There are also other AD related requirements, but basically you just need a user account
with administrative privileges on the nodes, and Create Computer objects and Read All
Properties permissions in the container/OU that is used for computer accounts in the domain.
Software you will need to perform this exercise
» VirtualBox (or your virtualization Product of Choice)
» Windows Server 2008R2
» StarWind iSCSI SAN Free Edition
1. Installation of VirtualBox and Virtual Machines
1.1 Download and install VirtualBox, the installation procedure is quite straightforward, so I’m not
going to write in details about that. Now create three virtual machines, with 1GB ram and 20GB
disk each.
Server01: Domain Controller, will need 2 NICs
Server02: Cluster node, will need 3 NICs
Server03: Cluster node, will need 3 NICs
Give Server02 and Server03 three NICs each, while Server01 will only have two.
The NICs will serve the following purpose;
Production – The clients will connect to the cluster through this NIC
iSCSI – The nodes of the cluster will connect to the shared storage on Server01 through this NIC
Heartbeat – The heartbeat signal between the two nodes will be sent through this NIC
1.2 Mount the Server 2008R2 ISO on the three virtual machines, and install Server 2008R2
Enterprise edition (Full Installation). Failover Clustering feature is included in Enterprise
and Datacenter editions of 2008R2 only. You can install Standard Edition on Server01, if you
like, it does not make a difference.
1.3 Rename the three computers to Server01, Server02 and Server03 in Windows. Then give
them the following IP addresses,
Server01: Nic1, ip=192.168.0.10, subnetmask=255.255.255.0, DNS preferred=192.168.0.10
          Nic2, ip=192.168.1.10, subnetmask=255.255.255.0
Server02: Nic1, ip=192.168.0.20, subnetmask=255.255.255.0, DNS preferred=192.168.0.10
          Nic2, ip=192.168.1.20, subnetmask=255.255.255.0
          Nic3, ip=192.168.2.20, subnetmask=255.255.255.0
Server03: Nic1, ip=192.168.0.30, subnetmask=255.255.255.0, DNS preferred=192.168.0.10
          Nic2, ip=192.168.1.30, subnetmask=255.255.255.0
          Nic3, ip=192.168.2.30, subnetmask=255.255.255.0
1.4 Install the Guest Additions for VirtualBox, and restart the servers.
2. Installation of Active Directory
2.1 Install Active Directory on Server01.
Step-by-step guide on how to create the first domain controller in a new Windows Server
2008R2 forest.
2.2 On Server01, open properties of NIC2 (the one designated to iSCSI), then browse to
properties of ipv4, click Advanced, make these changes
2.3 On Server 01, click Start → Administrative Tools → DNS → right-click Server01 →
Properties, and then make these changes
2.4 Join Server02 and Server03 to the domain.
3. Installation of StarWind iSCSI SAN Free Edition
3.1 Install StarWind iSCSI SAN Free Edition on Server01. You will use it to create iSCSI targets,
and shared storage. Each service/application in the cluster will require its own shared storage.
3.2 After having created a target and storage on Server01, connect Server02 and Server03 to
the target and the storage. Remember to use the 192.168.1.10 ip address of Server01 to
connect to the iSCSI target.
3.3 Initialize the disk, bring it online and create a single volume, formatted with NTFS on
Server02 only.
Step-by-step guide on how to install StarWind iSCSI SAN Free Edition, create targets and
storage, and then connect to said targets and storage.
Here I have created a volume labeled Shared Storage, and assigned it the drive letter E: on
Server02.
4. Installation of Failover Clustering Feature
Install the Failover Clustering feature on Server02 and Server03. You can either create a new
domain account (remember the permissions it needs), or use the built-in Administrator account of
the domain, to log on to the servers.
4.1 Start Server Manager → Features → Add Features → Choose Failover Clustering → Next →
Install → Close
5. Validate and create the cluster
5.1 Click Start → Administrative Tools → Failover Cluster Manager
5.2 Right-click Failover Cluster Manager → Validate a Configuration
5.3 This will start the Validate a Configuration Wizard, click next
5.4 Click Browse
5.5 Write Server02;Server03, and then click Check Names, click OK
5.6 Click Next
5.7 Click Next → Next, the wizard will run all tests
5.8 When all tests are finished, you will be presented with a report. You can view the report, by
clicking View Report (obviously)
As you can see, I received some warnings on the network configuration of the servers. If you
click the Network link in the report, you can see a description of the warning.
Since this is not a production environment, but just a test lab, you can easily ignore all such
warnings of trivial matter. The other warning you will receive is about drivers being unsigned.
Once again, just ignore warnings of such trivial matters. All validation reports will be saved
at %systemroot%\Cluster\Reports, so you can view the reports any time you like.
5.9 Click Create the cluster now using the validated nodes
5.10 The Create Cluster Wizard starts, click Next
5.11 Type the name you want to assign to the cluster, and then give it an IP address. As you
remember, Nic1 was for the production environment, therefore untick the checkmark for the other
two Nics and only assign the cluster an address in the 192.168.0.0/24 segment, such as
192.168.0.50 for example. Click Next twice, and the cluster will be installed.
5.12 Click Finish on the Summary page, and you are presented with the cluster you just created
6. Configure Quorum Type on the Cluster
6.1 As you can see in the previous screenshot, the quorum configuration is Node and Disk
Majority. Since we currently have only one shared disk, we cannot use this quorum
configuration, because that disk would be consumed as the quorum disk, leaving none for data.
So let's just change it to Node and File Share Majority.
6.2 First we must create a file share on Server01, and give it the appropriate share and NTFS
permissions. On Server01 create a folder called Cluster01, right-click it and choose properties.
On the Sharing tab, click Advanced Sharing, choose to share the folder and click Permissions.
Click Add
6.3 Click Object Types
6.5 Tick the Computers checkbox
6.5 Write Cluster01, click OK
6.6 Give the Cluster01 virtual computer object Full Control permissions. Click OK twice
6.7 Now choose the Security tab, and give the Cluster01 virtual computer object Full Control
permissions here as well
6.8 On Server02, in Failover Cluster Manager, right-click Cluster01, choose More Actions, and
then choose Configure Cluster Quorum Settings
6.9 This will start the Configure Cluster Quorum Wizard, click Next
6.10 Choose Node and File Share Majority
6.11 Write the path to the share we created in step 6.2, or browse to it, then click Next, Next
again, and Finish.
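Steps 6.8 through 6.11 can be condensed into a single PowerShell command on 2008 R2; the cluster name and share path are the ones from this lab.

```powershell
# Switch the quorum model to Node and File Share Majority,
# pointing at the witness share created in step 6.2
Import-Module FailoverClusters
Set-ClusterQuorum -Cluster Cluster01 -NodeAndFileShareMajority \\Server01\Cluster01
```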
7. Configure Service or Application
7.1 For this test lab's purpose, let's configure the File Server service on the cluster. But before we
can do that, we need to add the File Services role on both nodes.
7.2 Start Server Manager -> Roles -> Add Roles -> Add the File Services role -> On Role
Services, choose File Server and File Server Resource Manager -> Next -> Next -> Install
7.3 Now that the File Services role has been added on both nodes, head back to Failover Cluster
Manager
7.4 In Failover Cluster Manager, Right-click Services and applications, and choose Configure a
Service or Application
7.5 On the wizard's first page, click Next
7.6 Choose File Server, and then click Next
7.7 Give your new File Server a name and an IP address. This is the name clients will use to
connect to the file server
7.8 Select Available shared storage (this is the storage we set up in Starwind iSCSI SAN). Next -
> Next -> Finish
7.9 And that's it, you now have a clustered file server clients can connect to, which will
automatically fail over to the second node if the first node fails.
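As a scripted alternative to steps 7.4 through 7.8, 2008 R2 ships a dedicated cmdlet; the role name, disk name and address below are examples matching this lab's layout.

```powershell
# Create a clustered file server role on the shared storage,
# with its own client-facing name and static IP address
Import-Module FailoverClusters
Add-ClusterFileServerRole -Name Fileserver01 -Storage "Cluster Disk 1" `
    -StaticAddress 192.168.0.60
```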
7.10 If you start AD Users and Computers, you will see the two virtual computer objects that
have been created for Cluster01 and Fileserver01.