
Introduction of iSCSI Target in Windows Server 2012
The iSCSI Target made its debut as a free download for Windows Server 2008 R2 in April 2011;
since then, there have been more than 60,000 downloads. That was the first step toward making it
available to everyone. Now, in Windows Server 2012, there is no separate download and
installation; it comes as a built-in feature. This blog will provide step-by-step instructions
to enable and configure the iSCSI Target.
If you are not familiar with the iSCSI Target, it allows your Windows Server to share block
storage remotely. iSCSI leverages the Ethernet network and does not require any
specialized hardware. In this release we have developed a brand-new UI integrated with
Server Manager, along with 20+ cmdlets for easy management. The following references
also provide additional examples/use cases:

Six Uses for the Microsoft iSCSI Software Target


The iSCSI Software Target can be used in so many ways it's like the Swiss army knife of
transportable storage. In this post I outline my favorite uses for an iSCSI Software Target; if you know
of other creative uses, please let me know!
Windows Storage Server 2008 and the iSCSI Software Target 3.2 are available from original
equipment manufacturers (OEMs) in a variety of preconfigured storage appliances. Partners and
customers can test and develop on the platform without having to buy a fully configured appliance
from an OEM by getting an evaluation copy on MSDN or TechNet. See Jose's blog for information
about that.

A NAS refers to servers that communicate over a network using a file protocol (like CIFS, SMB or
NFS), and a SAN refers to a network that connects application servers to dedicated storage devices,
using protocols like Fibre Channel or iSCSI to transport SCSI 'block' commands and data. NAS
appliances control the file system, while in a SAN the raw storage is exposed to the application
servers, and users can partition and format it using whatever file system the client supports. Block storage devices make it appear to the servers that the storage is a locally attached hard disk drive.
When you take a Windows Storage Server and add iSCSI functionality, it becomes a NAS-SAN
hybrid device. This has become known in the industry as a unified storage appliance. Either way, it
is remote storage, and it is going to be a big part of the future.

What is the Microsoft iSCSI Software Target?

This is the simplest way to understand the Microsoft iSCSI Software Target. Remote VHD files appear on the
application server as locally attached hard disks. Application servers running just about any workload can
connect to the target using an iSCSI initiator.

How do servers and clients access the iSCSI Software Target or file shares on a
Windows Storage Server?

In a typical network, administrators have separate iSCSI networks from user-accessible file-protocol networks
for SMB, SMB 2.0, NFS or CIFS traffic. Windows Storage Server allows a diverse mix of heterogeneous clients
or servers to access data.

Six Uses for the Microsoft iSCSI Software Target


Before we get started, I want to call out that there are very different hardware requirements
for the development, testing or demonstration scenarios compared to a production
scenario. It is a good idea to follow the guidance from your OEM on the types of workloads
they support for each configuration.

1) Consolidate storage for multiple application servers.


Why are people still cramming hard disks into application servers? Either they don't know
they can consolidate their storage into a SAN, or they have a range of factors that compel
them to use Direct Attached Storage (DAS). Using iSCSI makes it easy since it uses the
same network cards, switches and cables that your Ethernet provider has been selling for
years. The rise of huge data storage requirements has coincided with the biggest drop in the
cost per GB the world has ever seen. However, the battle is still lost if you have to keep
adding disks and backing up each server separately. Having a dedicated storage box to
service groups of application servers not only makes it easy to provision storage, it also
allows you to have only one critical system to back up and recover when disaster strikes.
How much load can it handle? Well, it depends. It depends on the system specs, network
cards, network switches, the RAID card, storage drivers, the number of spindles and the IOPS
that the workload is generating against the storage. If you look at a notoriously I/O-heavy
Exchange Server 2003, some ESRP submissions show solutions that use the iSCSI Software
Target. One solution, the HP AIO 1200, has a configuration with 12 disks that supports 1,500
Exchange users. If you added another 12 disks and a dedicated NIC, you could probably
support another Exchange server in the same configuration. Contrasting this to a low-I/O
workload like an intranet web server, you could easily host and consolidate the storage for
dozens of them.
The key takeaway here is that you don't want to oversubscribe any part of the system to more
application servers than the storage server can handle. Testing and validating the system at
peak workloads before deploying into production is an important best practice.
2) Test and Development scenarios are endless, especially for Clustering, Live
Migration, SAN transfer and Storage Manager for SANs.
Set up an iSCSI SAN for a clustered SQL Server on a single laptop! Testing SAN environments without a SAN, or creating a killer demo of a solution running on a single
laptop, is actually easy to set up with Hyper-V. See the virtualization section below for some
drawings that outline different options to get storage to Hyper-V virtual machines. You can
test these SAN technologies for $0 out of pocket on industry-standard hardware using
Windows Storage Server 2008 and the iSCSI Software Target. Being able to do proof-of-concept testing with the semantics of a high-availability application attaching to a SAN,
without spending a ton of money on a SAN, is a big plus.
The bare-minimum configuration is a single-CPU server with a SATA drive and an Ethernet
port. This is great for testing in a developer's office, but it will not meet most workload
requirements for data throughput and high availability. Certainly not for production.
If you want to test the throughput of a solution and remove the spinning disks as a bottleneck,
you could also use the RAMDISK command:
To create a VHD in system memory, use RAMDISK:<size-in-MB> for the device path.
For example, to create two 100MB VHDs in memory, use the following device paths:
RAMDISK:100 for the first VHD
RAMDISK:101 for the second VHD (we enforce device path uniqueness, so you need to
add 1 to the size to make it unique)
Note: This is an undocumented and unsupported command, but it is useful.
3) Set up an iSCSI SAN for a Windows cluster.
The Microsoft iSCSI Software Target supports persistent reservations so that your storage
resources can fail over from one cluster node to another. The Microsoft iSCSI Software Target
supports both SCSI-3 and SCSI-2 reservation commands. Fibre Channel and SAS interfaces
and the associated fabric/switches dominate the market for cluster shared storage today.
Using iSCSI Targets to back a cluster is an option, and when you use a nice RAID card with a
bunch of fast drives, you will get much better performance. In an upcoming post we will have
a detailed setup document outlining how to create clustered storage servers, with some great
recommendations for highly available file servers and iSCSI Target clusters.
4) Consolidate servers into Hyper-V VMs and migrate the data to a Windows Storage
Server.
The Finance dept. wants another server to run their LOB application, but you are out of servers?
Here is one quick solution: convert one of your servers to a Hyper-V server and create several
VMs. After migrating the server instance to a VM, create an iSCSI LUN on a Windows
Storage Server, attach it to the VM, and migrate the data to the new LUN. Enable Hyper-V
guests to migrate from one host to another and quickly transport the LUNs from one to
another using SCVMM. Hyper-V and iSCSI storage servers go together like PBJ (that's
peanut butter and jelly).
5) Diskless SAN boot over iSCSI!
Ok, now we are getting somewhere. While we were just celebrating not putting data disks in
all these servers, why not remove all the disks? You can boot from an iSCSI SAN! Imagine
your datacenter blades humming along without a bunch of spinning platters! Not only does
this save big bucks in hard-disk costs, but it also reduces your power consumption. iSCSI
booting has been possible since the 2003 release of the iSCSI Initiator. If you want to boot a
physical server off of iSCSI, you need an iSCSI-boot-capable NIC like the Intel PCI-E
PRO/1000 PT or the Broadcom BCM5708C NetXtreme II GigE, or you can use an iSCSI HBA
like the QLogic QLE 4062c.
If you want to boot Hyper-V VMs off iSCSI, you could make the connection in the parent OS
using the iSCSI initiator and then carve up storage for the VMs, but if you want to boot
directly off of iSCSI, you will need a 3rd-party solution like DoubleTake's NetBoot/i or
gPXE, which is an open-source boot loader.
Windows doesn't care that there are no hard disks in the box, as long as the network can
handle it. Check out the iSCSI Boot Step-by-Step Guide for more information.
6) Bonus storage for people in your organization. Storage administrators can be
heroes! (for once).
Did you know that you can set up an iSCSI Target with some drives and carve up and hand out the storage to people running Windows clients? The iSCSI initiator is built into every
version of Windows, so you can quickly provision storage and assign it to just about anybody.
Our storage guru recently sent out an email to everybody in the team that said: get 20GB of
storage that will be backed up each week, just send me your IQN (Control Panel > iSCSI
Initiator) and I will grant you access to your personal, private storage. That is pretty cool,
especially when you run out of space or you need a place to back up some files.

Topologies of Common Configurations


Here is a simple configuration. The storage backend usually refers to an array of disks in a
RAID configuration, but it could also be a JBOD (just a bunch of disks). It could be as small
as a single SATA disk attached to the motherboard, or it could be a rack of 1000 Fibre
Channel drives in a RAID configuration with redundant host bus adapters (HBAs) in the
storage server.
There is no limit to how much you can spend on a backend storage array that meets your
needs for high availability, I/O bandwidth or advanced features like array-based replication.
There are no vendors that I know of that attach the disks directly to a SATA controller;
at a bare minimum, people usually use some sort of RAID controller to meet their I/O
and data protection requirements.

Here is a simple configuration using redundant networking. Multipathing using the Microsoft
MPIO framework is recommended to ensure redundancy and maximum throughput. See this
recent Multipath I/O Step-by-Step guide for details. Many storage arrays that are SPC-3
compliant will work by using the Microsoft MPIO DSM. Some storage array partners also
provide their own DSMs to use with the MPIO architecture.

Here we have a high-availability configuration of storage servers. If one of the machines
sucks in a bee that shorts out the motherboard, the other machine will pick up and the
application server doesn't have to know the storage server went down.

Now we are getting close to Nirvana. Here we have a high-availability configuration of
storage servers and redundant networking paths. Now storage servers and network switches
can fail and service continues without interruption.

Ok, time to cluster the front-end servers. Now we have a highly-available configuration of
application servers and another cluster for the storage servers.

Now let's talk about using all of it together: clustered front-end (application servers) and/or
back-end (storage servers) along with MPIO. MPIO path failover times can be impacted by
the number of LUNs and the amount of I/O being generated, so make sure you test a fully
configured machine running peak I/O before moving it into production.

We tested failover on various configurations at Microsoft with MPIO while the servers were
being hammered with I/O using Jetstress or IOMeter. Using the inbox Microsoft DSM, we
saw good failover performance while using two 2-node application server clusters (running
Windows Server 2008) with 32 LUNs for each cluster (a total of 64 LUNs). The key here is
that the high-availability failover must be quick enough to support application servers that
throw a fit if the disk stops responding.
When using MPIO in these advanced configurations, the iSCSI Software Target team
recommends using Windows Server 2008 initiators.

Microsoft iSCSI Software Target


Multipath and Single-path Support Matrix
The following tables define tested limits and support for using the Microsoft iSCSI Software
Initiator with a single network path or multipath (MPIO) when connecting to the Microsoft
iSCSI Software Target in clustered and non-clustered environments.

*There is limited support for Windows Server 2003 iSCSI hosts when connected to the
Microsoft iSCSI Software Target if the iSCSI hosts or iSCSI Targets are clustered. Failures
on the iSCSI network path may result in delayed failover and recovery times. Failures in non-network-related areas have been tested with acceptable recovery times. The time to complete
a failover and recovery may vary and is dependent on the application I/O workload at the time
of failure.
Microsoft strongly recommends the use of Windows Server 2008 iSCSI hosts for clustered
configurations when connecting to the Microsoft iSCSI Software Target.
Note: The above is specific to Microsoft iSCSI Target configurations. Customers using
Windows Server, the Microsoft iSCSI Software Initiator and a logo'd iSCSI hardware array
should refer to the storage array vendor support statements for applicable supported
configurations.

Virtualization and iSCSI


Three ways to expose iSCSI LUNs to Hyper-V Virtual Machines
Here is a cool diagram that shows three different ways to get storage to a VM. See the
Storage options for Windows Server 2008 Hyper-V blog post for a complete breakdown.
Check out the Hyper-V Planning and Deployment Guide for deployment best practices.

I hope this post was helpful and gives you some ideas on different ways to use your new NAS
device and the Microsoft iSCSI Software Target.
=====================================================================

Step-by-step: Using the Microsoft iSCSI Software Target with Hyper-V (Standalone, Full, VHD)
Jose Barreto - MSFT, Microsoft Corporation
2 Feb 2009

Overview
In this post, I will show all the steps required to run Windows Server 2008 Hyper-V with the
Microsoft iSCSI Software Target. We will cover the specific scenario of a standalone Windows
Server 2008 server (as opposed to a clustered one) on a full install (as opposed to a core
install) and using a VHD file (as opposed to a pass-through disk).
In order to follow these instructions you will need at least two computers. One computer will
run a full install of Windows Server 2008 with the Hyper-V role enabled. The other computer
needs to be a Windows Storage Server (WSS) with the iSCSI pack or Windows Unified Data
Storage Server (WUDSS). Optionally, you could add a client for your virtual machine and a
computer for remote Hyper-V management.
Configuring the Networks
For your server running Hyper-V, you should consider having at least three Network Interface
Cards (NICs). One will be dedicated to iSCSI traffic. The second will be connected to the Virtual
Switch and used for traffic going to your virtual machine. The third NIC you will dedicate to
remote management. This configuration is shown in the diagram below:

Checking the Windows Storage Server


WSS (with the Microsoft iSCSI Software Target) comes preinstalled from the hardware vendor.
This special OS release is not available through the Microsoft sales channels like software retailers or
volume licensing. You can find more information about WSS and WUDSS at
http://www.microsoft.com/storageserver. Windows Storage Server 2008 is also available from
MSDN or TechNet subscriber downloads for non-production use (see details at
http://blogs.technet.com/josebda/archive/2009/05/13/windows-storage-server-2008-with-the-microsoft-iscsi-software-target-3-2-available-to-msdn-and-technet-plus-subscribers.aspx).

You should make sure you have the proper credentials (username and password) with
administrator privileges on the Storage Server. You should also make sure you have remote
access to the Storage Server via Remote Desktop. Once you log on to the Storage Server via
Remote Desktop, verify that you can locate the Microsoft iSCSI Software Target Management
Console (MMC), which can be found in the Administrative Tools menu. From a Storage Server
perspective, we'll perform all the configuration actions using the iSCSI Target MMC.

Checking the Server running Hyper-V


On the server running Windows Server 2008 Hyper-V, you should make sure to run Windows
Update to get the latest updates. This will ensure that you have the final release of Hyper-V,
not the beta version that was released with Windows Server 2008.
You will also need to enable the Hyper-V role. This is done in Server Manager by right-clicking the Roles node on the tree on the left and selecting Add Roles.

This will bring up the Add Roles Wizard, where you will find Hyper-V on the list of roles:

While configuring the Hyper-V role in the wizard, you should see the three (or more) NICs on
your server on the Create Virtual Networks step.
Make sure you do not select the NICs used for iSCSI traffic and Hyper-V remote management
in the Create Virtual Networks step.

You will need to restart the server after you add the Hyper-V role.
Loading the iSCSI Initiator
The next step now is to configure the iSCSI initiator on the Hyper-V server.
You can find the iSCSI Initiator under Administrative Tools in Windows Server 2008. You
can also find it in the Control Panel.
The first time you load the iSCSI initiator, it will ask you two questions.
The first question is about loading the Microsoft iSCSI Initiator service every time:

The second question is about configuring the firewall to allow the iSCSI traffic:

You should click on Yes for both questions.


After that, the iSCSI Initiator Properties window will load, showing the General tab.
This tab gives you an important piece of information: your initiator name, or IQN. We'll need
this later when configuring the target:

Configuring the target portal


The next step is to configure the initiator with the address of your iSCSI target portal.
In our case, this is the computer running Windows Storage Server and the Microsoft iSCSI
Software Target.
In the iSCSI Initiator Properties window, select the Discovery tab and add the IP address of
the Storage Server to the list of Target Portals.

Click on Add Portal to add the information. You will need the IP address of your Storage
Server at this point. Port 3260 is the default.

Here's the screen after the Target Portal is added:

Now, if you switch over to the Targets tab of the iSCSI Initiator Properties window, you will see
this:

This blank list of targets is expected at this point, since we haven't configured any targets yet.
We'll do that next.
Creating the iSCSI Target
Now we switch over to the Microsoft iSCSI Software Target side, on the Windows Storage Server.
We will create the target using the Microsoft iSCSI Software Target MMC we mentioned before.

After starting the wizard, skip the introduction page by clicking Next.

Next, you will provide the name and description for the target. We'll simply use T1 for
the name.

On the following screen, you need to provide the identification for the target.
Here you can use an IQN (iSCSI Qualified Name) or you can use the advanced setting to go
with an IP address, DNS name or MAC address.

Since our initiator in this case already contacted the Storage Server, you can simply click on
Browse and pick the IQN from there.

Once you get the right IQN, click Next to proceed.

Finally, click Finish to create the target.

Adding LUNs to the iSCSI Target


Now that the target is created, you need to add virtual disks or LUNs to it. These will be the
logical units that will be presented to the initiator.
You will do this by right-clicking the target T1 and selecting the option to Create Virtual Disks
for iSCSI Target.

You will start the wizard. Click Next on the introduction page.

Next, you will provide a path to the file to use as your virtual disk or LUN. This file will have a
VHD extension.

Next, you will specify the size for the virtual disk or LUN. We'll create a 20GB LUN here, which
is enough to install Windows Server 2008 later on. The iSCSI target uses fixed-size VHD files,
but you can extend them if needed.

Next, you will specify a description for the virtual disk or LUN.

Finally, click Finish to create the virtual disk. Depending on the size, it could take a while.

At this point, you can see the target and its virtual disk on the Microsoft iSCSI Software Target
MMC:

You can check the properties of the target, including the target IQN, by right-clicking the
target name and clicking on Properties.

Now we go back to the initiator side.


Configuring the iSCSI Initiator targets
When we last checked the Targets tab of the iSCSI Initiator Properties window, we had an
empty list.
With the target properly configured, you should see it show up after you click on
Refresh:

Now you need to click on Log on to connect to the target.


On the Log On to Target window, be sure to check the box to Automatically restore this
connection when the computer starts.

Once you log on, the target status will change to Connected.

The LUN should also appear in the list of Volumes and Devices in the iSCSI Initiator
Properties:

Now we need to work on that LUN to turn it into an NTFS volume with a drive letter.
That is done in Disk Management.
Preparing the Volume
If you followed all the steps so far, you should already have the LUN as an offline,
uninitialized, unallocated volume in Server Manager, under Disk Management:

The first thing you need to do here is to bring the volume online, by right-clicking on the disk:

The volume will be brought online automatically if you are running the Standard Edition of Windows
Server 2008.
After that, the volume will be online, but still uninitialized. You will then select the option to
Initialize Disk:

At this point you need to select a partition style (MBR or GPT). The older MBR style is
commonly used for small partitions. GPT is required for partitions larger than 2TB.

After this, you have a basic disk online which you can use to create an NTFS volume. If you
right-click it again, there will be an option to create a New Simple Volume.

Once you go through that wizard, format the volume and assign it a drive letter, you will have
the final result in Disk Management as drive E:

We'll use this drive E: as our storage for Hyper-V.


Creating the Virtual Machine
Last but not least, we must now create our virtual machine. We'll do this in Hyper-V
Manager:

There are two places in the New Virtual Machine Wizard where you will refer to the E: disk.
The first one is when you select the location of your virtual machine configuration files:

The second one is when you specify the location of the virtual hard drive used by that virtual
machine:

In this case, by using the wizard, we selected the default option of using a Dynamically
Expanding VHD file that is exposed to the child partition as Virtual IDE.
You can verify that by looking at the settings for the resulting Virtual Machine:

If you click on the Inspect button, you can see it's a Dynamic VHD:

You could, of course, use any of the other types of VHD files or even a pass-through disk, but
that's a topic for another blog post.
Conclusion
I hope this blog post has helped you understand all the steps required to use the Microsoft
iSCSI Software Target to provision storage for your Windows Server 2008 server running
Hyper-V.
This post covered a scenario where Hyper-V runs on a full install of Windows Server 2008,
using a VHD file on the parent and without Failover Clustering.
=====================================================================

Diskless servers can boot and run from the Microsoft iSCSI Software Target using
a regular network card!

Scott M. Johnson - MSFT
3 May 2011

The new Microsoft iSCSI Software Target 3.3 includes support for a new feature called differencing virtual hard
disks (VHDs). This feature helps deploy diskless boot for servers running Windows, especially in a Windows
HPC Server 2008 R2 compute cluster environment.
We worked closely with the HPC team to deliver a simple management experience. The deployment process is
tightly integrated within the Microsoft HPC management pack, which manages the iSCSI target using an HPC
provider. Support for differencing VHDs and iSCSI boot in iSCSI Software Target 3.3 is useful in other
deployments beyond HPC and this post will focus on how to deploy iSCSI boot outside of the HPC
environment. You can get more details about how this works in this blog post.
Advantages of using diskless boot include:

1. Fast deployment: A single golden image can be used to deploy many machines
in parallel. At Microsoft we tested deployments as large as 256 nodes in 30
minutes.

2. Quick recovery: Since the operating system image is not on the server, you
could simply replace the blade and point to the old remote VHD, and boot from it.
No operating system installation is required.

3. Cost reduction: Because many servers can boot off of a single image, CAPEX
and OPEX are directly reduced by having lower storage requirements for the
operating system images. This also reduces the power, cooling, and space
required for the storage.
This post will explain how diskless boot works and how you can try it out yourself!

Terminology
Before going any further, let's clarify a few terms:
Client: The server that will boot from the image stored on the iSCSI target server. It can also be referred to as an
iSCSI initiator or a diskless node. Note: Since iSCSI boot is only supported on Windows Server SKUs, the
term Client refers to the iSCSI client machine, which runs a Windows Server OS.
Golden image: A generalized (sysprep'd) image containing an operating system and any other required
applications. It is prepared on an iSCSI LUN and set to be a read-only master image. A section below describes
one way to create the golden image.
HPC: High-performance computing (see here).
iSCSI target: The endpoint where the iSCSI initiator will establish the connection. Once the initiator connects
to the iSCSI target, all the VHDs associated with that target will be accessible to that initiator.
iSCSI Software Target: The software application which provides iSCSI storage to clients (iSCSI initiators).
Differencing VHD: One of the supported VHD formats used by the Microsoft iSCSI Software Target. (For a
definition, please see here.) When using diskless boot, the clients will read from the golden image and write to a
differencing VHD. This allows multiple clients to boot off of the same golden image.

Boot loader: Refers to the software that can bootstrap the operating system. In the iSCSI boot scenario, it
contains a basic iSCSI initiator that can connect to the iSCSI target and mount a disk. It is an alternative
to an iSCSI boot-capable NIC/HBA.

Overview
There are two phases when managing diskless clients:

1. Deployment: Enabling a diskless physical server (bare metal) to boot Windows involves:
o Preparing a golden image. If you are planning to reuse the image, you can store it in a folder, and copy it out later when you do the deployment.
o Creating one or more differencing VHDs, and setting the golden image as the parent VHD.
o Creating an iSCSI target for each differencing VHD and assigning the target to the client.
o Powering on the client.
o Final operating system customization. This is the Windows setup process; it can be automated using an unattended setup file.

2. Boot process: This is the normal machine boot process that is handled by the
iSCSI boot-capable hardware or software. (For diagrams on how the boot process
works, see the boot process sections below.)
See this TechNet article on iSCSI Boot, which covers more information on this topic. It includes a step-by-step guide for deployment using options 1 & 2 mentioned below.

Hardware/Software options
You will need one of the following options to enable iSCSI boot on a physical client:

1. An iSCSI boot-capable NIC, or
2. An iSCSI boot-capable HBA, or

An iSCSI boot-capable NIC


The NIC needs to be configured in the server BIOS to provide the iSCSI target IQN, iSCSI target IP address,
and credentials if set (examples of this type include Intel or Broadcom).

Figure 1 - iSCSI Boot capable NIC configuration

An iSCSI boot-capable HBA

An iSCSI boot-capable HBA requires similar information to the iSCSI NIC. The following example shows a
QLogic HBA.

Figure 2 - iSCSI boot capable HBA configuration

An iSCSI boot loader

A software boot loader needs more configuration and details are in the workflow section of this post. Examples
of a boot loader are gPXE (open source) and Doubletake Netboot/I (commercial). This post will use the gPXE
as an example. You can also use iPXE instead of gPXE.

Boot Process
There are two stages during machine boot up:

1. Pre-boot
2. Windows boot
The pre-boot phase can be executed by any of the three options described above (NIC, HBA, or software boot
loader).

For hardware, the boot loader is built into the firmware of the NICs or HBAs, and it
can connect directly to the iSCSI Target and mount the assigned disk
containing the Windows operating system image.

With a software boot loader, you can put the boot loader image on a CD or a USB
drive, and have the computer boot to that device. Once the computer logs on to
the iSCSI target, it enters the Windows boot phase.

The following diagram describes the components involved on the client computer during the boot process:

Figure 3 - Boot process components


When the pre-boot loader starts, it loads a real-mode network stack driver. The loader contains an iSCSI initiator
which logs on to the iSCSI target and mounts the boot disk to the system. Another function of the loader is to
populate the iSCSI Boot Firmware Table (iBFT), which is required for iSCSI boot. The Boot Parameter driver in
Windows will load the parameters from the iBFT, and the Microsoft iSCSI Software Initiator will be able to
connect to the iSCSI target using the parameters set in the iBFT. The importance of the iBFT is to be able to
share the parameters between the iSCSI boot initiator (which establishes the session in the pre-boot phase) and
the Microsoft iSCSI initiator (which establishes the session after Windows boots).

Various deployment configurations


If you are using hardware NICs or HBAs, please visit this TechNet page for a different approach for
deployment.
Once you have a hardware or software solution selected, you need to decide whether you want to leverage
a DHCP and TFTP server for the deployment:

Hardware (NIC or HBA):
o Using DHCP for deployment: see your vendor's documentation.
o Without DHCP: see "Using an iSCSI boot-capable NIC or HBA" below.

Software loader (gPXE):
o Using DHCP for deployment: see "Using a Software boot loader with DHCP and TFTP servers" below.
o Without DHCP: see "Using a Software Boot Loader without DHCP and TFTP" below.

Using an iSCSI boot-capable NIC or HBA


A hardware solution is easy to deploy: you just configure the firmware with the iSCSI target IQN and IP
address (see Figure 1 above), then power up the client. The basic iSCSI initiator built into the firmware will be
able to connect to the target and mount the VHDs.
Since it doesn't use a software boot loader, it doesn't need additional infrastructure such as a TFTP server in the
deployment. The downside is that you need to acquire the hardware, and configure each one manually. To find
out how to automate the configuration, you will need to contact the hardware vendor.
Required hardware:

Client machine with iSCSI boot-capable NIC or HBA

Server running the iSCSI Software Target

The boot process


Once the client NIC has been configured, the machine will boot as illustrated below:

Figure 4 - Boot process using iSCSI Boot capable hardware

1. When the client machine boots up, it reads the Target IP and IQN information, and
the iSCSI NIC/HBA connects to the iSCSI target.

2. The iSCSI Target accepts the connection, and presents a VHD as a disk to the
client. The disk is then mounted on the client.

3. The boot process proceeds as if the image resides on a local disk. Once Windows
starts up, it starts the Microsoft iSCSI initiator which uses the parameters specified
in the iBFT table to connect to the target. You can also find more about iBFT here.

Using a Software boot loader with DHCP and TFTP servers


A TFTP server hosts the boot loader in a central location. You can configure a DHCP server to provide each
client machine a unique target. When a client boots up, it acquires its IP address, the TFTP server location, as well as
information about the target to connect to, from the DHCP server. Using DHCP in combination with the TFTP
server will reduce the management operations. This is the approach used in Windows HPC diskless cluster
deployments. The HPC management pack guides users to configure the DHCP and TFTP servers in a simplified
manner. This configuration can scale higher with automation support. The hardest part is to configure the DHCP
server; the configuration needs to adhere to the software loader specification. To get more information on how
to configure the DHCP server, please visit here.
Required hardware:

Client machine with a NIC that supports PXE boot

Server with the iSCSI Software Target installed

Server with the DHCP and Windows Deployment Services roles installed (WDS
contains the TFTP server feature)

The boot process


Once the configuration of the DHCP server is done, and the boot loader has been copied to the TFTP server, the
machine boot sequence follows this process:

Figure 5 - Boot process when using DHCP, TFTP server with software boot loader

1. The client machine is powered up with PXE boot enabled. It requests the IP
address, the TFTP server location, as well as iSCSI target connection information,
from the DHCP server. (Note, most computers today support PXE boot.)

2. The DHCP server responds with all the information.


3. The client machine contacts the TFTP service for the boot loader.
4. The boot loader, which contains a basic iSCSI boot initiator, is copied to the client
machine.

5. The client machine uses the basic iSCSI boot initiator to log on to the iSCSI target.
Once the connection is established, the disk is mounted on the client machine.

6. The boot process proceeds as if the boot image is on a local disk. Once Windows
starts up, it loads the Microsoft iSCSI initiator which uses the parameters specified
in the iBFT table to connect to the target.

Using a Software Boot Loader without DHCP and TFTP


In a test environment, it may not be feasible to configure a DHCP and TFTP server. Without a TFTP server, you
need to find another way to present the boot loader image to the client machines. You can put the boot
loader on a USB drive or a CD in the client machine. This is the simplest way to try it out and does not require any
extra hardware investment. The step-by-step guide below focuses on this configuration.
Required hardware:

A client machine with a NIC.

A device (CD/USB, etc.) that can store the software boot loader

A server with the iSCSI Software Target installed

The boot process


Once the software boot loader has been configured, the machine boot will follow this process:

Figure 6 - Boot process for local software boot loader

1. Set the client BIOS boot order to use the USB/or CD where the boot loader is
stored.

2. When the client machine boots, the boot loader uses the iSCSI target IP and IQN
information, and connects to the iSCSI target.

3. The iSCSI target accepts the connection and presents the VHD to the client. At
this point the disk will be mounted on the client.

4. The boot process proceeds as if the boot image resides on a local disk. Once
Windows starts up, it loads the Microsoft iSCSI initiator. The iSCSI initiator will use
the parameters specified in the iBFT table to connect to the target.

Step-by-step deployment guide


Here is the recipe for using a software boot loader on a CD/USB.
There are two stages:

1. Create a golden image


2. Deploy diskless clients using a golden image
After stage 2, the client machine will be ready to run a workload. You can use the machine just like one that has
an OS image stored locally.

A word of caution: While the iSCSI target is servicing diskless clients, rebooting the iSCSI target server will
cause unexpected behavior for the diskless clients. This is similar to removing the local hard drive while the
machine is running. If you require high availability in your environment, you should consider deploying a
failover cluster with the iSCSI Target's LUNs.

Create a Golden Image


To create a golden image, you need to do a normal OS and application install on the iSCSI VHD, to ensure the
setup process can load the driver stacks in the correct order. This is important because the Windows setup
process goes through a different path for installing the OS on a local hard drive vs. on an iSCSI disk. An image
installed on a local hard drive cannot be used for iSCSI boot, as the driver loading sequences will be different.
Once the installation is complete, run sysprep on the OS image.

Steps to create a golden image:

On the iSCSI target server:
o Create a 30GB fixed VHD (big enough to install the OS and applications).
o Create an iSCSI target.
o Assign the VHD to the target.
o Assign the target to the client iSCSI IQN.

On the client machine:
o Configure the boot loader:
  - For iSCSI boot NICs or HBAs: configure the NIC or HBA to point to the correct target.
  - For software boot loaders: configure it if DHCP or TFTP is not available. You can get a copy of a software boot loader (gPXE or iPXE) from http://etherboot.org/wiki/index.php or http://ipxe.org/download. The image needs to be stored on CD/DVD or USB with the following custom script. You can use the Rom-o-Matic.net web site to generate custom boot images, and copy the images to the media. www.etherboot.com has a list of tools you can use to copy images onto various media.
o Set the boot order to boot from the gPXE media.
o Set the system BIOS boot order to boot from the OS installation media second.
o Boot the diskless client.
o gPXE will fail because there is no OS to load, but the iSCSI connection will be established. (See the scripts below for an explanation.)
o Choose the OS installation media to boot (this leads to a normal OS setup).
o Select the iSCSI LUN to install the OS on and finish the OS installation normally.
o Now you have an OS installation on an iSCSI LUN.
o Optional: Install additional applications which you want in the sysprep image.
o Sysprep the image of the first client machine. (Find more details on sysprep here.)
o Use the sysprep'd image as the parent VHD for future deployment.

(Note, I used this page as reference when I did my setup. You may find it useful as well.)

Custom Script
dhcp net0
set initiator-iqn iqn.1991-05.com.microsoft:iscsiboot-${net0/mac}
set root-path iscsi:10.121.28.150::::iqn.1991-05.com.microsoft:testsvr-iscsiboot-${net0/mac}-target
set keep-san 1
sanboot ${root-path}

How to customize the boot loader script


The above script needs to be customized to your configuration:

Line 1: Assumes booting from the first NIC.

Line 3:
o Replace 10.121.28.150 with the actual iSCSI Target server address.
o Replace testsvr with the actual iSCSI Target server host name.
o Use IPv4, since the current version of gPXE doesn't support IPv6.
o gPXE will use the standard iSCSI TCP port 3260 and LUN 0.

Line 4: The command "set keep-san 1" means keep the iSCSI connection when a failure
occurs. By setting this, you will be able to install the OS image onto the iSCSI LUN
when creating the golden image.

Deploy diskless clients using a Golden image


Once the golden image is prepared, you can use it to deploy more clients by following the steps below:

On the target server:
o Create a differencing VHD using WMI scripts (see this sample), and specify as the base VHD the golden image, which is the sysprep'd OS image created in the section above.
o Create an iSCSI target, and assign it to the client.
o Assign the differencing VHD to the target.

On the client machine:
o Boot the client machine. Because the base VHD contains the OS image, it will load Windows, and continue with the OS finalization phase, where all of the unique information is saved to the differencing VHD.

P.S. Now you are done, enjoy your new diskless servers!

=====================================================================
Note: the instructions in the above references are for previous releases; they are
not applicable to Windows Server 2012. Please use the instructions provided in
this blog instead.

Overview
There are two features related to iSCSI Target:
The iSCSI Target Server is the server component which provides the block storage to
initiators.
The iSCSI Target Storage Provider (VDS and VSS) includes two components:
o VDS provider
o VSS provider
The diagram below shows how they relate to each other:

The providers are for remote Target management. The VDS provider is typically installed on a
storage management server, and allows users to manage storage in a central location using
VDS. The VSS provider is involved when an application running on the initiator takes an
application-consistent snapshot. This storage provider works with Windows Server 2012; for the
version support matrix, please go to the FAQ section.
As shown in the diagram, the iSCSI Target and Storage providers are enabled on different
servers. This blog focuses on the iSCSI Target Server, and will provide instructions for enabling
the iSCSI Target Server. Enabling the Storage providers is similar in the UI; just be sure to enable
them on the application server.
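If you prefer to discover and enable these components from PowerShell, here is a minimal sketch; the wildcard query is used so you do not have to assume the exact feature names:

Get-WindowsFeature *iSCSI*                       # lists the iSCSI Target Server and storage provider features with their exact names
Add-WindowsFeature <feature name from the list>  # enable the one you need on the appropriate server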

Terminology

iSCSI: An industry-standard protocol that allows sharing block storage over Ethernet. The
server that shares the storage is called the iSCSI Target. The server (machine) that consumes the storage
is called the iSCSI initiator. Typically, the iSCSI initiator is an application server. For example, if the
iSCSI Target provides storage to a SQL Server, the SQL Server will be the iSCSI initiator in that
deployment.
Target: An object which allows the iSCSI initiator to make a connection. The Target keeps
track of the initiators which are allowed to connect to it. The Target also keeps track of
the iSCSI virtual disks which are associated with it. Once the initiator establishes the
connection to the Target, all the iSCSI virtual disks associated with the Target will be
accessible by the initiator.
iSCSI Target Server: The server that runs the iSCSI Target. It is also the iSCSI Target role name in
Windows Server 2012.
iSCSI virtual disk: Also referred to as an iSCSI LUN. It is the object which can be mounted by
the iSCSI initiator. The iSCSI virtual disk is backed by a VHD file. For VHD compatibility,
refer to the FAQ section below.
iSCSI connection: The iSCSI initiator makes a connection to the iSCSI Target by logging on to a
Target. There can be multiple Targets on the iSCSI Target Server, and each Target can be
accessed by a defined list of initiators. Multiple initiators can make connections to the same
Target; however, this type of configuration is only supported with clustering, because when
multiple initiators connect to the same Target, all the initiators can read/write to the same
set of iSCSI virtual disks, and if there is no clustering (or equivalent process) to govern the disk
access, corruption will occur. With clustering, only one machine is allowed to access the iSCSI
virtual disk at a time.
IQN: A unique identifier of the Target or initiator. The Target IQN is shown when it is
created on the server. The initiator IQN can be found by typing a simple iscsicli command in the
command window.
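For example, a minimal sketch: running iscsicli with no arguments prints the initiator node name, and Get-InitiatorPort from the Storage module is an alternative way to read it:

iscsicli
(Get-InitiatorPort).NodeAddress    # PowerShell alternative; returns the initiator IQN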

Loopback: There are cases where you want to run the initiator and Target on the same
machine; this is referred to as loopback. In Windows Server 2012, it is a supported configuration.
In a loopback configuration, you can provide the local machine name to the initiator for
discovery, and it will list all the Targets which the initiator can connect to. Once connected,
the iSCSI virtual disk will be presented to the local machine as a newly mounted disk. There will
be a performance impact to the I/O, since it travels through the iSCSI initiator and Target
software stacks, compared to other local I/O. One use case of this configuration is to
have initiators write data to the iSCSI virtual disk, then mount those disks on the Target
server (using loopback) to check the data in read mode.
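As a quick loopback sketch, assuming the iSCSI initiator service is running and a Target has already been assigned to the local machine's IQN (these are the same initiator cmdlets used later in this post):

Start-Service msiscsi
New-IscsiTargetPortal -TargetPortalAddress localhost   # discover Targets on the local machine
Get-IscsiTarget | Connect-IscsiTarget                  # the iSCSI virtual disk appears as a new local disk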

iSCSI Target management overview


Using Server Manager
The iSCSI Target can be managed through the UI in Server Manager, or through cmdlets. With Server
Manager, a new iSCSI page will be displayed, as follows:

All iSCSI virtual disk and Target management can be done through this page.
Note: iSCSI initiator UI management is done through the initiator control panel, which can be
launched from Server Manager:

Using cmdlets
Cmdlets are grouped in modules. To get all the cmdlets in a module, you can type

Get-Command -Module <modulename>

iSCSI Target cmdlets: -Module iSCSITarget
iSCSI initiator cmdlets: -Module iSCSI
Volume, partition, disk, storage pool and related cmdlets: -Module Storage
To use the iSCSI Target end to end, cmdlets from all three modules will be used, as illustrated in
the examples below.
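For example, to see what each of these modules offers (a minimal sketch):

Get-Command -Module iSCSITarget
Get-Command -Module iSCSI
Get-Command -Module Storage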

Enable iSCSI Target


Using Server Manager (UI)
iSCSI Target can be enabled using Add Roles and Features in Server Manager:
1. Choose the Role-based or feature-based installation option

2. Select the server on which you want to enable iSCSI Target

3. Select the iSCSI Target role:

To enable the iSCSI Target feature, you should select the iSCSI Target Server role service.
4. Confirm the installation

Using cmdlets
Open a PowerShell window, and run the following cmdlet:
Add-WindowsFeature FS-iSCSITarget-Server
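To verify the result afterwards, a small sketch; Get-WindowsFeature shows the install state of the role service:

Get-WindowsFeature FS-iSCSITarget-Server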

Configuration
Create an iSCSI LUN
To share storage, the first thing to do is create an iSCSI LUN (a.k.a. iSCSI virtual disk). The iSCSI
virtual disk is backed by a VHD file.

Using Server Manager


Once the iSCSI Target role is enabled, Server Manager will have an iSCSI page:

The first wizard link is to create an iSCSI Virtual Disk.

Since Server Manager allows for multi-machine management, the UI is built to support that. If you have
multiple servers in the management pool, you can create an iSCSI virtual disk on any server with iSCSI
Target enabled, from one management UI.

The UI also pre-populates the path to iSCSIVirtualDisks by default. If you want to use a different one,
go to the previous page, and select Type a custom path. If the path doesn't exist, it will be created.

Specify the iSCSI virtual disk size.

Now the wizard will guide you to assign the virtual disk to an iSCSI Target.

Give the Target a name. This name will be discovered by the iSCSI initiator and used for the connection.

This page allows you to specify the initiators which can access the virtual disk, by allowing the Target
to be discovered by a defined list of initiators.
Clustering: You can configure multiple initiators to access the same virtual disk by adding more
initiators to the list. To add the initiators, click on the Add button.

The wizard is designed to simplify the assignment by using the server name. By default, it is
recommended to use the IQN. The IQN is typically long, so the wizard will be able to resolve the
computer name to an IQN if the computer is running Windows Server 2012. If the initiator is running a
previous Windows OS, you can find the IQN as described in the Terminology section.

CHAP is an authentication mechanism defined by the iSCSI standard to secure access to the Target.
It allows the initiator to authenticate to the Target and, in reverse, allows the Target to authenticate
against the initiator.
Note: You cannot retrieve the CHAP information once it is set. If you lose the CHAP information, it
will need to be set again.

Last is the confirmation page.

Once the wizard is completed, the iSCSI virtual disk will be shown on the iSCSI page.

If you want to find all the iSCSI virtual disks hosted on a volume, one simple way of doing this
is to go to the Volumes page and select the volume. All the iSCSI virtual disks on that volume
will be shown on the page:
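A hedged PowerShell equivalent: Get-IscsiVirtualDisk reports the backing VHD path of each virtual disk, so you can filter by the volume's drive letter (E: is an example):

Get-IscsiVirtualDisk | Where-Object { $_.Path -like "E:\*" }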

Using cmdlets
The same configuration can also be automated using cmdlets.
1. LUN creation: New-IscsiVirtualDisk c:\test\1.vhd -Size 1GB
The first parameter is the VHD file path. The file must not already exist. If you want to load an
existing VHD file, use the Import-IscsiVirtualDisk command. The -Size parameter specifies the size
of the VHD file.

2. Target creation: New-IscsiServerTarget TestTarget2 -InitiatorIds "IQN:iqn.1991-05.com.Microsoft:VM1.contoso.com"

The first parameter is the Target name, and -InitiatorIds stores the initiators which can
connect to the Target.

3. Assign the VHD to the Target: Add-IscsiVirtualDiskTargetMapping TestTarget2 c:\test\1.vhd
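Putting the three steps together, a minimal end-to-end sketch for the Target server side; the path, size, target name and initiator IQN are the example values from above:

New-IscsiVirtualDisk c:\test\1.vhd -Size 1GB
New-IscsiServerTarget TestTarget2 -InitiatorIds "IQN:iqn.1991-05.com.Microsoft:VM1.contoso.com"
Add-IscsiVirtualDiskTargetMapping TestTarget2 c:\test\1.vhd
Get-IscsiServerTarget -TargetName TestTarget2    # verify the target and its LUN mapping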

Configure the iSCSI initiator to log on to the Target


Once the iSCSI virtual disk is created and assigned, it is ready for the initiator to log on.
Typically, the iSCSI initiator and iSCSI Target are on different machines (physical or virtual).
You will need to provide the iSCSI Target server IP or host name to the initiator, and the
initiator will then be able to do a discovery of the iSCSI Target. All the Targets which can be
accessed will be presented to the initiator. If you cannot find the Target name, check:
1. The Target server IP or hostname which was given to the initiator.
2. The initiator IQN which was assigned to the Target object. It is very common to have a typo in
this field. One trick to verify this is to assign the Target with IQN:*, which means any
initiator can access this Target. It is not a recommended practice, but it is a good troubleshooting
technique.
3. Network connectivity between the initiator and Target machines.
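For the IQN:* trick, a hedged sketch using the Target cmdlets; the target name is an example, and you should restore the real initiator list once the problem is found:

Set-IscsiServerTarget -TargetName TestTarget2 -InitiatorIds "IQN:*"   # troubleshooting only: allow any initiator
Get-IscsiServerTarget -TargetName TestTarget2                         # confirm the assigned initiator IDs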

Using UI
Launch the iSCSI Initiator Properties from Server Manager -> Tools.

Go to the Discovery tab page and click on Discover Portal. Add the IP address of the iSCSI Target Server.

After discovery, all the Targets from the server will be listed in the Discovered Targets box.
Select the one you want to connect to, and click on Connect. This will allow the initiator to
connect to the target and access the associated disks.

The Connect button will launch a Connect to Target dialog box. If the target is not configured
with CHAP, you can simply click OK to connect.
To specify CHAP information, click on the Advanced button. Check the Enable CHAP log on box,
and provide the CHAP information.
Advanced configuration by specifying IPs for the iSCSI connection: If you want to dedicate
iSCSI traffic to a specific set of NICs, you can specify that in Connect using. By default, any IP
can be used for the iSCSI connection.
Note: if a specific IP is configured, and the IP address changes due to DHCP, the iSCSI
initiator will not be able to reconnect after reboot. You will need to change the IP on this
page, then connect.

Connection established.

Using cmdlets
By default, the iSCSI initiator service is not started, so the cmdlets will not work. If you launch the
iSCSI initiator from the control panel, it will prompt for service start, as well as for setting the
service to start automatically.
For the equivalent using cmdlets, you need to run

Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic
1. Specify the iSCSI Target Server name:

New-IscsiTargetPortal -TargetPortalAddress Netboot-1

This is similar to the discovery step in the UI.

2. Get the available Targets (this is optional):

Get-IscsiTarget

3. Connect:

Connect-IscsiTarget -NodeAddress iqn.1991-05.com.microsoft:netboot-1nettarget-target

If you want to connect to all the Targets, you can also type:

Get-IscsiTarget | Connect-IscsiTarget
4. Register the Target as a Favorite Target, so that it will reconnect upon initiator machine
reboot.

Register-IscsiSession -SessionIdentifier "fffffa80041460204000013700000007"

You can get the SessionIdentifier from the output of Connect-IscsiTarget, or Get-IscsiSession.
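Putting the initiator-side steps together, a minimal sketch that reuses the example names above; the pipeline into Register-IscsiSession is an assumption, and you can always pass -SessionIdentifier explicitly as shown in step 4:

Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress Netboot-1
Get-IscsiTarget | Connect-IscsiTarget
Get-IscsiSession | Register-IscsiSession    # mark the new sessions as Favorite Targets so they reconnect after reboot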

Create new volume


Once the connection is established, the iSCSI virtual disk will be presented to the initiator as a
disk. By default, this disk will be offline. For typical usage, you want to create a volume,
format it and assign it a drive letter so it can be used just like a local hard disk.

Using Server Manager


You can right-click on the disk to bring it online, but it is not necessary. If you run the New
Volume wizard, it will be brought online automatically.
From Server Manager -> File and Storage Services -> Volumes -> Disks, check that disk 2 is
in offline mode:

Launch the New Volume Wizard.

Select the disk. Disk 2 is the offline iSCSI virtual disk.

The UI will bring the disk online and initialize it to GPT. GPT is preferred; for more information,
see here. If you have specific reasons for creating an MBR partition, you will need to use the cmdlet.

Specify the volume size.

Assign a drive letter.

You can create either an NTFS or a ReFS volume. For more information about ReFS, please see the link.

Confirmation page to create the new volume.

Using cmdlets
The following cmdlets are provided by the Storage module:
1. Check if the initiator can see the disk: Get-Disk

2. Bring disk 3 online:

Set-Disk -Number 3 -IsOffline 0

3. Make disk 3 writable:

Set-Disk -Number 3 -IsReadOnly 0

4. Initialize disk 3:

Initialize-Disk -Number 3 -PartitionStyle MBR

5. Create a partition on disk 3 (to avoid a format-volume popup in Windows Explorer, let's not
assign a drive letter at this time; we will do that after the volume is formatted):

New-Partition -DiskNumber 3 -UseMaximumSize -AssignDriveLetter:$False

6. Format the volume:

Get-Partition -DiskNumber 3 | Format-Volume

7. Assign the drive letter now:

Get-Partition -DiskNumber 3 | Add-PartitionAccessPath -AssignDriveLetter:$true
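The same steps can also be chained together. Here is a minimal sketch for disk 3 that uses GPT (the preferred style noted above) and accepts the format popup trade-off by letting New-Partition assign the next free drive letter itself:

Set-Disk -Number 3 -IsOffline $false
Set-Disk -Number 3 -IsReadOnly $false
Initialize-Disk -Number 3 -PartitionStyle GPT
New-Partition -DiskNumber 3 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS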

FAQs
If you have used a previous release of the iSCSI Target, the most noticeable change in Windows
Server 2012 is the user experience. Some common questions are:
1. Installing the web download of iSCSI Target for Windows Server 2008 R2 on
Windows Server 2012.
The installation might succeed, but you won't be able to configure it. You need to uninstall the
download, and enable the inbox iSCSI Target.
2. Trying to manage the iSCSI Target with the MMC snap-in.
The new UI is integrated with Server Manager. Once the feature is enabled, you can manage
the iSCSI Target from the iSCSI tab page: Server Manager\File and Storage Services\iSCSI.
3. How do I get all the cmdlets for iSCSI Target?
Type Get-Command -Module iSCSITarget. The list shows all the cmdlets to manage the iSCSI
Target.
4. Running cmdlet scripts developed using the previous release.
Although most of the cmdlets do work, there are changes to the parameters which may not
be compatible. If you run into issues with cmdlets developed for the previous release,
please run Get-Help <cmdlet name> to verify the parameter settings.
5. Running WMI scripts developed using the previous release.
Although most of the WMI classes are unchanged, some changes are not backward
compatible. If you run into issues with scripts developed for the previous release,
please check the WMI classes and their parameters.
6. SMI-S support.
iSCSI Target doesn't have SMI-S support in Windows Server 2012.
7. VHD compatibility.

The iSCSI virtual disk stores data in a VHD file. This VHD file is compatible with Hyper-V, i.e. you
can load this VHD file using either iSCSI or Hyper-V. Hyper-V in Windows Server 2012
introduced a new virtual hard disk format, VHDX, which is not supported by the iSCSI Target. Refer
to the table below for more details:

8. Cmdlet help
If you have any questions about a cmdlet, type Get-Help <cmdlet name> to learn the usage.
Before you can use Get-Help, you need to run Update-Help -Module iSCSITarget; this allows
the help content to be downloaded to your machine. Of course, this also implies you will need
internet connectivity to get the content. This is a new publishing model for the help content,
which allows for dynamic updates of the content.
9. Storage Provider and iSCSI Target version interop matrix
The iSCSI Target has had a few releases in the past; below are the version numbers and the
supported OS each runs on:
iSCSI Target 3.2 <-> Windows Storage Server 2008
iSCSI Target 3.3 <-> Windows Storage Server 2008 R2 and Windows Server 2008 R2
iSCSI Target (built-in) <-> Windows Server 2012
For each Target release, there is a corresponding storage provider package, which allows
remote management. The table below shows the interop matrix.

Note:
1: Storage provider 3.3 on Server 2012 can manage iSCSI Target 3.2. This has been tested.
2: The 2012 downlevel storage provider is a web download; you can find the download
package here.
10. Does iSCSI Target support Storage Spaces?
Storage Spaces is a new feature in Windows 8 and Windows Server 2012, which provides
storage availability and resiliency with commodity hardware. You can find more information
about the Storage Spaces feature here. Hosting iSCSI virtual disks on Storage Spaces is
supported. Using iSCSI LUNs in a Storage Spaces pool is not supported. Below is a topology
diagram to illustrate the two scenarios:

Supported setup

Not supported setup

11. CSV support

The iSCSI Target doesn't support VHDs hosted on CSVs (Cluster Shared Volumes).

Conclusion
I hope this helps you get started using the iSCSI Target in Windows Server 2012, or makes for a
smoother transition from the previous user experience. This blog only covers the most basic
configurations. If you have questions that are not covered, please raise them in the comments, so I
can address them in upcoming postings.
