Introduction of iSCSI Target in Windows Server 2012
The iSCSI Target made its debut as a free download for Windows Server 2008 R2 in April 2011, and since then it has been downloaded more than 60,000 times. That was the first step in making it available to everyone. In Windows Server 2012 there is no separate download or installation; it ships as a built-in feature. This blog provides step-by-step instructions to enable and configure the iSCSI Target.
If you are not familiar with iSCSI Target, it allows your Windows Server to share block storage remotely. iSCSI leverages the Ethernet network and does not require any specialized hardware. In this release we have developed a brand-new UI integrated with Server Manager, along with 20+ cmdlets for easy management. The following references also provide additional examples and use cases:
A NAS refers to servers that communicate over a network using a file protocol (such as CIFS, SMB, or NFS), and a SAN refers to a network that connects application servers to dedicated storage devices, using protocols like Fibre Channel or iSCSI to transport SCSI 'block' commands and data. NAS appliances control the file system, while in a SAN the raw storage is exposed to the application servers and users can partition and format it using whatever file system the client supports. Block-storage devices make it appear to the servers that the storage is a locally attached hard disk drive.
When you take a Windows Storage Server and add iSCSI functionality, it becomes a NAS-SAN hybrid device. This has become known in the industry as a unified storage appliance. Either way, it is remote storage, and it is going to be a big part of the future.
This is the simplest way to understand the Microsoft iSCSI Software Target. Remote VHD files appear on the
application server as locally attached hard disks. Application servers running just about any workload can
connect to the target using an iSCSI initiator.
How do servers and clients access the iSCSI Software Target or file shares on a
Windows Storage Server?
In a typical network, administrators have separate iSCSI networks from user-accessible file-protocol networks
for SMB, SMB 2.0, NFS or CIFS traffic. Windows Storage Server allows a diverse mix of heterogeneous clients
or servers to access data.
One solution, the HP AIO 1200, has a configuration with 12 disks that supports 1,500 Exchange users. If you added another 12 disks and a dedicated NIC, you could probably support another Exchange server in the same configuration. Contrast this with a low-I/O workload like an intranet web server, where you could easily host and consolidate the storage for dozens of them.
The key takeaway here is that you don't want to oversubscribe any part of the system with more application servers than the storage server can handle. Testing and validating the system at peak workloads before deploying into production is an important best practice.
2) Test and Development scenarios are endless, especially for Clustering, Live
Migration, SAN transfer and Storage Manager for SANs.
Set up an iSCSI SAN for a clustered SQL Server on a single laptop! Testing SAN environments without a SAN, or creating a killer demo of a solution running on a single laptop, is actually easy to set up with Hyper-V. See the virtualization section below for some drawings that outline different options to get storage to Hyper-V virtual machines. You can test these SAN technologies for $0 out of pocket on industry-standard hardware using Windows Storage Server 2008 and the iSCSI Software Target. Being able to do proof-of-concept testing with the semantics of a high-availability application attaching to a SAN, without spending a ton of money on a SAN, is a big plus.
The bare-minimum configuration is a single-CPU server with a SATA drive and an Ethernet port. This is great for testing in a developer's office, but it will not meet most workload requirements for data throughput and high availability, and certainly not for production.
If you want to test the throughput of a solution and remove the spinning disks as a bottleneck, you can also use the RAMDISK device path syntax:
To create a VHD in system memory, use RAMDISK:<size-in-MB> for the device path.
For example, to create two 100MB VHDs in memory, use the following device paths:
RAMDISK:100 for the first VHD
RAMDISK:101 for the second VHD (device path uniqueness is enforced, so you need to add 1 to the size to make it unique)
Note: This is an undocumented and unsupported feature, but it is useful.
3) Set up an iSCSI SAN for a Windows cluster.
The Microsoft iSCSI Software Target supports persistent reservations so that your storage resources can fail over from one cluster node to another. The Microsoft iSCSI Software Target supports both SCSI-3 and SCSI-2 reservation commands. Fibre Channel and SAS interfaces and the associated fabric/switches dominate the market for cluster shared storage today. Using iSCSI Targets to back a cluster is an option, and when you use a good RAID card with a bunch of fast drives, you will get much better performance. In an upcoming post we will have a detailed setup document outlining how to create clustered storage servers, with some great recommendations for highly available file servers and iSCSI Target clusters.
4) Consolidate servers into Hyper-V VMs and migrate the data to a Windows Storage Server.
The finance department wants another server to run their LOB application, but you are out of servers? Here is one quick solution: convert one of your servers to a Hyper-V server and create several VMs. After migrating the server instance to a VM, create an iSCSI LUN on a Windows Storage Server, attach it to the VM, and migrate the data to the new LUN. Enable Hyper-V guests to migrate from one host to another and quickly transport the LUNs from one to another using SCVMM. Hyper-V and iSCSI storage servers go together like PB&J (that's peanut butter and jelly).
5) Diskless SAN boot over iSCSI!
OK, now we are getting somewhere. While we are celebrating not putting data disks in all these servers, why not remove all the disks? You can boot from an iSCSI SAN! Imagine your datacenter blades humming along without a bunch of spinning platters! Not only does this save big bucks in hard-disk costs, but it also reduces your power consumption. iSCSI booting has been possible since the 2003 release of the iSCSI Initiator. If you want to boot a physical server off of iSCSI, you need an iSCSI-boot-capable NIC like the Intel PCI-E PRO/1000 PT or the Broadcom BCM5708C NetXtreme II GigE, or you can use an iSCSI HBA like the QLogic QLE 4062c.
If you want to boot Hyper-V VMs off iSCSI, you could make the connection in the parent OS using the iSCSI initiator and then carve up storage for the VMs, but if you want to boot directly off of iSCSI, you will need a third-party solution like DoubleTake's netBoot/i or gPXE, which is an open-source boot loader.
Windows doesn't care that there are no hard disks in the box, as long as the network can handle it. Check out the iSCSI Boot Step-by-Step Guide for more information.
6) Bonus storage for people in your organization. Storage administrators can be a hero! (for once).
Did you know that you can set up an iSCSI Target with some drives and carve up and hand out the storage to people running Windows clients? The iSCSI initiator is built into every version of Windows, so you can quickly provision storage and assign it to just about anybody. Our storage guru recently sent out an email to everybody on the team that said: get 20GB of storage that will be backed up each week; just send me your IQN (Control Panel > iSCSI Initiator) and I will grant you access to your personal, private storage. That is pretty cool, especially when you run out of space or you need a place to back up some files.
Here is a simple configuration using redundant networking. Multipathing using the Microsoft
MPIO framework is recommended to ensure redundancy and maximum throughput. See this
recent Multipath I/O Step-by-Step guide for details. Many storage arrays that are SPC-3
compliant will work by using the MPIO Microsoft DSM. Some storage array partners also
provide their own DSMs to use with the MPIO architecture.
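If you want to script the MPIO setup, a minimal sketch on Windows Server 2012 or later might look like the following; the feature and cmdlet names come from the inbox MPIO module, and on older releases the mpclaim.exe utility plays this role:
Add-WindowsFeature Multipath-IO                 # install the MPIO framework
Enable-MSDSMAutomaticClaim -BusType iSCSI       # let the Microsoft DSM claim iSCSI devices automatically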
OK, time to cluster the front-end servers. Now we have a highly available configuration of application servers and another cluster for the storage servers.
Now let's talk about using all of it together: a clustered front end (application servers) and/or back end (storage servers) along with MPIO. MPIO path failover times can be affected by the number of LUNs and the amount of I/O being generated, so make sure you test a fully configured machine running peak I/O before moving it into production.
We tested failover on various configurations at Microsoft with MPIO while the servers were being hammered with I/O using Jetstress or Iometer. Using the inbox Microsoft DSM, we saw good failover performance while using two 2-node application server clusters (running Windows Server 2008) with 32 LUNs for each cluster (a total of 64 LUNs). The key here is that the high-availability failover must be quick enough to support application servers that throw a fit if the disk stops responding.
When using MPIO in these advanced configurations, the iSCSI Software Target team recommends using Windows Server 2008 initiators.
*There is limited support for Windows Server 2003 iSCSI hosts when connected to the Microsoft iSCSI Software Target if the iSCSI hosts or iSCSI Targets are clustered. Failures on the iSCSI network path may result in delayed failover and recovery times. Failures in non-network-related areas have been tested with acceptable recovery times. The time to complete a failover and recovery may vary and depends on the application I/O workload at the time of failure.
Microsoft strongly recommends the use of Windows Server 2008 iSCSI hosts for clustered configurations when connecting to the Microsoft iSCSI Software Target.
Note: The above is specific to Microsoft iSCSI Target configurations. Customers using Windows Server, the Microsoft iSCSI Software Initiator, and a logo'd iSCSI hardware array should refer to the storage array vendor's support statements for applicable supported configurations.
I hope this post was helpful and gives you some ideas on different ways to use your new NAS
device and the Microsoft iSCSI Software Target.
========================================================
Overview
In this post, I will show all the steps required to run Windows Server 2008 Hyper-V with the Microsoft iSCSI Software Target. We will cover the specific scenario of a standalone Windows Server 2008 server (as opposed to a clustered one) on a full install (as opposed to a core install) and using a VHD file (as opposed to a pass-through disk).
In order to follow these instructions you will need at least two computers. One computer will
run a full install of Windows Server 2008 with the Hyper-V role enabled. The other computer
needs to be a Windows Storage Server (WSS) with the iSCSI pack or Windows Unified Data
Storage Server (WUDSS). Optionally, you could add a Client for your Virtual Machine and a
computer for remote Hyper-V Management.
Configuring the Networks
For your server running Hyper-V, you should consider having at least three network interface cards (NICs). One will be dedicated to iSCSI traffic. The second will be connected to the virtual switch and used for traffic going to your virtual machine. The third NIC will be dedicated to remote management. This configuration is shown in the diagram below:
You should make sure you have the proper credentials (username and password) with administrator privileges on the Storage Server. You should also make sure you have remote access to the Storage Server via Remote Desktop. Once you log on to the Storage Server via Remote Desktop, verify that you can locate the Microsoft iSCSI Software Target Management Console (MMC), which can be found in the Administrative Tools menu. From a Storage Server perspective, we'll perform all the configuration actions using the iSCSI Target MMC.
This will bring up the Add Roles Wizard, where you will find Hyper-V on the list of roles:
While configuring the Hyper-V role on the wizard, you should see the three (or more) NICs on
your server on the Create Virtual Networks step.
Make sure you do not select the NICs used for iSCSI traffic and Hyper-V remote management in the Create Virtual Networks step.
You will need to restart the server after you add the Hyper-V role.
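If you prefer to script this step, here is a minimal sketch using the ServerManager PowerShell module (available on Windows Server 2008 R2 and later; the Hyper-V feature name is an assumption to verify with Get-WindowsFeature):
Import-Module ServerManager
Add-WindowsFeature Hyper-V -Restart    # installs the Hyper-V role and reboots if required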
Loading the iSCSI Initiator
The next step is to configure the iSCSI initiator on the Hyper-V server.
You can find the iSCSI Initiator under Administrative Tools in Windows Server 2008. You
can also find it in the Control Panel.
The first time you load the iSCSI initiator, it will ask you two questions.
The first question is about loading the Microsoft iSCSI Initiator service every time:
The second question is about configuring the firewall to allow the iSCSI traffic:
Click on Add Portal to add the information. You will need the IP address of your Storage
Server at this point. Port 3260 is the default.
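For reference, the same portal can also be added from the command line with the inbox iscsicli utility; the address below is a placeholder for your Storage Server's iSCSI NIC:
iscsicli QAddTargetPortal 192.168.1.10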
Now, if you switch over to the Targets tab of the iSCSI Initiator Properties window, you will see this:
This blank list of targets is expected at this point, since we haven't configured any targets yet. We'll do that next.
Creating the iSCSI Target
Now we switch over to the Microsoft iSCSI Software Target side, on the Windows Storage Server. We will create the target using the Microsoft iSCSI Software Target MMC we mentioned before.
After starting the wizard, skip the introduction page by clicking Next.
Next, you will provide the name and description for the target. We'll simply use T1 for the name.
On the following screen, you need to provide the identification for the target.
Here you can use an IQN (iSCSI Qualified Name) or you can use the advanced setting to go
with an IP address, DNS name or MAC address.
Since our initiator in this case already contacted the Storage Server, you can simply click on
Browse and pick the IQN from there.
Next, you will start the wizard to create a virtual disk for the target. Click Next on the introduction page.
Next, you will provide a path to the file to use as your virtual disk or LUN. This file will have a
VHD extension.
Next, you will specify the size for the virtual disk or LUN. We'll create a 20GB LUN here, which is enough to install Windows Server 2008 later on. The iSCSI target uses fixed-size VHD files, but you can extend them if needed.
Next, you will specify a description for the virtual disk or LUN.
Finally, click Finish to create the virtual disk. Depending on the size, it could take a while.
At this point, you can see the target and its virtual disk on the Microsoft iSCSI Software Target
MMC:
You can check the properties of the target, including the target IQN, by right-clicking the
target name and clicking on Properties.
Back on the Hyper-V server, once you log on to the target from the iSCSI Initiator, the target status will change to Connected.
The LUN should also appear in the list of Volumes and Devices in the iSCSI Initiator
Properties:
Now we need to work on that LUN to turn it into an NTFS volume with a drive letter.
That is done in Disk Management.
Preparing the Volume
If you followed all the steps so far, you should already see the LUN as an offline, uninitialized, unallocated disk in Server Manager, under Disk Management:
The first thing you need to do here is bring the disk online, by right-clicking on the disk:
The disk is brought online automatically if you are running the Standard Edition of Windows Server 2008.
After that, the disk will be online, but still uninitialized. You will then select the option to Initialize Disk:
At this point you need to select a partition style (MBR or GPT). The older MBR style is
commonly used for small partitions. GPT is required for partitions larger than 2TB.
After this, you have a basic disk online, which you can use to create an NTFS volume. If you right-click it again, there will be an option to create a New Simple Volume.
Once you go through that wizard, format the volume, and assign it a drive letter, you will have the final result in Disk Management as drive E:
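If you prefer to script the disk preparation, the following diskpart sketch performs the same online/initialize/format steps; it assumes the new LUN appears as disk 1 and that E is the drive letter you want (save the commands to a text file and run diskpart /s <file>):
select disk 1
online disk
attributes disk clear readonly
create partition primary
format fs=ntfs label="iSCSI LUN" quick
assign letter=E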
There are two places in the New Virtual Machine Wizard where you will refer to the E: disk.
The first one is when you select the location of your virtual machine configuration files:
The second one is when you specify the location of the virtual hard drive used by that virtual
machine:
In this case, by using the wizard, we selected the default option of using a Dynamically
Expanding VHD file that is exposed to the child partition as Virtual IDE.
You can verify that looking at the settings for the resulting Virtual Machine:
If you click on the Inspect button, you can see it's a dynamically expanding VHD:
You could, of course, use any of the other types of VHD files or even a pass-through disk, but that's a topic for another blog post.
Conclusion
I hope this blog post has helped you understand all the steps required to use the Microsoft
iSCSI Software Target to provision storage for your Windows Server 2008 server running
Hyper-V.
This post covered a scenario where Hyper-V runs on a full install of Windows Server 2008,
using a VHD file on the parent and without Failover Clustering.
========================================================
Diskless servers can boot and run from the Microsoft iSCSI Software Target using
a regular network card!
Scott M. Johnson (MSFT)
The new Microsoft iSCSI Software Target 3.3 includes support for a new feature called differencing virtual hard
disks (VHDs). This feature helps deploy diskless boot for servers running Windows, especially in a Windows
HPC Server 2008 R2 compute cluster environment.
We worked closely with the HPC team to deliver a simple management experience. The deployment process is
tightly integrated within the Microsoft HPC management pack, which manages the iSCSI target using an HPC
provider. Support for differencing VHDs and iSCSI boot in iSCSI Software Target 3.3 is useful in other
deployments beyond HPC and this post will focus on how to deploy iSCSI boot outside of the HPC
environment. You can get more details about how this works in this blog post.
Advantages of using diskless boot include:
1. Fast deployment: A single golden image can be used to deploy many machines
in parallel. At Microsoft we tested deployments as large as 256 nodes in 30
minutes.
2. Quick recovery: Since the operating system image is not on the server, you
could simply replace the blade and point to the old remote VHD, and boot from it.
No operating system installation is required.
3. Cost reduction: Because many servers can boot off of a single image, CAPEX
and OPEX are directly reduced by having lower storage requirements for the
operating system images. This also reduces the power, cooling, and space
required for the storage.
This post will explain how diskless boot works and how you can try it out yourself!
Terminology
Before going any further, let's clarify a few terms:
Client: The server that will boot from the image stored on the iSCSI target server. It can also be referred to as an iSCSI initiator or a diskless node. Note: Since iSCSI boot is only supported on Windows Server SKUs, the term Client refers to the iSCSI client machine running a Windows Server OS.
Golden image: A generalized (Sysprep'd) image containing an operating system and any other required applications. It is prepared on an iSCSI LUN and set to be a read-only master image. A section below describes one way to create the golden image.
HPC: High-performance computing (see here).
iSCSI target: The end point where the iSCSI initiator will establish the connection. Once the initiator connects
to the iSCSI target, all the VHDs associated with that target will be accessible to that initiator.
iSCSI Software Target: Software application which provides iSCSI storage to clients (iSCSI initiators).
Differencing VHD: One of the supported VHD formats used by Microsoft iSCSI Software Target. (For a
definition, please see here.) When using diskless boot, the clients will read from the golden image and write to a
differencing VHD. This allows multiple clients to boot off of the same golden image.
Boot loader: Refers to the software that can bootstrap the operating system. In the iSCSI boot scenario, it
contains a basic iSCSI initiator that can connect to the iSCSI target and mount a disk. It is an alternative solution
to an iSCSI boot-capable NIC/HBA.
Overview
There are two phases when managing diskless clients:
1. Image and target preparation:
Prepare a golden image. If you are planning to reuse the image, you can store it in a folder and copy it out later when you do the deployment.
Create one or more differencing VHDs, and set the golden image as the parent VHD.
Create an iSCSI target for each differencing VHD and assign the target to the client.
2. Boot process: This is the normal machine boot process, which is handled by the iSCSI-boot-capable hardware or software. (For diagrams on how the boot process works, see the boot process sections below.)
See this TechNet article on iSCSI Boot, which covers more information on this topic. It includes a step-by-step guide for deployment using options 1 & 2 mentioned below.
Hardware/Software options
You will need one of the following options to enable iSCSI boot on a physical client:
An iSCSI-boot-capable HBA requires similar information to the iSCSI NIC. The following example shows a QLogic HBA.
A software boot loader needs more configuration, and details are in the workflow section of this post. Examples of a boot loader are gPXE (open source) and DoubleTake netBoot/i (commercial). This post will use gPXE as an example. You can also use iPXE instead of gPXE.
Boot Process
There are two stages during machine boot up:
1. Pre-boot
2. Windows boot
The pre-boot phase can be executed by any of the three options described above (NIC, HBA, or software boot loader).
For hardware, the boot loader is built into the firmware of the NICs or HBAs, and it can connect directly to the iSCSI Target and mount the assigned disk containing the Windows operating system image.
With a software boot loader, you can put the boot loader image on a CD or a USB drive, and have the computer boot to that device. Once the computer logs on to the iSCSI target, it enters the Windows boot phase.
The following diagram describes the components involved on the client computer during the boot process. The iSCSI Boot Firmware Table (iBFT) is used to share the parameters between the iSCSI boot initiator (which establishes the session in the pre-boot phase) and the Microsoft iSCSI initiator (which establishes the session after Windows boots).
Boot process when using an iSCSI-boot-capable NIC or HBA:
1. When the client machine boots up, it reads the Target IP and IQN information, and
the iSCSI NIC/HBA connects to the iSCSI target.
2. The iSCSI Target accepts the connection, and presents a VHD as a disk to the
client. The disk is then mounted on the client.
3. The boot process proceeds as if the image resides on a local disk. Once Windows
starts up, it starts the Microsoft iSCSI initiator which uses the parameters specified
in the iBFT table to connect to the target. You can also find more about iBFT here.
Boot process when using the software boot loader over PXE (this option requires a server with the DHCP and Windows Deployment Services roles installed; WDS contains the TFTP server feature):
Figure 5 - Boot process when using DHCP and a TFTP server with the software boot loader
1. The client machine is powered up with PXE boot enabled. It requests the IP
address, the TFTP server location, as well as iSCSI target connection information,
from the DHCP server. (Note, most computers today support PXE boot.)
5. The client machine uses the basic iSCSI boot initiator to log on to the iSCSI target.
Once the connection is established, the disk is mounted on the client machine.
6. The boot process proceeds as if the boot image is on a local disk. Once Windows
starts up, it loads the Microsoft iSCSI initiator which uses the parameters specified
in the iBFT table to connect to the target.
Boot process when using the software boot loader from a local device (requires a device, such as a CD or USB drive, that can store the software boot loader):
1. Set the client BIOS boot order to boot from the USB drive or CD where the boot loader is stored.
2. When the client machine boots, the boot loader uses the iSCSI target IP and IQN
information, and connects to the iSCSI target.
3. The iSCSI target accepts the connection and presents the VHD to the client. At
this point the disk will be mounted on the client.
4. The boot process proceeds as if the boot image resides on a local disk. Once
Windows starts up, it loads the Microsoft iSCSI initiator. The iSCSI initiator will use
the parameters specified in the iBFT table to connect to the target.
A word of caution: While the iSCSI target is servicing diskless clients, rebooting the iSCSI target server will cause unexpected behavior on the diskless clients. This is similar to removing the local hard drive while the machine is running. If you require high availability in your environment, you should consider deploying a failover cluster for the iSCSI target's LUNs.
Create a 30GB fixed VHD (big enough to install the OS and applications).
For iSCSI boot NICs or HBAs: configure the NIC or HBA to point to the
correct target.
For the software boot loader: set the system BIOS boot order to boot from the boot loader device first and the OS installation media second. gPXE will fail because there is no OS to load, but the iSCSI connection will be established. (See the scripts below for an explanation.)
Select the iSCSI LUN as the installation destination and finish the OS installation normally.
Optional: Install additional applications which you want in the Sysprep image.
Sysprep the image of the first client machine. (Find more details on Sysprep here.)
Use the Sysprep'd image as the parent VHD for future deployments.
(Note, I used this page as reference when I did my setup. You may find it useful as well.)
Custom Script
dhcp net0
set initiator-iqn iqn.1991-05.com.microsoft:iscsiboot-${net0/mac}
set root-path iscsi:10.121.28.150::::iqn.1991-05.com.microsoft:testsvr-iscsiboot-${net0/mac}-target
set keep-san 1
sanboot ${root-path}
Line 3:
Replace testsvr with the actual iSCSI Target server host name.
Use IPv4, since the current version of gPXE doesn't support IPv6.
gPXE will use the standard iSCSI TCP port 3260 and LUN 0.
Line 4: The command set keep-san 1 means keep the iSCSI connection when a failure occurs. By setting this, you will be able to install the OS image onto the iSCSI LUN when creating the golden image.
Create a differencing VHD using WMI scripts (see this sample), and specify the base VHD which contains the golden image, i.e. the Sysprep'd OS image created in the section above (a hedged cmdlet-based sketch follows below).
Boot the client machine. Because the base VHD contains the image, it will load Windows and continue with the OS finalization phase, where all of the unique information is saved to the differencing VHD.
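For illustration only, here is a rough equivalent of that provisioning step using the Windows Server 2012 iSCSI Target cmdlets instead of the 3.3 WMI provider; the paths, target name, and initiator IQN are placeholders, not values from this post:
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\node01-diff.vhd -ParentPath C:\iSCSIVirtualDisks\golden.vhd   # differencing VHD on top of the golden image
New-IscsiServerTarget -TargetName Node01 -InitiatorIds "IQN:iqn.1991-05.com.microsoft:iscsiboot-001122334455" # target restricted to the client's boot IQN
Add-IscsiVirtualDiskTargetMapping -TargetName Node01 -Path C:\iSCSIVirtualDisks\node01-diff.vhd               # expose the differencing VHD through that target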
P.S. Now you are done, enjoy your new diskless servers!
========================================================
Note: The instructions from the above references are for previous releases; they are not applicable to Windows Server 2012. Please use the instructions provided in this blog instead.
Overview
There are two features related to iSCSI Target:
The iSCSI Target Server is the server component which provides the block storage to
initiators.
The iSCSI Target Storage Provider (VDS and VSS) includes 2 components:
o VDS provider
o VSS provider
The diagram below shows how they relate to each other:
The providers are for remote Target management. The VDS provider is typically installed on a storage management server, and allows users to manage storage in a central location using VDS. The VSS provider is involved when an application running on the initiator takes an application-consistent snapshot. This storage provider works with Windows Server 2012; for the version support matrix, please go to the FAQ section.
As shown in the diagram, the iSCSI Target and the Storage providers are enabled on different servers. This blog focuses on the iSCSI Target Server, and will provide instructions for enabling iSCSI Target Server. The UI for enabling the Storage providers is similar; just be sure to enable them on the application server.
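For scripted installs of the providers on the application or management server, a minimal sketch follows; the feature name below is an assumption to confirm with Get-WindowsFeature on your build:
Add-WindowsFeature iSCSITarget-VSS-VDS    # iSCSI Target Storage Provider (VDS and VSS)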
Terminology
iSCSI: An industry-standard protocol that allows sharing block storage over Ethernet. The server that shares the storage is called the iSCSI Target. The server (machine) that consumes the storage is called the iSCSI initiator. Typically, the iSCSI initiator is an application server. For example, if the iSCSI Target provides storage to a SQL Server, the SQL Server is the iSCSI initiator in that deployment.
Target: An object which allows the iSCSI initiator to make a connection. The Target keeps track of the initiators which are allowed to connect to it. The Target also keeps track of the iSCSI virtual disks which are associated with it. Once the initiator establishes a connection to the Target, all the iSCSI virtual disks associated with the Target will be accessible by the initiator.
iSCSI Target Server: The server that runs the iSCSI Target. It is also the name of the iSCSI Target role service in Windows Server 2012.
iSCSI virtual disk: Also referred to as an iSCSI LUN. It is the object which can be mounted by the iSCSI initiator. The iSCSI virtual disk is backed by a VHD file. For VHD compatibility, refer to the FAQs section below.
iSCSI connection: The iSCSI initiator makes a connection to the iSCSI Target by logging on to a Target. There can be multiple Targets on the iSCSI Target Server, and each Target can be accessed by a defined list of initiators. Multiple initiators can make connections to the same Target; however, this type of configuration is only supported with clustering, because when multiple initiators connect to the same Target, all the initiators can read from and write to the same set of iSCSI virtual disks, and if there is no clustering (or equivalent process) to govern the disk access, corruption will occur. With clustering, only one machine is allowed to access an iSCSI virtual disk at a time.
IQN: A unique identifier of the Target or initiator. The Target IQN is shown when it is created on the server. The initiator IQN can be found by typing a simple iscsicli command in a command window.
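For example, either of the following should print the local initiator IQN (the second line assumes the Windows Server 2012 Storage module):
iscsicli
(Get-InitiatorPort).NodeAddress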
Loopback: There are cases where you want to run the initiator and Target on the same machine; this is referred to as loopback. In Windows Server 2012, it is a supported configuration. In a loopback configuration, you provide the local machine name to the initiator for discovery, and it will list all the Targets which the initiator can connect to. Once connected, the iSCSI virtual disk is presented to the local machine as a newly mounted disk. There is a performance impact on I/O, since it travels through the iSCSI initiator and Target software stacks, compared to other local I/O. One use case for this configuration is to have initiators write data to the iSCSI virtual disk, then mount those disks on the Target server (using loopback) to check the data in read mode.
All iSCSI virtual disk and Target management can be done through this page.
Note: iSCSI initiator management is done from the initiator control panel, which can be launched through Server Manager:
Using cmdlets
Cmdlets are grouped in modules. To get all the cmdlets in a module, you can type Get-Command -Module IscsiTarget.
To enable the iSCSI Target feature, you should select the iSCSI Target Server feature.
4. Confirm the installation
Using cmdlets
Open a PowerShell window and run the following cmdlet:
Add-WindowsFeature FS-iSCSITarget-Server
Configuration
Create iSCSI LUN
To share storage, the first thing to do is create an iSCSI LUN (a.k.a. iSCSI virtual disk). The iSCSI virtual disk is backed by a VHD file.
CHAP is an authentication mechanism defined by the iSCSI standard to secure access to the target. It allows the initiator to authenticate to the Target, and in reverse allows the Target to authenticate against the initiator.
Note: You cannot retrieve the CHAP information once it is set. If you lose the CHAP information, it will need to be set again.
Once the wizard is completed, the iSCSI Virtual Disk will be shown on the iSCSI Page.
If you want to find all the iSCSI Virtual disks hosted on a volume, one simple way of doing this
is to go to the Volume page, and select the volume. All the iSCSI virtual disks on that volume
will be shown on the page:
Using Cmdlets
The same configuration can also be automated using cmdlets.
1. LUN creation: New-IscsiVirtualDisk c:\test\1.vhd -Size 1GB
The first parameter is the VHD file path; the file must not already exist. If you want to load an existing VHD file, use the Import-IscsiVirtualDisk cmdlet instead. The -Size parameter specifies the size of the VHD file.
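The remaining wizard steps, creating the target and assigning the virtual disk to it, can be scripted the same way; a minimal sketch, with the target name and initiator IQN as placeholders:
2. Target creation: New-IscsiServerTarget T1 -InitiatorIds "IQN:iqn.1991-05.com.microsoft:appserver.contoso.com"
3. Disk assignment: Add-IscsiVirtualDiskTargetMapping -TargetName T1 -Path c:\test\1.vhd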
Using UI
Launch the iSCSI initiator Properties from Server Manager -> Tools
Connection established.
Using Cmdlets
By default the iSCSI initiator service is not started, so the cmdlets will not work. If you launch the iSCSI initiator from the control panel, it will prompt you to start the service, as well as set the service to start automatically.
For the equivalent using cmdlets, you need to run:
Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic
1. Specify the iSCSI Target Server name:
Get-IscsiTarget
3. Connect:
If you want to connect to all the Targets, you can also type:
Get-IscsiTarget | Connect-IscsiTarget
4. Register the Target as a Favorite Target, so that it will reconnect when the initiator machine reboots. (A consolidated sketch of these steps appears below.)
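Since the screenshots for the individual steps are not reproduced here, the following is a minimal end-to-end sketch of the initiator-side cmdlets; the server name and target IQN are placeholders:
New-IscsiTargetPortal -TargetPortalAddress MyTargetServer      # 1. point the initiator at the iSCSI Target Server
Get-IscsiTarget                                                # 2. list the targets discovered on that portal
Connect-IscsiTarget -NodeAddress iqn.1991-05.com.microsoft:mytargetserver-t1-target -IsPersistent $true   # 3/4. connect and keep it as a favorite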
Using cmdlets
The following cmdlets are provided by the Storage module:
1. Check if the initiator can see the disk: Get-Disk
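The subsequent steps (bringing the disk online, initializing, partitioning, and formatting it) come from the same Storage module; a rough sketch, assuming the new LUN is disk number 1:
Set-Disk -Number 1 -IsOffline $false            # bring the new iSCSI disk online
Initialize-Disk -Number 1 -PartitionStyle GPT   # write a partition table
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS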
FAQs
If you have used a previous release of the iSCSI Target, the most noticeable change in Windows Server 2012 is the user experience. Some common questions are:
1. Installing the web download of iSCSI Target for Windows Server 2008 R2 on Windows Server 2012.
The installation might succeed, but you won't be able to configure it. You need to uninstall the download, and enable the inbox iSCSI Target instead.
2. Trying to manage the iSCSI Target with the MMC snap-in.
The new UI is integrated with Server Manager. Once the feature is enabled, you can manage the iSCSI Target from the iSCSI page: Server Manager\File and Storage Services\iSCSI.
3. How do I get all the cmdlets for iSCSI Target?
Type Get-Command -Module IscsiTarget. The list shows all the cmdlets available to manage the iSCSI Target.
4. Running cmdlet scripts developed with the previous release.
Although most of the cmdlets still work, there are changes to the parameters which may not be compatible. If you run into issues with cmdlets developed against the previous release, please run Get-Help <cmdlet name> to verify the parameter settings.
5. Running WMI scripts developed with the previous release.
Although most of the WMI classes are unchanged, some changes are not backward compatible. If you run into issues with scripts developed against the previous release, please check the WMI classes and their parameters.
6. SMI-S support.
The iSCSI Target doesn't have SMI-S support in Windows Server 2012.
7. VHD compatibility.
The iSCSI virtual disk stores data in a VHD file. This VHD file is compatible with Hyper-V, i.e. you can load the same VHD file using either the iSCSI Target or Hyper-V. Hyper-V in Windows Server 2012 introduced a new virtual hard disk format, VHDX, which is not supported by the iSCSI Target. Refer to the table below for more details:
8. Cmdlet help.
If you have any questions about a cmdlet, type Get-Help <cmdlet name> to learn the usage. Before you can use Get-Help, you need to run Update-Help -Module IscsiTarget; this allows the help content to be downloaded to your machine. Of course, this also implies you will need internet connectivity to get the content. This is a new publishing model for the help content, which allows for dynamic updates.
9. Storage Provider and iSCSI Target version interop matrix.
The iSCSI Target has had a few releases in the past; below are the version numbers and the supported OS each runs on:
iSCSI Target 3.2 <-> Windows Storage Server 2008
iSCSI Target 3.3 <-> Windows Storage Server 2008 R2 and Windows Server 2008 R2
iSCSI Target (built-in) <-> Windows Server 2012
For each Target release there is a corresponding storage provider package which allows remote management. The table below shows the interop matrix.
Note:
1: Storage provider 3.3 on Windows Server 2012 can manage iSCSI Target 3.2. This has been tested.
2: The 2012 downlevel storage provider is a web download; you can find the download package here.
10. Does iSCSI Target support Storage Spaces?
Storage Spaces is a new feature in Windows 8 and Windows Server 2012 which provides storage availability and resiliency with commodity hardware. You can find more information about the Storage Spaces feature here. Hosting iSCSI virtual disks on Storage Spaces is supported. Using iSCSI LUNs in a Storage Spaces pool is not supported. Below is a topology diagram to illustrate the two scenarios:
[Diagram: supported setup (iSCSI virtual disks hosted on Storage Spaces) vs. not supported setup (iSCSI LUNs used in a Storage Spaces pool)]
Conclusion
I hope this helps you get started with the iSCSI Target in Windows Server 2012, or make a smoother transition from the previous user experience. This blog only covers the most basic configurations. If you have questions that are not covered, please raise them in the comments, so I can address them in upcoming postings.