
LAB GUIDE

FlashArray™
Basic Array Management

Version 2
March 23, 2021

Contents
Summary
Audience
Introduction
Basic Management Course Labs
Exercise 1: An Overview of the GUI / Estimated Completion Time – 30 minutes
Exercise 2: FC Hosts and Volumes / Estimated Completion Time – 45 minutes
Exercise 3: iSCSI Host and Volume / Estimated Completion Time – 45 minutes
Exercise 4: Analyzing Performance / Estimated Completion Time – 45 minutes
Exercise 5: Snapshots / Estimated Completion Time – 45 minutes
Exercise 6: Asynchronous Replication / Estimated Completion Time – 45 minutes
Exercise 7: Near Synchronous Replication with ActiveDR / Estimated Completion Time – 45 minutes
Exercise 8: Synchronous Replication with ActiveCluster / Estimated Completion Time – 45 minutes


Summary

The purpose of this lab guide is to provide the user with some practice with the management and
configuration of a FlashArray™. Please use the guide as a starting point to investigate the options
available. Further documentation can be found in the User Guide and online help.

Pure Storage recommends that you always have access to the latest information and guides via our
support portal at: https://support.purestorage.com.

Audience

This guide is intended for storage administrators, IT professionals and other interested parties.


Introduction

Overview of the Lab Environment and Expectations


The lab environment consists of two Windows Server VMs and two FlashArray VMs. The virtualized FlashArray will appear
slightly different from a physical array; some of those differences will be further clarified in upcoming lab exercises. These
Virtual Machines are connected to the same network for both management and iSCSI traffic.

Click the play button at the top-right corner to start the FlashArray VM1 and Windows Server1 VMs at the same time. While the VMs
show as busy, they are not yet ready. Once they are green and show “Running,” they are ready for use.

Access to the array will be accomplished by logging in to the Windows server. Click on the Windows Server1 VM tile to get
started. Click the Ctrl-Alt-Del button on the top selection bar. Credentials are as follows:

Windows Server (10.1.1.12)

• Username: Administrator

• Password: pureuser

PurityVM01 (10.1.1.11)

• Username: pureuser

• Password: pureuser

After you log in, the Network settings panel may pop up. If it does, click No and proceed.


Basic Management Course Labs

Exercise 1: An Overview of the GUI / Estimated Completion Time – 30 minutes


Learning Objectives
• Learn how to access and navigate the different sections of the GUI

• Identify the functions of each GUI section

• Describe how “Audit Trail” shows the CLI command corresponding to each GUI administrative task

Task 1: Explore the “DASHBOARD” section


1. Launch Google Chrome to open a browser session.

2. Navigate to https://10.1.1.11 to access the GUI

3. Accept the self-signed certificate

4. Click Advanced, then click Proceed to 10.1.1.11

5. Use the following credentials unless otherwise instructed.

Log-in Credentials

User pureuser

Password pureuser


6. You will automatically land in the “DASHBOARD” section of the GUI. This section is divided into a navigation pane, an
alerts pane and a Dashboard pane, as seen below.

7. Open the User Guide by hovering over the Help link in the Navigation Pane.

8. Record the definition of capacity types listed in the guide 1:

9. Volumes __________________________________________

10. Snapshots __________________________________________

11. Shared __________________________________________

12. System __________________________________________

13. Empty __________________________________________

14. Close the User Guide and return to the Navigation Pane.

15. What version of Purity is installed? ______________________________ 2

16. What is the Array Name? ______________________________ 3

17. In the Capacity section of the Dashboard Pane, note the Total Capacity available.
a. ______________________________
18. The Hardware Health section of the Alerts pane shows a rendering of the array.

19. Note the array model in the text above the rendering ______________________ 4

20. How many controllers are shown and why is this different than a standard FlashArray? ______________________________ 5

1 If you have trouble finding these definitions, navigate to Using the GUI to Administer a FlashArray > Dashboard > Capacity.

2 Purity//FA 6.0.3 (found in the bottom left corner of the GUI)

3 PurityVM01 (found just above Purity version)

4 VMware - on a physical array, the specific model of controllers is listed.

5 The virtualized array only has one controller. A physical FlashArray has two controllers connected via Non-Transparent Bridging protocol over PCIe. The VM also has significantly less capacity
than a physical array.


Task 2: Explore the “HEALTH” section


1. On the Dashboard tab, click the array picture on the right; this takes you to the Health tab. Check that all hardware
elements are visible and in a healthy state.

2. Once in the “Health” section, note the Raw Capacity available in both T and TB

a. _____________________________________________________________ 6

b. What is the difference between the raw and total capacity? __________________

c. Explain the difference between these values ___________________________ 7

d. Can you estimate how much Effective Capacity is available assuming a 5:1 data reduction rate?
__________________________________________________ 8

3. Hover over one of the Flash modules. You can turn on the ID light on a flash module, or any other hot-swappable
component, if you need to identify it to someone in the datacenter, for replacement or reseating for example.

4. Hover over the Fibre Channel ports and notice that the last octet of the assigned PWWN matches the physical location of
the port: controller ID and FC port number.

Task 3: Explore the “Settings” Section


1. Click on the “Settings” link in the navigation pane

2. Click on the “System” tab within the Settings pane if not already selected

3. Add your email address as an “Alert Watcher”

4. Set a login banner to “For Authorized Use Only”

5. Schedule a maintenance window for the next two hours. This will add an alert tag to any automated alerts informing Pure
Storage Support that the alert was generated due to maintenance and can be associated with the preexisting
maintenance case.

6 T = Tebibytes (1,024 GiB); TB = Terabytes (1,000 GB). All other capacity references in the GUI use T.
7 The raw capacity includes space reserved for RAID, Garbage Collection, and metadata.
8 Total Capacity * 5


6. Look at the setting for “Array Time”. NTP setup is critical for proper array functionality.

7. How many NTP servers are there? ________________________________________ 9

8. Click on the “Network” tab within the Settings pane.

9. Look at “DNS settings”. Defining a DNS server is required for “Phone Home” and “Remote Assist” functionality. A DNS
server must also be defined to have the array managed through “Pure1”, which is a cloud-based management and
monitoring portal. 10

10. Click on the “Users” tab within the Settings pane to see an “Audit Trail”. This audit log shows the CLI commands
corresponding to administrative tasks performed in the GUI.
a. What is the most recent entry in the audit log ________________________________________? 11

Exercise 2: FC Hosts and Volumes / Estimated Completion Time – 45 minutes


Learning Objectives
• Add a host in the GUI and connect volumes (LUNs) to it

• Connect clustered Fibre Channel initiators to shared storage using a Host Group

• Connect Fibre Channel initiators to private storage

Overview
In this exercise we will use the following sample scenario: Create an Oracle Real Application Cluster (RAC) with three nodes.
Each node will have shared access to the database volume and the redo logs. Each individual server will also have its own
volume for scratch space. Here is the final state for those who are allergic to step-by-step instructions. For the rest, detailed
steps follow.

Final state:

• OraRAC Host Group containing hosts OraSrv01, OraSrv02, and OraSrv03
• OraSrv01 – WWN1 00:01:00:01:00:01:00:01, WWN2 00:02:00:02:00:02:00:02
• OraSrv02 – WWN1 00:03:00:03:00:03:00:03, WWN2 00:04:00:04:00:04:00:04
• OraSrv03 – WWN1 00:05:00:05:00:05:00:05, WWN2 00:06:00:06:00:06:00:06
• Three private LUNs, 50GB each: OraVol01, OraVol02, and OraVol03 (one per host)
• Two shared LUNs, 200GB each: OraData and OraRedo (connected to the OraRAC host group)

Task 1: Create the Hosts


1. If not already logged in, open a browser window and navigate to 10.1.1.11.

9 Defining three NTP servers rather than two is best practice. These three are default values and should be changed to meet customer requirements.
10 A maximum of three DNS servers can be defined.
11 The most recent entry should be the command to set the maintenance window.


2. Log on with the username: pureuser and password: pureuser

3. Click on the “Storage” link in the navigation pane

4. Click on the “Hosts” tab in the Storage pane

5. Click the plus (+) near the far right

6. Click “Create Multiple.” This option allows an administrator to create many similarly named hosts in one step.

7. Provide the requested information as seen in the screenshot below.

Task 2: Assign Fibre Channel Port World-Wide Names (PWWN)

1. Click the link for OraSrv01. This will take you to the host configuration window. You’ll see any volumes connected to this
host, the Protection Groups this host belongs to, addresses for the host, and additional details about the host. These
fields are all empty right now. Let’s add the WWNs for this host.

2. Click the ellipsis next to Host Ports.

3. Click “Configure WWNs.” The FlashArray supports connectivity from Fibre Channel, iSCSI, or NVMe over Fabrics
(NVMe/oF) initiators.

4. We don’t have any actual FC initiators in this environment but have created 10 virtual initiator ports. The FlashArray will
query the FC NameServer to discover any existing, properly zoned initiator WWNs that have not already been assigned to
a host and display them for selection. Pretending each of our Oracle servers has two FC ports, select the WWNs that end
with 1 and 2 then click Add, as seen in the screenshot.


5. You will be taken back to the Host Configuration screen where you will now see the assigned WWNs listed.

6. Click the Hosts tab to return to the list of hosts and repeat the steps to assign WWNs to OraSrv02 (WWNs ending in 03 &
04) and OraSrv03 (WWNs ending in 05 & 06).

7. Your Hosts tab should now look like the screenshot below. What is the protocol being used to communicate with these
hosts? How does the array know this? 12
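The same host and port setup can also be scripted from the Purity CLI. The sketch below is illustrative only: the --wwnlist option and its comma-separated format are assumptions modeled on the --iqnlist syntax used later in this guide, so confirm the exact options with purehost create -h on your array.

purehost create OraSrv01 --wwnlist 00:01:00:01:00:01:00:01,00:02:00:02:00:02:00:02
purehost create OraSrv02 --wwnlist 00:03:00:03:00:03:00:03,00:04:00:04:00:04:00:04
purehost create OraSrv03 --wwnlist 00:05:00:05:00:05:00:05,00:06:00:06:00:06:00:06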

Task 3: Add Hosts to a Host Group

1. From the Hosts tab, click the plus in the “Host Groups” pane.

2. Name the group OraRAC and click “Create.”

12 The Interface column shows “FC” to indicate that the host is connected to the FlashArray via Fibre Channel. This is because we assigned FC WWNs to the hosts.


3. Click the link for the OraRAC host group. Here you will see any hosts that belong to this host group, volumes that are
shared across hosts in the group, and protection groups to which the host group belongs.

4. Click the ellipsis next to “Member Hosts” and click “Add.”

5. This brings you to a selection window where you can add hosts that are not currently in a host group. Select the servers
we previously defined (OraSrv01, OraSrv02, and OraSrv03) and click “Add.”

TIP: You can select all available hosts by checking the box at the top of the column.

CHECKPOINT – Ensure that your Hosts window matches the screenshot below. Validate the Host Group and Interface
columns. If your configuration matches, congratulations! You may now proceed to Task 4.
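If you prefer the CLI, the host group can be created and populated in one step. This is a sketch only; the --hostlist option is an assumption, so check purehgroup create -h for the exact syntax on your Purity version.

purehgroup create OraRAC --hostlist OraSrv01,OraSrv02,OraSrv03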


Task 4: Create Volumes

1. Click the Volumes tab.

2. Click the plus sign in the Volumes pane.

3. Name the volume OraData and configure the size as 200G.

4. There are options to configure QoS for the volume as well as a container for the volume, such as an ActiveCluster pod or a
volume group. We will use these features in later exercises but leave them blank for now. Click “Create.”

5. Repeat these steps for another 200G volume named OraRedo.

6. Now click the plus sign to create another volume but this time
click “Create Multiple.”

7. Just like the option to create multiple hosts, this option allows us
to create many volumes with a similar naming convention and the
same size. Name the volumes OraVol# with a size of 50G. Start at
1 and create 3 with two digits in the name, as seen in the
screenshot here.

8. Click “Create.”
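The equivalent CLI follows the same purevol create pattern used later in Exercise 3; as a quick sketch:

purevol create OraData OraRedo --size 200g
purevol create OraVol01 OraVol02 OraVol03 --size 50g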

Task 5: Connect Volumes to Hosts

1. Click the Hosts tab. Connecting hosts and volumes can be done from the Volumes tab or the Hosts tab. We will use the
hosts tab.

2. Click the link for OraRAC in the Host Groups pane.

3. Click the ellipsis next to “Connected Volumes” and click “Connect.”

4. A familiar selection window will appear. Select the OraData and OraRedo volumes.

5. What is listed in the LUN field? __________________________

6. Click “Connect.”

7. This brings you back to the OraRAC configuration window. What are the LUN IDs assigned to the two volumes?

8. OraData: _______

9. OraRedo:_________

We have added the shared access volumes for all three nodes in the RAC cluster. All three hosts will be able to read
and write to these volumes. We now need our private volumes for each server.

1. In the “Member Hosts” pane, click the link for OraSrv01.

2. Click the ellipsis next to “Connected Volumes” and click “Connect.”

3. A familiar window again appears. What is listed in the LUN field? _________________

4. Select OraVol01 and click “Connect.”


5. What is the LUN ID for OraVol01? __________________

6. Explain the “Shared” column and its respective values:


________________________________________________________________________________________________________________________________
_______________________________________________________________________________________________________________________________ 13

7. Return to the Hosts tab and repeat the private-volume connection steps above for OraSrv02 and OraSrv03 with their corresponding volumes.

CHECKPOINT – Ensure that your Hosts window matches the screenshot below. Validate the “# Volumes” column for
both Hosts and Host Groups panes. If your configuration matches, congratulations! You have completed this exercise
and may move on to Exercise 3.
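For reference, the same connections can be made from the CLI. Treat this as a sketch: the --hgroup option on purevol connect is an assumption (the shared volumes could alternatively be connected from the host group side), so verify with purevol connect -h first.

purevol connect OraData OraRedo --hgroup OraRAC
purevol connect OraVol01 --host OraSrv01
purevol connect OraVol02 --host OraSrv02
purevol connect OraVol03 --host OraSrv03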

Exercise 3: iSCSI Host and Volume / Estimated Completion Time – 45 minutes


Learning Objectives

• Add an iSCSI initiator and connect it to multiple volumes

• Add volumes to a volume group for reporting purposes

• Mount the volume in Windows and start a workload

Overview

In this exercise we will use the following sample scenario: A single Windows server is running an application using iSCSI-
attached storage presented from multiple volumes. We want to show performance for all volumes attached to the server. We
will be using the Windows VM as our application server, IOMeter as the application, volumes presented from the FlashArray VM
as the storage, and a volume group to aggregate the performance statistics. Here is the final state for those who are allergic to
step-by-step instructions. For the rest, detailed steps follow. Since we’ve already created a host and volume connection in the
previous step using the GUI, this guide will walk you through using the CLI to perform these steps this time.

Final state: host AppServer (IQN iqn.1991-05.com.microsoft:host-1) connected to three 100G volumes (AppData1, AppData2, and AppData3) contained in the AppData volume group.

Task 1: Create the Volumes inside the Volume Group

1. From the Windows Desktop, open Putty.

2. In the Hostname field, type the IP Address 10.1.1.11 and click “Open.”

3. Accept the ssh key if prompted.

4. Login using the default username and password (pureuser / pureuser)

13 By default, shared LUNs (LUNs presented to more than one host) are numbered starting at 254 and decrementing the LUN ID by one for each shared LUN. Private LUNs (LUNs presented to only one host) are numbered starting at 1 and incrementing by one for each LUN presented to the host.


5. Type purehelp to get a full list of available commands.

purehelp

6. Type pureman purevol to access the manual page for the purevol command

pureman purevol

7. Type purevol create -h to get syntax help for this command. Manual pages and command help are available for any
command. You can use the -h option to list help for any subcommand and Tab completion also works for most
commands.

purevol create -h

8. Create three volumes, each 100G, called AppData1, AppData2, and AppData3 using the following command: purevol
create AppData1 AppData2 AppData3 --size 100g

purevol create AppData1 AppData2 AppData3 --size 100g

9. List your volumes: purevol list and verify the output below:

purevol list

Task 2: Create the Host and connect it to the volumes

1. In order to create the host, we will need its IQN. Find the server’s IQN by clicking on the Start Menu. Type “iscsi” and open
the iSCSI Initiator.

2. Click the configuration tab. The window should look like this. See screencap.


3. What is the server’s IQN? __________________________14

4. Leave this window open in the background, as you will return to it in Task 3. Now return to your Putty session and add the
host using the following command (if different, change the IQN to match what you recorded above): purehost create
AppServer --iqnlist iqn.1991-05.com.microsoft:host-1

purehost create AppServer --iqnlist iqn.1991-05.com.microsoft:host-1

5. Connect the host to the volumes you created earlier: purevol connect AppData1 AppData2 AppData3 --host
AppServer

purevol connect AppData1 AppData2 AppData3 --host AppServer

CHECKPOINT – Ensure that your host has been properly connected to your volumes. LUN IDs may differ slightly; that’s
fine. Otherwise, if your configuration matches, congratulations! You may now close your Putty session and proceed to
Task 3.
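If you want to double-check from the CLI before closing Putty, listings similar to the following should show the AppServer connections. The --connect option is an assumption; purevol list -h will show the exact flags for your Purity version.

purevol list --connect
purehost list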

14 iqn.1991-05.com.microsoft:host-1


Task 3: Mount the volumes in Windows

1. Return to the iSCSI Initiator Properties window. Click the Targets tab.

2. In the Target field, type 10.1.1.10 and click Quick Connect.

3. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.

4. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed as in the screenshot:

This is a basic iSCSI connection and is all we need for lab purposes.
There is additional information for adding sessions, enabling
multipathing, and other best practices here:

https://support.purestorage.com/Solutions/Microsoft_Platform_Guide/aaa_Quick_Setup_Steps/Step_05.1_--_Setup_iSCSI_Connectivity

5. Click OK to close the iSCSI Initiator Properties window.

6. Right-click the start menu and click Run

7. Type diskmgmt.msc and click OK.

8. The Disk Management window will appear. A window to initialize a new disk may appear. If so, click Cancel.

9. At the top of the Disk Management window, click Action > Rescan Disks

10. Once the rescan is completed, ensure that Disk1, Disk2, and Disk3 all have a state of “Online” or “Not Initialized.” If any are
offline, right-click the disk label and click Online.

11. Once all disks are online, right-click Disk 1 at the far left, then click “Initialize Disk.”

12. In the Initialize Disk window, all three new volumes should be discovered. Leave the default selections and click OK.

13. Once all disks are initialized, right-click the partition next to Disk 1 and click “New Simple Volume” to mount the volume
presented from the FlashArray.

14. Leave all values as default and continue through the prompts. Repeat this process for the two remaining disks.

CHECKPOINT – Ensure that your Disk Management window appears like the screenshot below before closing the
window and moving on to Lab Exercise 4.


Exercise 4: Analyzing Performance / Estimated Completion Time – 45 minutes


Learning Objectives

• Start a workload using IOMeter

• Add volumes to a volume group for reporting purposes

• Analyze performance

Overview

In this exercise we are going to start running a workload using IOMeter on the Windows server and analyze the performance
statistics using the GUI. Note: Remember that this is a virtual array with cloud-based storage so the performance you see in
this lab exercise will be greatly inferior to the performance you would achieve on a physical FlashArray.


Task 1: Start the workload

1. Double-click the iometer shortcut on the Desktop.

2. Right-click the Host and select “Refresh Target Lists.”

3. Expand the plus next to Host and select Worker 1

4. Check the box next to “E:” in the Targets window to the right.

5. Select Worker 2 then check the box next to “F:” in the Targets window.

6. Select Worker 3 then check the box next to “G:” in the Targets window.

7. Click the green flag at the top of the window and click “cancel” on the pop-up.

8. Once you see “Run 1 of 1” in the bottom right corner, your workload is running.

Task 2: Analyze performance

1. If not already logged in, open a browser window and navigate to 10.1.1.11.

2. Login using the default credentials: pureuser / pureuser

3. You should see performance statistics in the Dashboard view now. This shows the workload currently running from the
Windows server.

4. What are the available intervals for viewing performance statistics in the Dashboard?
_________________________________________________________________ 15

5. Click “Performance” under Analysis in the navigation pane.

6. What are the maximum and minimum intervals for viewing performance statistics in the Analysis window?
_________________________________________________________________ 16

7. Change the interval to 5 Minutes.

8. Place the cursor over a point on the performance graph. Note the following:

9. Write Latency: _____________________________

10. Queue Time: ______________________________

11. Write IOPS: _______________________________

12. Write Bandwidth: ___________________________

13. What are the three IO Types next to the interval drop-down?
__________________________________________________________________ 17

14. Uncheck “Read” and “Mirrored Write.”

15. Place the cursor over a point on the performance graph. Note the additional statistics now available when only the IO type
of “Write” is selected:

15 5 Minutes and 24 Hours
16 Minimum of 5 minutes to a maximum of 1 year of historical performance data.
17 Read, Write, Mirrored Write


16. SAN Time: _______________________________

17. QoS Rate Limit Time: _______________________

18. Write Average IO Size: ______________________

19. Hover over the Help link in the navigation pane and click the FlashArray User Guide.

20. Click “Using the GUI to Administer a FlashArray” then “Analysis” then “Performance.”

21. Summarize in your own words the definition of “SAN Time”:


_________________________________________________________________ 18

22. Return to the Array GUI tab and click Volumes at the top of the Performance pane.

23. Here you’ll see the same performance statistics available for each individual volume.

24. In the case of our Windows server there are three volumes of interest, our AppData volumes. When we select these three
volumes, we see three lines (Read, Write, Mirrored Write) for each individual volume on the performance graph. We want
to see the aggregated performance for all the volumes connected to our AppServer. This is the main purpose of the
“volume group” container.

25. Click “Storage” in the navigation bar, then click “Volumes.”

26. Click the plus sign in the Volume Groups pane.

27. Name the group AppData and click “Create.”

28. Click the link for AppData in the Volume Groups pane.

29. Click the ellipsis in the Volumes pane and click “Move In…” as shown in the screenshot.

30. Select the three AppData volumes and click “Move.”

31. Click “Performance” in the navigation pane again and click Volumes.

32. Change the “Volumes” drop-down to “Volume Groups” and select AppData.

18 SAN time measures latency external to the array, including host latency.


33. We now see a single set of lines representing the aggregate performance for all of the volumes in the volume group. You
may need to wait a minute or two for the graph to populate because we will only see statistics from the point the volume
group was created.

34. Return to the IOMeter application window by clicking its icon in the taskbar.

35. Click Stop to end the workload.

36. Close the IOMeter window.
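As with hosts and volumes, volume groups can also be managed from the CLI. The sketch below assumes a purevgroup create command and a purevol move command for placing existing volumes into the group; confirm the exact syntax with purehelp before relying on it.

purevgroup create AppData
purevol move AppData1 AppData2 AppData3 AppData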

Exercise 5: Snapshots / Estimated Completion Time – 45 minutes


Learning Objectives

• Learn how to create local snapshots using GUI and CLI

• Describe how ZeroSnap Snapshots are space efficient

• Identify how snapshots are used on the array to create critical restore points

• Learn how to recover from existing snapshots

Overview

In this exercise, we will use the previous sample scenario and create a local snapshot for the AppData1 volume as a critical
restore point for the application server. We will simulate a scenario in which someone accidentally deleted important files from
the AppData1 volume and now needs to restore from a snapshot.

Final state: host AppServer (IQN iqn.1991-05.com.microsoft:host-1) connected to the three 100G AppData volumes (AppData1, AppData2, and AppData3) in the AppData volume group, with a snapshot of AppData1 serving as the restore point.

Task 1: Take a snapshot of AppData1 volume

1. Create a new file in New Volume (G:)

2. Right-click the Start menu and click Run

3. Type cmd to open the Windows command prompt

4. Type echo "This is a really important file that I hope never gets deleted!" > g:\important.txt and hit Enter

5. Type for /L %i in (1,1,24) do type g:\important.txt >> g:\important.txt and hit Enter. This operation will take some time to
complete, as it is populating the new g:\important.txt file with roughly 2GB of repeating data. As a result, the AppData1
volume will now have a high deduplication rate and your overall Shared space on the array will increase.

6. If not already logged in, open a browser session to https://10.1.1.11 to access the GUI


7. Log in with the username: pureuser and password: pureuser

8. The Capacity pane in the Dashboard will display some “Unique” and “Shared” space consumed, as in the screenshot
below. 19

9. Click on the “Storage” link in the navigation pane

10. Click on the “Volumes” tab in the Storage pane

11. Note the “Volumes” space consumed for AppData/AppData1: _______________________.

Do not proceed until you see some volume space reflected in the AppData1 volume

12. Click the link for AppData/AppData1

13. Click the plus + in the “Volume Snapshots” pane

14. Click “Create”

19 It may take about 10 minutes for the capacity change to be reflected in the GUI.


15. You should see a snapshot created as seen below with zero space consumed. Why does the snapshot size show 0.00? 20

16. You can also create snapshots using the CLI or REST API. The audit log captures all changes to the system in CLI syntax.
See the snapshot example below:

To see the Audit log go to Settings > Access and view the Audit Trail. The CLI omits the “Name” column so the
equivalent CLI command would be purevol snap AppData/AppData1.
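From the CLI, the same snapshot can be taken with purevol snap. Adding a suffix makes the restore point easier to identify later; the --suffix option shown here is an assumption, so confirm it with purevol snap -h.

purevol snap AppData/AppData1
purevol snap AppData/AppData1 --suffix restorepoint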

Task 2: Recover data from a snapshot

1. Now delete the g:\important.txt file. Shed a tear or two, then empty the Recycle Bin.

2. Now go to linkedin.com and update your profile … No, wait! Go to Storage > Volumes and click the link for
AppData/AppData1.

3. See the snapshot you took earlier in the Volume Snapshots pane. Breathe a sigh of relief. Happy dance is optional but if,
like Super Mario at the top of a magic beanstalk, you feel so moved, see screenshot below for reference.

4. Now let’s recover that data! Click the ellipsis next to the latest snapshot in the “Volume Snapshots” pane and click
Restore.

20 This is because the volume’s data has not changed since taking the snapshot. The snapshot only protects the existing data; it does not create an additional copy of the existing data.


5. Notice the warning. If we had active I/O writing to this volume, a “Restore” would overwrite any changes more recent than
the snapshot. Click Cancel.

6. Instead let’s create a copy (even though we haven’t made any changes since the snapshot). Repeat step 4 but click Copy.

7. Name the copy AppData1_clone and click Copy. (Leave the volume group as AppData.)

8. Go back to the Storage > Volumes window and click the link for your clone.

9. Click the ellipsis in the Connected Hosts pane and connect AppServer.

10. Right-click the start menu and click Run

11. Type diskmgmt.msc and click OK

12. The Disk Management window will appear. You should see Disk 4 (Offline). Right-click the disk label and click Online

13. You should see a new volume presented to the host


14. Copy h:\important.txt to g:\.

15. You can now disconnect, destroy, then eradicate the AppData1_clone volume:

16. Go to Storage > Volumes and click on AppData1_clone

17. Click the ellipsis at the top and click Destroy

18. Click Destroy in the confirmation window that pops up.

19. Attempting to destroy a connected volume will generate the message below.

20. Click Cancel

21. Click the x next to AppServer in the Connected Hosts pane and click Disconnect in the confirmation that pops up.

22. Now repeat the Destroy steps above (steps 17 & 18). Notice at the bottom of the Volumes pane, the number next to Destroyed has changed to (1).
Destroyed volumes are kept in a “pending eradication” state for 24 hours. Once the timer expires, the volume and all its
data (including snapshots) are expunged and no longer available for recovery.


23. Click the drop-down arrow to show the destroyed volumes.

24. Click the trash icon to “Eradicate” the volume and click Eradicate in the resultant confirmation window.
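The recovery workflow in this exercise can also be scripted. The sketch below is illustrative: the snapshot name is a placeholder for whichever snapshot you created, and the exact options for copy, disconnect, destroy, and eradicate should be verified with -h before use.

purevol copy AppData/AppData1.<snapshot-suffix> AppData/AppData1_clone
purevol connect AppData/AppData1_clone --host AppServer
purevol disconnect AppData/AppData1_clone --host AppServer
purevol destroy AppData/AppData1_clone
purevol eradicate AppData/AppData1_clone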

Exercise 6: Asynchronous Replication / Estimated Completion Time – 45 minutes


Learning Objectives

• Use Protection Groups to schedule snapshots and async replication

• Connect two Flash Arrays for replication purposes

• Replicate data to a target array using a Protection Group replication schedule

Overview

In this exercise, we will build on the previous sample scenario. Instead of relying only on local snapshots, we will use a
Protection Group to schedule snapshots of the volumes connected to AppServer and replicate them to a second FlashArray.

You will need to power on FlashArray VM2 host from your lab portal. Click the Play icon and wait for the array to boot.

Access to the array will still be accomplished by logging in to Windows Server1. Credentials are as follows:


Credentials Username Password

Windows Server1 (10.1.1.12) Administrator pureuser

FlashArray VM1 (10.1.1.11) pureuser pureuser

FlashArray VM2 (10.1.1.21) pureuser pureuser

In this exercise, we would like to provide remote protection for all volumes connected to AppServer. We have added a second
array (normally at a DR or test site) as a replication target.

Task 1: Create a protection group and add a host to it

1. If not already logged in, open a browser session to https://10.1.1.11 to access the GUI

2. Log on with the username: pureuser and password: pureuser

3. Click on the “Protection” link in the navigation pane

4. Click on the “Protection Groups” tab

5. Click the plus + in the Source Protection Groups pane

6. Type “ProtectionGroup” as Name and click “Create”

7. Click the link for ProtectionGroup

8. Click the ellipsis next to the “Members” pane. What are the options? _____________________________________________________ 21

9. Click “Add Hosts”

10. Select AppServer and click “Add”

21 Hosts, Host Groups, Volumes – Volumes will include only the specified volumes in the snapshot schedule. Hosts will include current and future volumes connected to the specified host. Host Groups will include current and future volumes connected to all current and future Hosts in the specified Host Group.


Task 2: Enable local snapshot scheduling

1. Click the edit in the Snapshot Schedule pane

2. Enable Snapshot Schedule, leaving all values default, as seen below and click “Save”

3. Please wait for 10-20 seconds until PurityVM01 creates its first snapshot of ProtectionGroup. This happens automatically
as a result of enabling the Snapshot Schedule. New snapshots can be seen in the Protection Group Snapshots pane.

4. Click the link for ProtectionGroup.1. You will see the snapshots of all volumes connected to AppServer as follows:


5. Close the window and proceed to Task 3.
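The protection group configuration above has CLI equivalents as well. As a sketch (the --hostlist option is an assumption; purepgroup create -h will confirm it), you could create the group, add the host, and take an on-demand group snapshot like this:

purepgroup create ProtectionGroup --hostlist AppServer
purepgroup snap ProtectionGroup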

Task 3: Establish a connection to a second FlashArray for the purpose of Asynchronous Replication.

1. Access PurityVM02 by opening a new browser tab to https://10.1.1.21

2. Click Advanced Options and continue to the site as done previously for PurityVM01.

3. Log in with the username: pureuser and password: pureuser

4. Click on the “Storage” link in the navigation pane

5. Click on the “Array” tab in the Storage pane if not already there

6. Click the ellipsis in the Array Connections pane and click Get Connection Key.

7. The connection key is a globally unique identifier for this specific array, ensuring that the target array matches the
intended target. Click Copy, then OK.

8. Switch back to your PurityVM01 tab.


9. Click on the “Storage” link in the navigation pane.

10. Click on the “Array” tab in the Storage pane if not already there.

11. Click the + in the Array Connections pane

12. Fill the values using the following:

Management Address: 10.1.1.21

Type: Async Replication

Connection Key: Paste the value you copied in step 7

13. Click “Connect”

14. If the network between the two arrays is connected, PurityVM02 will now appear with a “connected” status in the Array
connections pane.
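Connecting arrays can also be done from the CLI with purearray connect. The option names below are assumptions based on the fields in the GUI form, and the connection key is a placeholder, so treat this as a sketch and confirm the syntax with purearray connect -h.

purearray connect --management-address 10.1.1.21 --type async-replication --connection-key <key-from-PurityVM02>
purearray list --connect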

Task 4: Enable replication of ProtectionGroup data to target array

1. On PurityVM01 (10.1.1.11)

2. Click on the “Protection Groups” tab in the Protection pane

3. Click on the “ProtectionGroup” link in the Source Protection Groups pane

4. Click the ellipsis in the Targets pane, click “Add”

5. Select “PurityVM02” and click “Add”


6. Click edit in the Replication Schedule pane

7. Enable Replication Schedule, leave all values default, and click “Save”

8. Switch to the target array – PurityVM02 (10.1.1.21)

9. Click on the “Storage” link in the navigation pane

10. You will see the source array that has established the connection to this target array

11. Navigate to Protection > Protection Groups. You will see the protection group “PurityVM01:ProtectionGroup” in the Target
Protection Groups pane. You will also see that a snapshot has been replicated in the Target Protection Group Snapshots
pane. Notice the prefix of the snapshot indicates the source array.


12. Click on the “PurityVM01:ProtectionGroup” link in the Target Protection Groups pane.

13. You will find the settings of this protection group as they are on the originating array. However, here on the “Target” array,
they cannot be changed.


14. Click the Transfer tab in the Protection Group Snapshots pane and note the amount of data transferred. Progress should
be at 100% (if not, wait until it is).

15. Click on “PurityVM01:ProtectionGroup.2” in the Protection Group Snapshots pane.

16. Notice the snapshot name shows the source array as well as the protection group name, volume group name, and volume
name for each of the three volumes connected to AppServer on PurityVM01.

17. Click on the icon to copy the individual volume snapshot into a new volume in the “Volume Snapshots” popup window.
Snapshots can be copied to new volumes on this “DR” array and mounted to test or validation servers, or used for
recovery.

18. Cancel the “Copy Snapshot” popup window and close all application windows.
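On either array you can also check replication progress from the CLI. The options shown here are assumptions; if they are not available on your Purity version, purepgroup list -h will show the equivalent.

purepgroup list --snap --transfer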

Exercise 7: Near Synchronous Replication with ActiveDR / Estimated Completion Time – 45 minutes
Learning Objectives

• Connect a new volume to Windows Server2

• Configure an ActiveDR Pod

• Create a Pod Replica Link between two arrays to allow near-sync replication

• Verify Functionality

Overview

In this exercise, we will use our async replication connection to enable near-sync replication of a new volume presented to
Windows Server2.

You will need to power on Windows Server2 from your lab portal. Click the Play icon and wait for the server to boot.


Access to the array will still be accomplished by logging in to Windows Server1. Credentials are as follows:

Credentials Username Password

Windows Server1 (10.1.1.12) Administrator pureuser

Windows Server2 (10.1.1.22) Administrator pureuser

FlashArray VM1 (10.1.1.11) pureuser pureuser

FlashArray VM2 (10.1.1.21) pureuser pureuser

Task 1: Create a Pod containing a new volume connected to Windows Server2

A pod is a management container for a group of volumes that can be replicated between two arrays with ActiveDR (or
ActiveCluster). A pod serves as a consistency group that is created for replication purposes. When a pod is replicated between
two arrays, all volumes within the pod will be write-order consistent on the target array. For more information on pods, see the
User Guide.

1. Log in to Windows Server2.

2. Close any windows that appear automatically on boot.

3. Open Chrome and login to FlashArray VM2 using the information in the overview above.

4. Go to Storage > Pods

5. Click on the + icon to add a new Pod.


6. Name the pod DR-Pod and click Create.

7. Click on DR-Pod in the “Pods” pane. This will show the details for the Pod. There are currently no volumes in the pod. We
can either create a new volume or move an existing volume into the pod to allow it to be replicated.

8. Click on the + icon to create a new volume using the settings in the screenshot below.

9. Click Create to create the new volume and add it to the DR-Pod in a single step.

10. We will now connect our new volume to Windows Server2. Click the Hosts tab at the top of the Storage window.

11. Click on the + icon to create a new host called Windows2.

12. Click the link for Windows2 in the Hosts pane.

13. Click the ellipsis next to Connected Volumes and click Connect.

14. Select DR-Pod::WinVol1 and click Connect.

15. Click the ellipsis next to Host Ports and click Configure IQNs.


16. Click on the Start Menu, type “iscsi” and open the iSCSI Initiator.

17. Click the configuration tab. Note the IQN: ________________________________22

18. Leave this window open as you will return to it shortly.

19. Return to your Chrome window and add the IQN in the “Port IQNs” field.
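Task 1 can be condensed into a few CLI commands on PurityVM02. This is a sketch: the 100G size stands in for the value shown in the screenshot, the pod-qualified volume name follows the Pod::Volume convention used later in this task, and the exact options should be verified with -h before use.

purepod create DR-Pod
purevol create DR-Pod::WinVol1 --size 100g
purehost create Windows2 --iqnlist iqn.1991-05.com.microsoft:host-2
purevol connect DR-Pod::WinVol1 --host Windows2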

Task 2: Generate a workload to the new volume connected to Windows Server2

With the host and volume configured properly on the FlashArray, we can now discover the volume and start a workload using
IOMeter on Windows Server2.

1. Return to the iSCSI Initiator Properties window. Click the Targets tab

2. In the Target field, type 10.1.1.20 and click Quick Connect.

3. If you get an error that there were no available targets, check the “Discovered targets” window. If the IQN shown in the
screenshot in step 4 is listed with a status of “Reconnecting,” highlight it and click “Disconnect.”

4. Once the status shows “Inactive” click “Connect” again and proceed to step 5.

5. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.

22 iqn.1991-05.com.microsoft:host-2


6. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed as the following:

7. Click OK to close the iSCSI Initiator Properties window.

8. Right-click the start menu and click Run.

9. Type diskmgmt.msc and click OK.

10. The Disk Management window will appear.

11. You should see Disk 1 has been detected and is offline. Right-click Disk 1 and click “Online.”

12. Right-click Disk 1 again and click Initialize. Click OK in the popup window.

13. Right-click the partition next to Disk 1 and click New Simple Volume. Click Next to accept all default settings and format
the disk.

14. Once formatting is completed, close the Disk Management window.

15. Double-click the iometer shortcut on the Desktop.

16. Right-click the Host and select “Refresh Target Lists.”

17. Expand the plus next to Host and select Worker 1

18. Check the box next to “E:” in the Targets window to the right.

19. Click the green flag at the top of the window and click “cancel” on the pop-up.

20. Once you see “Run 1 of 1” in the bottom right corner, your workload is running.


21. Return to your Chrome browser window.

22. Click Protection > ActiveDR

23. Click on the + icon under the Pod Replica Links section of the Pod configuration:

24. Select DR-Pod as the local pod name and PurityVM01 as the Remote Array.

25. As we have not yet created a Pod on the remote array, click on Create Remote Pod, enter a name for the Pod on
the remote array (e.g., Remote-DR), and click OK:

26. Click Create to create the Replica Link between the two arrays.


27. The Status field for the Pod Replica Link should initially show “baselining” while the initial data sync completes, after
which it should change to “replicating.” The Lag field shows the delay between data being written to the primary array
and being replicated to the remote array.

28. Congratulations! You have successfully configured ActiveDR!

Task 3: Connect DR array (PurityVM01) to DR host (Windows Server1) to prepare for failover test
1. Login to Windows Server1, open a browser session to PurityVM01 (10.1.1.11)

2. Log on with the username: pureuser and password: pureuser

3. Go to Storage > Pods and you will see the target Pod has automatically been created on this array.

4. In addition to being able to monitor replication under the Pods tab, there is also a dedicated ActiveDR section under
Protection. Click the Protection menu item at the left of the window, and then click on the ActiveDR heading to view all
ActiveDR configurations:

5. This section shows details such as the status (“replicating”) and lag of the replication. Also shown on the screen is the
fact that the Pod local to this array, Remote-DR, is currently “demoted.” The data on volumes in a demoted pod is not
available to be accessed by a host, although the volumes themselves can be connected to a host in order to minimize the
effort required in the event of a failover. This is what we will do.


6. Still on the target array, look at the Volumes (Storage > Volumes) on this array and you'll see the target volume in the pod.

7. When you click the link for this volume, you will see that this volume currently has no Host Connections configured:

8. Click the ellipsis next to Connected Hosts, then select Connect...

9. Select AppServer and click Connect.

10. Right-click the start menu and click Run

11. Type diskmgmt.msc and click OK

12. The Disk Management window will appear

13. Disk Management will show a total of 5 disks plus the CD-ROM (you may need to scroll down to see them all). Disk 0 is
the system’s 30GB boot disk, Disks 1-3 are the AppData volumes, Disk 4 is the newly connected DR volume from the
source array.

14. Right-click Disk 4 and click Online.


15. Because this volume is currently in a demoted state on PurityVM01, the disk appears as Read Only to this DR server

16. Right-click the New Volume (H:) partition and click “Open.” You should see the iobw.tst file from the IOMeter test on the
source array (PurityVM02) but you will not be able to create or change any files from this target side.

Task 4: Perform Disaster Recovery Test


1. Login to Windows Server2.

2. Open File Explorer and browse to “New Volume (E:)”


3. You will see the iobw.tst file. Right-click any empty area in the window and click New > Text Document, as in the
screenshot below.

4. Name the file SourceFile then open the file by double-clicking on it. Type “This is from the source.” and save the file.

5. Return to Windows Server1.

6. Browse to the H:\ drive. Because this drive has been mounted as Read-only on Windows Server1, you will not see the new
file. This volume is for validation of data at the time it was mounted and will not reflect changes until it is taken offline and
rediscovered.

7. Open Disk Manager again if it is closed.

8. Right-click Disk 4 and click Offline.


9. Click “Action” at the top of the window and click Rescan Disks.

10. Now bring Disk 4 online again.

11. Now when you browse, you will see the new SourceFile.txt file.

12. Open SourceFile.txt and add a line “Updated from the target.” and click the x to close the file.

13. Click Save.

14. As expected, we get an error that this disk is write-protected and we can’t update it. Cancel the save operation and close
the file without saving your change.

15. Re-open the Chrome web browser and connect to the GUI for PurityVM01 (10.1.1.11).

16. Login with pureuser / pureuser.

17. Click on Protection on the left menu, and then ActiveDR along the top.

18. The current status of the Pod Replica Link will be shown, which should show the volume as replicating from the Remote
Pod to the Local (Demoted) Pod.

19. In order to perform a DR test we need to 'promote' the target copy of the Pod, without changing the status of the source
Pod. This will cause the array to make the data on the target LUN accessible (read/write) with the current state of the
replicated data. Replication to the target array will continue to occur, however this newly replicated data will not be visible
to the target LUN until after it is again demoted.

20. Select the 3 vertical dots icon beside the Pod you wish to promote, and then select Promote Local Pod...


21. Confirm you want to promote the Pod by selecting Promote.

22. After a few seconds the status of the pod should change to Promoted. The Replication status, direction and lag should
remain unchanged as replication is still occurring in the background.

23. Return to the Windows Desktop and open Disk Management again.

24. Right-click Disk 4 and select Offline.

25. Rescan Disks again (Action > Rescan Disks)

26. Now bring Disk 4 back online. Once online, right-click the (H:) partition and click “Open.”

27. Open SourceFile.txt and add the line “Updated from the target.”

28. Save and close the file. This time the save is successful. However, this change will not be reflected in the volume once the
target is again demoted.

29. Continue to test additional failure scenarios listed in the optional task or skip to task 5.
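The promote and demote operations used in this DR test also exist as CLI verbs. As a sketch, run them on the array that owns the pod you are changing (PurityVM01 for Remote-DR in this test); additional options such as a quiesce flag may apply on demote, so check purepod -h.

purepod promote Remote-DR
purepod demote Remote-DR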

Optional Task: Perform Additional Disaster Recovery Testing

1. At this point there are several additional failure scenarios you can test if you’d like.

2. For example, while PurityVM01 is still promoted, test failover.

3. Create a new file called H:\TargetFile.txt with the content “This is from the target.”

4. Save and close the file.

5. Verify that the file exists on Windows Server1 but not Windows Server2.

6. Demote the pod on PurityVM02 and verify that the new file gets replicated to Windows Server2. Remember you may need to “Offline” then
“Online” the disk on Windows Server2 to see the changes.

7. You might also consider disabling the replication ports on either side and see what happens to the “Lag” field in the pod.

8. You could also see what happens to the “Mediator” field if you disable the management port.

9. Once you have finished exploring the nearly endless possibilities, proceed to Task 5.

Task 5: Cleanup

1. Once the DR test has been completed, we will need to return the system to its normal state.

2. Return to Windows Server2.

3. Open a browser session to the GUI for PurityVM02 (10.1.1.21).

4. Click Protection > ActiveDR


5. Click the ellipsis next to DR-Pod under Pod Replica Links and click Delete.

6. Click Delete in the confirmation window.

7. Click Storage in the navigation pane.

8. Under Array Connections, click the x at the far right next to the connection to PurityVM01 to disconnect the replication
connection between the two arrays.

9. Click Disconnect in the confirmation window.

10. Depending on the optional failure scenarios you tried, you will most likely get an error as seen in the following screenshot.

11. Click Cancel and open a new tab to PurityVM01 (10.1.1.11).

12. Login with pureuser / pureuser

13. Repeat steps 7-9 above to disconnect the arrays.

14. Move on to Exercise 8 to configure ActiveCluster.


Exercise 8: Synchronous Replication with ActiveCluster / Estimated Completion Time – 45 minutes


Learning Objectives

• Connect Arrays for synchronous replication

• Configure an ActiveCluster Pod

• Stretch the pod between two arrays to allow synchronous replication

• Present stretched volume to additional server

Overview

In this exercise, we will create a connection between our arrays for synchronous replication. We will pretend that we have two
separate teams in the same campus that need to work off the same dataset. We will use a separate array for each team but
mirror the data between them using ActiveCluster.

We will be using all lab resources for this exercise.

Access to the arrays will initially still be accomplished by logging in to Windows Server1. Credentials are as follows:

Credentials Username Password

Windows Server1 (10.1.1.12) Administrator pureuser

Windows Server2 (10.1.1.22) Administrator pureuser

FlashArray VM1 (10.1.1.11) pureuser pureuser

FlashArray VM2 (10.1.1.21) pureuser pureuser

Task 1: Create synchronous replication connection

1. Log in to Windows Server1 if not already connected.

2. Open two tabs in Chrome, one to PurityVM01 (10.1.1.11) and one to PurityVM02 (10.1.1.21).

3. Log in to both arrays using the default credentials: pureuser/pureuser

4. In the PurityVM02 browser tab click the ellipsis in the Array Connections pane and click “Get Connection Key.”

5. Copy the key, close the window and return to the PurityVM01 browser tab.

6. On PurityVM01, click the + in the Array Connections pane.


7. Type the Management Address for PurityVM02: 10.1.1.21

8. Change Type to Sync Replication and paste the connection key from step 5.

9. Click “Connect.”

NOTE: If you get an error that the array can’t connect to 10.1.1.21, cancel and repeat steps 4 through 9.

10. Once the Array Connections pane shows the sync-replication connection, click the Paths tab and verify that all replication
endpoints show “connected.”

11. Return to PurityVM02 and refresh the GUI to watch the sync-replication connection appear. (The “Paths” on PurityVM02
will likely not appear until we’ve created our stretched pod in the next Task.)

Task 2: Configure a stretched pod for synchronous replication

1. Switch to the PurityVM01 GUI, click the Pods tab at the top of the Storage pane.

2. Click the + in the Pods pane and name the Pod AppDataPod.

3. Click Create. The pod should appear as below:

4. See the following excerpt from the User Guide.

“Volumes can be moved into and out of pods. Pods can also contain protection groups with volume members. Pods
cannot contain protection groups with host or host group members.”

5. Because our Protection group was created to include the AppServer host, we cannot simply move our protection group
into our new pod. We will need to destroy the protection group as well as the AppData volume group and add the
individual volumes to our pod.


6. Still in the PurityVM01 GUI, Navigate to Protection > Protection Groups

7. Click the ellipsis next to ProtectionGroup in the Source Protection Groups pane.

8. Click Destroy and click Destroy again in the confirmation window.

9. You will now see ProtectionGroup in the Destroyed Protection Groups pane; in case this operation was done in error, we
have 24 hours to undo it. Click the trash icon to “eradicate” ProtectionGroup.

10. Click Eradicate in the confirmation window.

11. Navigate to Storage > Volumes and click the AppData link in the Volume Groups pane.

12. Click the ellipsis at the top of the Volumes pane and click “Move Out…”

13. Select all volumes then click “none” at the bottom-left under “Pod or Volume Group.”

14. Click AppDataPod. This allows us to move volumes out of the volume group and into our pod in one step, as seen in the
following screenshot.


15. Click Move.

16. Now click the ellipsis next to AppData at the top of the Volumes pane and click Destroy.

17. Click Destroy in the confirmation window. This destroys the AppData volume group, not the volumes themselves. These
have now been moved into the AppDataPod.

18. Click “Destroyed (1)” under the Volume Groups pane, click the trash icon to eradicate AppData, and click Eradicate
again in the confirmation window. With our pod now created and populated with volumes, we are ready to stretch the
pod, enabling sync replication of the data in these volumes.

19. Navigate to Protection > ActiveCluster and click the + in the ActiveCluster Pods pane.


20. Select AppDataPod as the Local Pod and PurityVM02 as the Remote Array.

21. Click Stretch.

22. Verify the Status column changes to “online” for both arrays, as seen in the screenshot below. This may take some time
while data is being synchronized. (You may see a status of “offline” then “resyncing” before finally transitioning to “online.”)

23. Switch to the PurityVM02 GUI tab.

24. Navigate to Analysis > Replication and verify that data was transmitted.

25. Navigate to Storage > Volumes and confirm that the AppData volumes are all visible on this array.
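The pod creation and stretch in this exercise map to CLI commands on PurityVM01 as well. This is a sketch: purevol move and purepod add --array are believed to be the relevant verbs, but confirm the exact syntax with purehelp before scripting it.

purepod create AppDataPod
purevol move AppData/AppData1 AppData/AppData2 AppData/AppData3 AppDataPod
purepod add --array PurityVM02 AppDataPod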

Task 3: Connect Windows Server2 to the AppData volumes

1. Return to the lab portal and click the Windows Server2 tile.

2. Login using the same method and credentials as Windows Server1.

3. Close any windows that appear automatically on boot.

4. Click on the Start menu, type “iscsi,” and open the iSCSI Initiator.

5. Click the Configuration tab. Note the IQN: ________________________________ (it should be iqn.1991-05.com.microsoft:host-2)

6. Open Chrome and go to PurityVM02 (10.1.1.21)

7. Log on with the username: pureuser and password: pureuser

8. Click on the “Storage” link in the navigation pane

9. Click on the “Hosts” tab in the Storage pane


10. Click the plus + in the Hosts pane

11. Type “AppServer” as Name and click “Create”

12. Click the link for “AppServer”

13. Click the ellipsis next to the “Connected Volumes” pane and click “Connect”

14. Select all volumes and click “Connect”

15. Click the ellipsis next to the “Host Ports” pane and click “Configure IQNs”

16. Paste the Windows Server2 IQN copied earlier, and click “Add.” (A scripted equivalent of this array-side configuration appears at the end of this task.)

17. Return to the iSCSI Initiator Properties window. Click the Targets tab

18. In the Target field, type 10.1.1.20 and click Quick Connect

a. If you get an error that no targets were available to connect, check the “Discovered targets” list. If the
target IQN shown in step 20 appears with a status of “Reconnecting,” highlight it and click “Disconnect.”

b. Once the status shows “Inactive,” click “Connect” again and proceed to step 21.

19. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.

20. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed as the following:

21. Click OK to close the iSCSI Initiator Properties window.

22. Right-click the Start menu and click Run.


23. Type diskmgmt.msc and click OK

24. The Disk Management window will appear

25. You should see Disk 1, Disk 2, and Disk 3 have been detected and are offline. Right-click each disk and click “Online.”
(Notice you do not format the disks because they are the same disks presented to Windows Server1.)
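
The array-side configuration from this task (steps 8 through 16) can also be scripted. The sketch below uses the
purestorage REST 1.x client against PurityVM02; the create_host/connect_host calls exist in that client, the
"AppDataPod::" prefix is how Purity names pod members, and connecting every pod volume mirrors the “select all
volumes” step in the GUI.

# Sketch of the PurityVM02 side of Task 3: create the AppServer host,
# register the Windows Server2 initiator IQN, and connect the pod volumes.
import purestorage

dst = purestorage.FlashArray("10.1.1.21", username="pureuser", password="pureuser")

# IQN noted from the Windows Server2 iSCSI Initiator Configuration tab.
server2_iqn = "iqn.1991-05.com.microsoft:host-2"

# Create the host with its iSCSI initiator in one call.
dst.create_host("AppServer", iqnlist=[server2_iqn])

# Connect every volume in the stretched pod (reported as "AppDataPod::<volume>").
for vol in dst.list_volumes():
    if vol["name"].startswith("AppDataPod::"):
        dst.connect_host("AppServer", vol["name"])

dst.invalidate_cookie()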

Task 4: Establish a second connection between both Windows Servers and PurityVMs

In this task, we will establish the cross-site connections between our two campus buildings, the Math building and the
Science building. We are establishing the highlighted connections below; the array-side portion of this configuration is
also sketched as a script at the end of this task. Note: Multipathing has already been configured and enabled for iSCSI
targets on the servers.

[Diagram: Math building (Windows Server1 and PurityVM01) and Science building (Windows Server2 and PurityVM02), with
each Windows server connected to the arrays in both buildings.]

1. On Windows Server1, open a browser session to PurityVM01 (10.1.1.11).

2. Click on the “Hosts” tab in the Storage pane.


3. Click on the “AppServer” link in the Hosts pane.

4. Click the ellipsis next to the “Host Ports” pane and click “Configure IQNs.”

5. Paste Windows Server2’s IQN in the Port IQNs field, and click “Add.”

6. On Windows Server2, open a browser session to PurityVM02 (10.1.1.21)

7. Click on the “Hosts” tab in the Storage pane

8. Click on the “AppServer” link in the Hosts pane

9. Click the ellipsis next to the “Host Ports” pane and click “Configure IQNs”

10. Paste Windows Server1’s IQN in the Port IQNs field, and click “Add”

11. Open the iSCSI Initiator Properties window. Click the Targets tab

12. In the Target field, type 10.1.1.10 and click Quick Connect

13. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.

14. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed as the following:

15. Click OK to close the iSCSI Initiator Properties window.

16. Return to Windows Server1

17. Open the iSCSI Initiator Properties window. Click the Targets tab

18. In the Target field, type 10.1.1.20 and click Quick Connect

19. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.

20. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed.
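
The array-side half of this task (registering each server's IQN with the AppServer host on the opposite array) can be
scripted as below with the purestorage REST 1.x client; the Windows-side Quick Connect steps still have to be done in
the iSCSI Initiator. The Windows Server1 IQN shown is a placeholder; substitute the value you recorded for that server
in an earlier exercise.

# Sketch of the array-side steps of Task 4: register each Windows server's
# IQN with the AppServer host object on the opposite array, giving both
# servers uniform access to both arrays.
import purestorage

server1_iqn = "iqn.1991-05.com.microsoft:host-1"  # placeholder: use the IQN recorded for Windows Server1
server2_iqn = "iqn.1991-05.com.microsoft:host-2"  # Windows Server2 IQN from Task 3

# PurityVM01 already knows Server1; add Server2's initiator.
vm01 = purestorage.FlashArray("10.1.1.11", username="pureuser", password="pureuser")
vm01.set_host("AppServer", addiqnlist=[server2_iqn])
vm01.invalidate_cookie()

# PurityVM02 already knows Server2; add Server1's initiator.
vm02 = purestorage.FlashArray("10.1.1.21", username="pureuser", password="pureuser")
vm02.set_host("AppServer", addiqnlist=[server1_iqn])
vm02.invalidate_cookie()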


Task 6: Simulate different outages and observe automatic failover

1. On Windows Server1, double-click the iometer shortcut on the Desktop.

2. Right-click the Host and select “Refresh Target Lists.”

3. Expand the plus next to Host and select Worker 1.

4. Check the box next to “E:” in the Targets window to the right.

5. Select Worker 2 then check the box next to “F:” in the Targets window.

6. Select Worker 3 then check the box next to “G:” in the Targets window.

7. Click the green flag at the top of the window and click “cancel” on the pop-up.

8. Once you see “Run 1 of 1” in the bottom-right corner, your workload is running.

9. Before we observe failover, we will want to select a preferred array. This is the array we would prefer to keep
serving data if the replication link between the arrays fails and/or the mediator is unavailable.

10. Open a browser session to PurityVM01 (10.1.1.11).

11. Navigate to Protection > ActiveCluster and click the AppDataPod link.

12. Scroll down to the Details section and click the ellipsis.

13. Click “Add arrays to failover preference”

14. Select PurityVM02 and click Add.

15. You will now see Failover Preference as PurityVM02 in the Details pane on both arrays.


16. Return to the Dashboard in the PurityVM01 GUI.

17. Open an additional browser window and go to PurityVM02 (10.1.1.21). Arrange the browser windows side by side for easy
observation, as shown below. You should see the workload running, with reads and mirrored writes reported in each
array GUI; writes are mirrored to the peer array while reads stay local. (A scripted way to watch these counters
appears at the end of this exercise.)

18. In the PurityVM01 GUI, navigate to Settings > Network.

19. Click the edit icon next to the iSCSI interface.

20. Toggle the “Enabled” switch and click Save. This disables the iSCSI data interface on PurityVM01, simulating a loss of
host connectivity to that array; with the cross-site paths established in Task 4, the workload should continue running
through PurityVM02.


21. This concludes the lab exercise. Please stop the iometer workload and close all windows. Feel free to try additional
failure scenarios or unscripted operations of any kind to gain familiarity and comfort with managing the FlashArray.
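
If you do try additional failure scenarios, a quick way to watch both arrays' front-end activity from a script (instead
of arranging two GUI dashboards) is sketched below using the purestorage REST 1.x client. The get(action="monitor") call
and the reads_per_sec/writes_per_sec key names are assumptions to verify against your SDK and Purity versions; the loop
simply prints read and write IOPS so you can see I/O continue on the surviving paths.

# Sketch: poll front-end performance on both arrays while simulating an
# outage, so you can watch I/O continue through the surviving array.
import time
import purestorage

arrays = {
    "PurityVM01": purestorage.FlashArray("10.1.1.11", username="pureuser", password="pureuser"),
    "PurityVM02": purestorage.FlashArray("10.1.1.21", username="pureuser", password="pureuser"),
}

try:
    for _ in range(30):  # roughly 2.5 minutes at a 5-second interval
        for name, fa in arrays.items():
            sample = fa.get(action="monitor")
            if isinstance(sample, list):  # some SDK versions wrap the record in a list
                sample = sample[0]
            print(f"{name}: {sample.get('reads_per_sec', 0)} reads/s, "
                  f"{sample.get('writes_per_sec', 0)} writes/s")
        print("---")
        time.sleep(5)
finally:
    for fa in arrays.values():
        fa.invalidate_cookie()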

©2020 Pure Storage, the Pure P Logo, and the marks on the Pure Trademark List at https://www.purestorage.com/legal/productenduserinfo.html are trademarks of
Pure Storage, Inc. Other names are trademarks of their respective owners. Use of Pure Storage Products and Programs are covered by End User Agreements, IP,
and other terms, available at: https://www.purestorage.com/legal/productenduserinfo.html and https://www.purestorage.com/patents

The Pure Storage products and programs described in this documentation are distributed under a license agreement restricting the use, copying, distribution, and
decompilation/reverse engineering of the products. No part of this documentation may be reproduced in any form by any means without prior written authorization
from Pure Storage, Inc. and its licensors, if any. Pure Storage may make improvements and/or changes in the Pure Storage products and/or the programs described
in this documentation at any time without notice.

THIS DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED
WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH
DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION
WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO
CHANGE WITHOUT NOTICE.

Pure Storage, Inc.


650 Castro Street, #400
Mountain View, CA 94041

purestorage.com 800.379.PURE

[insert publication number and date here]
