FlashArray™ Lab Guide
Basic Array Management
Version 2
March 23, 2021
Contents
Summary
Audience
Introduction
Summary
The purpose of this lab guide is to provide the user with some practice with the management and
configuration of a FlashArray™. Please use the guide as a starting point to investigate the options
available. Further documentation can be found in the User Guide and online help.
Pure Storage recommends that you always have access to the latest information and guides via our
support portal at: https://support.purestorage.com.
Audience
This guide is intended for storage administrators, IT professionals and other interested parties.
Introduction
Click the play button at the top-right corner to run the FlashArray VM1 and Windows Server1 VMs at the same time. If the VMs show as
busy, they are not yet ready. Once they are green and show “Running,” they are ready for use.
Access to the array will be accomplished by logging in to the Windows server. Click on the Windows Server1 VM tile to get
started. Click the Ctrl-Alt-Del button on the top selection bar. Credentials are as follows:
• Username: Administrator
• Password: pureuser
PurityVM01 (10.1.1.11)
• Username: pureuser
• Password: pureuser
Once you are logged in, the Network settings panel may pop up. If it does, click No and proceed.
• Describe how “Audit Trail” shows the CLI command corresponding to each GUI administrative task
Log-in Credentials
• User: pureuser
• Password: pureuser
6. You will automatically land in the “DASHBOARD” section of the GUI. This section is divided into a navigation pane, an
alerts pane and a Dashboard pane, as seen below.
7. Open the User Guide by hovering over the Help link in the Navigation Pane.
9. Volumes __________________________________________
14. Close the User Guide and return to the Navigation Pane.
17. In the Capacity section of the Dashboard Pane, note the Total Capacity available.
a. ______________________________
18. The Hardware Health section of the Alerts pane shows a rendering of the array.
19. Note the array model in the text above the rendering ______________________ 4
20. How many controllers are shown and why is this different than a standard FlashArray? ______________________________ 5
Footnote 1: If you have trouble finding these definitions, navigate to Using the GUI to Administer a FlashArray > Dashboard > Capacity.
Footnote 5: The virtualized array only has one controller. A physical FlashArray has two controllers connected via Non-Transparent Bridging protocol over PCIe. The VM also has significantly less capacity than a physical array.
2. Once in the “Health” section, note the Raw Capacity available in both T and TB
a. _____________________________________________________________ 6
b. What is the difference between the raw and total capacity? __________________ 7
d. Can you estimate how much Effective Capacity is available assuming a 5:1 data reduction ratio?
__________________________________________________ 8
3. Hover over one of the Flash modules. You can turn on the ID light on a flash module, or any other hot-swappable component,
to help someone in the datacenter identify it, for example for replacement or reseating.
4. Hover over the Fibre Channel ports and notice that the last octet of the assigned PWWN matches the physical location of
the port: controller ID and FC port number.
2. Click on the “System” tab within the Settings pane if not already selected
5. Schedule a maintenance window for the next two hours. This will add an alert tag to any automated alerts informing Pure
Storage Support that the alert was generated due to maintenance and can be associated with the preexisting
maintenance case.
Footnote 6: T = tebibytes (1 TiB = 1,024 GiB); TB = terabytes (1 TB = 1,000 GB). All other capacity references in the GUI use T.
Footnote 7: The raw capacity includes space reserved for RAID, garbage collection, and metadata.
Footnote 8: Total Capacity × 5. For example, 10 T of Total Capacity at a 5:1 reduction ratio yields roughly 50 T of Effective Capacity.
6. Look at the setting for “Array Time”. NTP setup is critical for proper array functionality.
9. Look at “DNS settings”. Defining a DNS server is required for “Phone Home” and “Remote Assist” functionality. A DNS
server must also be defined to have the array managed through “Pure1”, which is a cloud-based management and
monitoring portal. 10
10. Click on the “Users” tab within the Settings pane to see an “Audit Trail”. This audit log shows the CLI commands
corresponding to administrative tasks performed in the GUI.
a. What is the most recent entry in the audit log ________________________________________? 11
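CLI TIP: The audit trail can also be read from the command line. A minimal sketch, assuming an SSH session to the array as pureuser (output columns may vary by Purity version):
pureaudit list
Each entry records the change in CLI syntax, regardless of whether it was made through the GUI, CLI, or REST API.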
• Connect clustered Fibre Channel initiators to shared storage using a Host Group
Overview
In this exercise we will use the following sample scenario: Create an Oracle Real Application Cluster (RAC) with three nodes.
Each node will have shared access to the database volume and the redo logs. Each individual server will also have its own
volume for scratch space. Here is the final state for those who are allergic to step-by-step instructions. For the rest, detailed
steps follow.
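For CLI-minded readers, the commands below are a minimal sketch of one way to reach this same final state from an SSH session as pureuser (the 100G size used for OraData and OraRedo is a placeholder; WWN assignment is shown in a later tip). The detailed GUI steps that follow build the same configuration.
purehost create OraSrv01
purehost create OraSrv02
purehost create OraSrv03
purehgroup create --hostlist OraSrv01,OraSrv02,OraSrv03 OraRAC
purevol create OraData OraRedo --size 100G
purevol create OraVol01 OraVol02 OraVol03 --size 50G
purevol connect OraData OraRedo --hgroup OraRAC
purevol connect OraVol01 --host OraSrv01
purevol connect OraVol02 --host OraSrv02
purevol connect OraVol03 --host OraSrv03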
Footnote 9: Defining three NTP servers rather than two is best practice. These three are default values and should be changed to meet customer requirements.
Footnote 10: A maximum of three DNS servers can be defined.
Footnote 11: The most recent entry should be the command to set the maintenance window.
6. Click “Create Multiple.” This option allows an administrator to create many similarly named hosts in one step.
1. Click the link for OraSrv01. This will take you to the host configuration window. You’ll see any volumes connected to this
host, the Protection Groups this host belongs to, addresses for the host, and additional details about the host. These
fields are all empty right now. Let’s add the WWNs for this host.
3. Click “Configure WWNs.” The FlashArray supports connectivity from Fibre Channel, iSCSI, or NVMe over Fabrics
(NVMe/oF) initiators.
4. We don’t have any actual FC initiators in this environment but have created 10 virtual initiator ports. The FlashArray will
query the FC NameServer to discover any existing, properly zoned initiator WWNs that have not already been assigned to
a host and display them for selection. Pretending each of our Oracle servers has two FC ports, select the WWNs that end
with 1 and 2 then click Add, as seen in the screenshot.
5. You will be taken back to the Host Configuration screen where you will now see the assigned WWNs listed.
6. Click the Hosts tab to return to the list of hosts and repeat the steps to assign WWNs to OraSrv02 (WWNs ending in 03 &
04) and OraSrv03 (WWNs ending in 05 & 06).
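CLI TIP: The same assignment can be sketched from the CLI with purehost setattr and its --wwnlist option. The WWN values below are placeholders; substitute the ones shown for your host:
purehost setattr OraSrv02 --wwnlist <WWN-ending-in-03>,<WWN-ending-in-04>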
7. Your Hosts tab should now look like the screenshot below. What is the protocol being used to communicate with these
hosts? How does the array know this? 12
1. From the Hosts tab, click the plus in the “Host Groups” pane.
Footnote 12: The Interface column shows “FC” to indicate that the host is connected to the FlashArray via Fibre Channel. This is because we assigned FC WWNs to the hosts.
3. Click the link for the OraRAC host group. Here you will see any hosts that belong to this host group, volumes that are
shared across hosts in the group, and protection groups to which the host group belongs.
5. This brings you to a selection window where you can add hosts that are not currently in a host group. Select the servers
we previously defined (OraSrv01, OraSrv02, and OraSrv03) and click “Add.”
TIP: You can select all available hosts by checking the box at the top of the column.
CHECKPOINT – Ensure that your Hosts window matches the screenshot below. Validate the Host Group and Interface
columns. If your configuration matches, congratulations! You may now proceed to Task 4.
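CLI TIP: The same check can be made from the command line. A minimal sketch, assuming an SSH session as pureuser:
purehgroup list
purehost list
The host group listing should show OraRAC with the three OraSrv hosts, and the host listing should show the WWNs assigned to each host.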
6. Now click the plus sign to create another volume but this time
click “Create Multiple.”
7. Just like the option to create multiple hosts, this option allows us
to create many volumes with a similar naming convention and the
same size. Name the volumes OraVol# with a size of 50G. Start at
1 and create 3 with two digits in the name, as seen in the
screenshot here.
8. Click “Create.”
1. Click the Hosts tab. Connecting hosts and volumes can be done from the Volumes tab or the Hosts tab. We will use the
hosts tab.
4. A familiar selection window will appear. Select the OraData and OraRedo volumes.
6. Click “Connect.”
7. This brings you back to the OraRAC configuration window. What are the LUN IDs assigned to the two volumes?
8. OraData: _______
9. OraRedo:_________
We have added the shared access volumes for all three nodes in the RAC cluster. All three hosts will be able to read
and write to these volumes. We now need our private volumes for each server.
3. A familiar window again appears. What is listed in the LUN field? _________________
7. Return to the Hosts tab and repeat steps 7-10 for OraSrv02 and OraSrv03 for their corresponding volumes.
CHECKPOINT – Ensure that your Hosts window matches the screenshot below. Validate the “# Volumes” column for
both Hosts and Host Groups panes. If your configuration matches, congratulations! You have completed this exercise
and may move on to Exercise 3.
Overview
In this exercise we will use the following sample scenario: A single Windows server is running an application using iSCSI-
attached storage presented from multiple volumes. We want to show performance for all volumes attached to the server. We
will be using the Windows VM as our application server, IOMeter as the application, volumes presented from the FlashArray VM
as the storage, and a volume group to aggregate the performance statistics. Here is the final state for those who are allergic to
step-by-step instructions. For the rest, detailed steps follow. Since we’ve already created a host and volume connection in the
previous step using the GUI, this guide will walk you through using the CLI to perform these steps this time.
[Diagram: AppServer (IQN iqn.1991-05.com.microsoft:host-1) connected to volumes AppData1, AppData2, and AppData3]
2. In the Hostname field, type the IP Address 10.1.1.11 and click “Open.”
Footnote 13: By default, shared LUNs (LUNs presented to more than one host) are numbered starting at 254, decrementing the LUN ID by one for each shared LUN. Private LUNs (LUNs presented to only one host) are numbered starting at 1, incrementing by one for each LUN presented to the host.
5. Type purehelp to list the available CLI commands.
purehelp
6. Type pureman purevol to access the manual page for the purevol command
pureman purevol
7. Type purevol create -h to get syntax help for this command. Manual pages and command help are available for any
command. You can use the -h option to list help for any subcommand and Tab completion also works for most
commands.
purevol create -h
8. Create three volumes, each 100G, called AppData1, AppData2, and AppData3 using the following command:
purevol create AppData1 AppData2 AppData3 --size 100g
9. List your volumes and verify the output below:
purevol list
1. In order to create the host, we will need its IQN. Find the server’s IQN by clicking on the Start Menu. Type “iscsi” and open
the iSCSI Initiator.
2. Click the Configuration tab. The window should look like the screenshot below.
4. Leave this window open in the background, as you will return to it in Task 3. Now return to your Putty session and add the
host using the following command (if different, change the IQN to match what you recorded above):
purehost create AppServer --iqnlist iqn.1991-05.com.microsoft:host-1
5. Connect the host to the volumes you created earlier:
purevol connect AppData1 AppData2 AppData3 --host AppServer
CHECKPOINT – Ensure that your host and volumes have been properly connected. LUN IDs may differ slightly; that’s fine.
Otherwise, if your configuration matches, congratulations! You may now close your Putty session and proceed to Task 3.
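CLI TIP: If your Putty session is still open, you can also confirm the connections from the CLI. A minimal sketch (output columns may vary by Purity version):
purevol list --connect
AppData1, AppData2, and AppData3 should each show a connection to AppServer with a LUN ID.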
Footnote 14: iqn.1991-05.com.microsoft:host-1
1. Return to the iSCSI Initiator Properties window. Click the Targets tab.
3. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.
4. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed as in the screenshot:
This is a basic iSCSI connection and is all we need for lab purposes. There is additional information for adding sessions, enabling multipathing, and other best practices here:
https://support.purestorage.com/Solutions/Microsoft_Platform_Guide/aaa_Quick_Setup_Steps/Step_05.1_--_Setup_iSCSI_Connectivity
8. The Disk Management window will appear. A window to initialize a new disk may appear. If so, click Cancel.
9. At the top of the Disk Management window, click Action > Rescan Disks
10. Once the rescan is completed, ensure that Disk1, Disk2, and Disk3 all have a state of “Online” or “Not Initialized.” If any are
offline, right-click the disk label and click Online.
11. Once all disks are online, right-click Disk 1 at the far left, then click “Initialize Disk.”
12. In the Initialize Disk window, all three new volumes should be discovered. Leave the default selections and click OK.
13. Once all disks are initialized, right-click the partition next to Disk 1 and click “New Simple Volume” to mount the volume
presented from the FlashArray.
14. Leave all values as default and continue through the prompts. Repeat this process for the two remaining disks.
CHECKPOINT – Ensure that your Disk Management window appears like the screenshot below before closing the
window and moving on to Lab Exercise 4.
• Analyze performance
Overview
In this exercise we are going to start running a workload using IOMeter on the Windows server and analyze the performance
statistics using the GUI. Note: Remember that this is a virtual array with cloud-based storage so the performance you see in
this lab exercise will be greatly inferior to the performance you would achieve on a physical FlashArray.
4. Check the box next to “E:” in the Targets window to the right.
5. Select Worker 2 then check the box next to “F:” in the Targets window.
6. Select Worker 3 then check the box next to “G:” in the Targets window.
7. Click the green flag at the top of the window and click “cancel” on the pop-up.
8. Once you see “Run 1 of 1” in the bottom right corner, your workload is running.
1. If not already logged in, open a browser window and navigate to 10.1.1.11.
3. You should see performance statistics in the Dashboard view now. This shows the workload currently running from the
Windows server.
4. What are the available intervals for viewing performance statistics in the Dashboard?
_________________________________________________________________ 15
6. What are the maximum and minimum intervals for viewing performance statistics in the Analysis window?
_________________________________________________________________ 16
8. Place the cursor over a point on the performance graph. Note the following:
13. What are the three IO Types next to the interval drop-down?
__________________________________________________________________ 17
15. Place the cursor over a point on the performance graph. Note the additional statistics now available when only the IO type
of “Write” is selected:
Footnote 15: 5 Minutes and 24 Hours.
Footnote 16: A minimum of 5 minutes to a maximum of 1 year of historical performance data.
Footnote 17: Read, Write, Mirrored Write.
19. Hover over the Help link in the navigation pane and click the FlashArray User Guide.
20. Click “Using the GUI to Administer a FlashArray” then “Analysis” then “Performance.”
22. Return to the Array GUI tab and click Volumes at the top of the Performance pane.
23. Here you’ll see the same performance statistics available for each individual volume.
24. In the case of our Windows server there are three volumes of interest, our AppData volumes. When we select these three
volumes, we see three lines (Read, Write, Mirrored Write) for each individual volume on the performance graph. We want
to see the aggregated performance for all the volumes connected to our AppServer. This is the main purpose of the
“volume group” container.
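CLI TIP: A volume group can also be created from the CLI. A minimal sketch, assuming an SSH session as pureuser; treat the second command as an assumption, since the syntax for moving existing volumes into a volume group varies by Purity version (check pureman purevol on your array):
purevgroup create AppData
purevol move AppData1 AppData2 AppData3 AppData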
28. Click the link for AppData in the Volume Groups pane.
29. Click the ellipsis in the Volumes pane and click “Move In…” as shown in the screenshot.
31. Click “Performance” in the navigation pane again and click Volumes.
32. Change the “Volumes” drop-down to “Volume Groups” and select AppData.
Footnote 18: SAN time measures latency external to the array, including host latency.
33. We now see a single set of lines representing the aggregate performance for all of the volumes in the volume group. You
may need to wait a minute or two for the graph to populate because we will only see statistics from the point the volume
group was created.
34. Return to the IOMeter application window by clicking its icon in the taskbar.
• Identify how snapshots are used on the array to create critical restore points
Overview
In this exercise, we will use the previous sample scenario and create a local snapshot for the AppData1 volume as a critical
restore point for the application server. We will simulate a scenario in which someone accidentally deleted important files from
the AppData1 volume and now needs to restore from a snapshot.
[Diagram: AppServer (IQN iqn.1991-05.com.microsoft:host-1) connected to volumes AppData1, AppData2, and AppData3, with a snapshot of AppData1]
4. Type echo "This is a really important file that I hope never gets deleted!" > g:\important.txt and hit Enter
5. Type for /L %i in (1,1,24) do type g:\important.txt >> g:\important.txt and hit Enter. This operation will take some time to
complete, as it is populating the new g:\important.txt file with roughly 2GB of repeating data. As a result, the AppData1
volume will now have a high deduplication rate and your overall Shared space on the array will increase.
6. If not already logged in, open a browser session to https://10.1.1.11 to access the GUI
8. The Capacity pane in the Dashboard will display some “Unique” and “Shared” space consumed, as in the screenshot below. 19
Do not proceed until you see some volume space reflected in the AppData1 volume.
Footnote 19: It may take about 10 minutes for the capacity change to be reflected in the GUI.
15. You should see a snapshot created as seen below with zero space consumed. Why does the snapshot size show 0.00? 20
16. You can also create snapshots using the CLI or REST API. The audit log captures all changes to the system in CLI syntax.
See the snapshot example below:
To see the Audit log go to Settings > Access and view the Audit Trail. The CLI omits the “Name” column so the
equivalent CLI command would be purevol snap AppData/AppData1.
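CLI TIP: A minimal sketch of the same snapshot operation from the CLI; the suffix value is just an example name:
purevol snap AppData/AppData1 --suffix restorepoint
purevol list --snap AppData/AppData1
The second command lists the snapshots of the volume so you can confirm the new snapshot’s name.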
1. Now delete the g:\important.txt file. Shed a tear or two, then empty the Recycle Bin.
2. Now go to linkedin.com and update your profile … No, wait! Go to Storage > Volumes and click the link for
AppData/AppData1.
3. See the snapshot you took earlier in the Volume Snapshots pane. Breathe a sigh of relief. Happy dance is optional but if,
like Super Mario at the top of a magic beanstalk, you feel so moved, see screenshot below for reference.
4. Now let’s recover that data! Click the ellipsis next to the latest snapshot in the “Volume Snapshots” pane and click
Restore.
Footnote 20: This is because the volume’s data has not changed since the snapshot was taken. The snapshot only protects the existing data; it does not create an additional copy of it.
5. Notice the warning. If we had active I/O writing to this volume, a “Restore” would overwrite any changes more recent than
the snapshot. Click Cancel.
6. Instead let’s create a copy (even though we haven’t made any changes since the snapshot). Repeat step 4 but click Copy.
7. Name the copy AppData1_clone and click Copy. (Leave the volume group as AppData.)
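CLI TIP: The equivalent CLI operations, sketched assuming a snapshot named AppData/AppData1.restorepoint (substitute the actual snapshot name shown in the Volume Snapshots pane). To copy the snapshot to a new volume:
purevol copy AppData/AppData1.restorepoint AppData/AppData1_clone
The Restore action corresponds to copying the snapshot back over the original volume, which overwrites any changes made since the snapshot:
purevol copy --overwrite AppData/AppData1.restorepoint AppData/AppData1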
8. Go back to the Storage > Volumes window and click the link for your clone.
9. Click the ellipsis in the Connected Hosts pane and connect AppServer.
12. The Disk Management window will appear. You should see Disk 4 (Offline). Right-click the disk label and click Online
15. You can now disconnect, destroy, then eradicate the AppData1_clone volume:
19. Attempting to destroy a connected volume will generate the message below.
21. Click the x next to AppServer in the Connected Hosts pane and click Disconnect in the confirmation that pops up.
22. Now repeat steps 18 & 19. Notice at the bottom of the Volumes pane, the number next to Destroyed has changed to (1).
Destroyed volumes are kept in a “pending eradication” state for 24 hours. Once the timer expires, the volume and all its
data (including snapshots) are expunged and no longer available for recovery.
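CLI TIP: The same cleanup can be sketched from the CLI, assuming an SSH session as pureuser:
purevol disconnect AppData/AppData1_clone --host AppServer
purevol destroy AppData/AppData1_clone
purevol eradicate AppData/AppData1_clone
As in the GUI, eradication only succeeds after the volume has been disconnected and destroyed.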
24. Click the trash icon to “Eradicate” the volume and click Eradicate in the resultant confirmation window.
Overview
You will need to power on FlashArray VM2 host from your lab portal. Click the Play icon and wait for the array to boot.
Access to the array will still be accomplished by logging in to Windows Server1. Credentials are as follows:
In this exercise, we would like to provide remote protection for all volumes connected to AppServer. We have added a second
array (normally at a DR or test site) as a replication target.
1. If not already logged in, open a browser session to https://10.1.1.11 to access the GUI
8. Click the ellipsis next to the “Members” pane. What are the options? _____________________________________________________ 21
Footnote 21: Hosts, Host Groups, Volumes – Volumes will include only the specified volumes in the snapshot schedule. Hosts will include current and future volumes connected to the specified host. Host Groups will include current and future volumes connected to all current and future Hosts in the specified Host Group.
2. Enable Snapshot Schedule, leaving all values default, as seen below and click “Save”
3. Please wait for 10-20 seconds until PurityVM01 creates its first snapshot of ProtectionGroup. This happens automatically
as a result of enabling the Snapshot Schedule. New snapshots can be seen in the Protection Group Snapshots pane.
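CLI TIP: Protection groups can also be created and snapped from the CLI. A minimal sketch (the suffix is just an example name; schedule settings are easiest to manage in the GUI as shown here):
purepgroup create ProtectionGroup --hostlist AppServer
purepgroup snap ProtectionGroup --suffix manual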
4. Click the link for ProtectionGroup.1. You will see the snapshots of all volumes connected to AppServer as follows:
Task 3: Establish a connection to a second FlashArray for the purpose of Asynchronous Replication.
2. Click Advanced Options and continue to the site as done previously for PurityVM01.
5. Click on the “Array” tab in the Storage pane if not already there
6. Click the ellipsis in the Array Connections pane and click Get Connection Key.
7. The connection key is a globally unique identifier for this specific array, ensuring that the target array matches the
intended target. Click Copy, then OK.
10. Click on the “Array” tab in the Storage pane if not already there.
14. If the network between the two arrays is connected, PurityVM02 will now appear with a “connected” status in the Array
connections pane.
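CLI TIP: The array connection can also be verified from the CLI on either array. A minimal sketch:
purearray list --connect
The output should list the peer array along with the connection type and status.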
1. On PurityVM01 (10.1.1.11)
7. Enable Replication Schedule, leave all values default, and click “Save”
10. You will see the source array that has established the connection to this target array
11. Navigate to Protection > Protection Groups. You will see the protection group “PurityVM01:ProtectionGroup” in the Target
Protection Groups pane. You will also see that a snapshot has been replicated in the Target Protection Group Snapshots
pane. Notice the prefix of the snapshot indicates the source array.
12. Click on the “PurityVM01:ProtectionGroup” link in the Target Protection Groups pane.
13. You will find the settings of this protection group as they are on the originating array. However, here on the “Target” array
they cannot be changed.
14. Click the Transfer tab in the Protection Group Snapshots pane and note the amount of data transferred. Progress should
be at 100% (if not, wait until it is).
16. Notice the snapshot name shows the source array as well as the protection group name, volume group name, and volume
name for each of the three volumes connected to AppServer on PurityVM01.
17. Click on the icon to copy the individual volume snapshot into a new volume in the “Volume Snapshots” popup window.
Snapshots could be copied to new volumes on this “DR” array and mounted to test or validation servers, or used for
recovery.
18. Cancel the “Copy Snapshot” popup window and close all application windows.
Exercise 7: Near Synchronous Replication with ActiveDR - Estimated Completion Time – 45 minutes
Learning Objectives
• Create a Pod Replica Link between two arrays to allow near-sync replication
• Verify Functionality
Overview
In this exercise, we will use our async replication connection to enable near-sync replication of a new volume presented to
Windows Server2.
You will need to power on Windows Server2 from your lab portal. Click the Play icon and wait for the server to boot.
Access to the array will still be accomplished by logging in to Windows Server1. Credentials are as follows:
A pod is a management container containing a group of volumes that can be replicated between two arrays with ActiveDR (or
ActiveCluster). A pod serves as a consistency group that is created for replication purposes. When a pod is replicated between
two arrays, all volumes within the pod will be write-order consistent on the target array. For more information on pods, see the
User Guide.
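CLI TIP: Pods and pod volumes can also be created from the CLI; a volume inside a pod is addressed as pod::volume. A minimal sketch (the volume name and size are placeholders):
purepod create DR-Pod
purevol create DR-Pod::DR-Data --size 50G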
3. Open Chrome and login to FlashArray VM2 using the information in the overview above.
7. Click on DR-Pod in the “Pods” pane. This will show the details for the Pod. There are currently no volumes in the pod. We
can either create a new volume or move an existing volume into the pod to allow it to be replicated.
8. Click on the + icon to create a new volume using the settings in the screenshot below.
9. Click Create to create the new volume and add it to the DR-Pod in a single step.
10. We will now connect our new volume to Windows Server2. Click the Hosts tab at the top of the Storage window.
13. Click the ellipsis next to Connected Volumes and click Connect.
15. Click the ellipsis next to Host Ports and click Configure IQNs.
16. Click on the Start Menu, type “iscsi” and open the iSCSI Initiator.
19. Return to your Chrome window and add the IQN in the “Port IQNs” field.
With the host and volume configured properly on the FlashArray, we can now discover the volume and start a workload using
IOMeter on Windows Server2.
1. Return to the iSCSI Initiator Properties window. Click the Targets tab
3. If you get an error that there were no available targets, check the “Discovered targets” window. If the IQN listed in the
screenshot in step 4 is listed with a status of “Reconnecting,” highlight it and click “Disconnect.”
4. Once the status shows “Inactive” click “Connect” again and proceed to step 5.
5. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.
Footnote 22: iqn.1991-05.com.microsoft:host-2
6. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed as the following:
11. You should see Disk 1 has been detected and is offline. Right-click Disk 1 and click “Online.”
12. Right-click Disk 1 again and click Initialize. Click OK in the popup window.
13. Right-click the partition next to Disk 1 and click New Simple Volume. Click Next to accept all default settings and format
the disk.
18. Check the box next to “E:” in the Targets window to the right.
19. Click the green flag at the top of the window and click “cancel” on the pop-up.
20. Once you see “Run 1 of 1” in the bottom right corner, your workload is running.
23. Click on the + icon under the Pod Replica Links section of the Pod configuration:
24. Select DR-Pod as the local pod name and PurityVM01 as the Remote Array.
25. As we have not yet created a Pod on the remote array, click on Create Remote Pod, enter a name for the Pod on
the remote array (e.g., Remote-DR), and click OK:
26. Click Create to create the Replica Link between the two arrays.
27. The "status" field for the Pod Replica Link should initially show as 'Baselining' as the initial data sync is completed, after
which it should change to 'replicating'. The 'Lag' field will show the delay between data being written to the primary array
and being replicated to the remote array.
Task 3: Connect DR array (PurityVM01) to DR host (Windows Server1) to prepare for failover test
1. Login to Windows Server1, open a browser session to PurityVM01 (10.1.1.11)
3. Go to Storage > Pods and you will see the target Pod has automatically been created on this array.
4. In addition to being able to monitor replication under the Pods tab, there is also a dedicated ActiveDR section under
Protection. Click the Protection menu item at the left of the window, and then click on the ActiveDR heading to view all
ActiveDR configurations:
5. This section shows details such as the replication status ("replicating") and lag. Also shown is that the pod local to this
array, Remote-DR, is currently "demoted". The data on volumes in a demoted pod is not available to be accessed by a
host, although the volumes themselves can be connected to a host in order to minimize the effort required in the event of
a failover. This is what we will do.
6. Still on the target array, look at the Volumes (Storage > Volumes) on this array and you'll see the target volume in the pod.
7. When you click the link for this volume, you will see that this volume currently has no Host Connections configured:
13. Disk Management will show a total of 5 disks plus the CD-ROM (you may need to scroll down to see them all). Disk 0 is
the system’s 30GB boot disk, Disks 1-3 are the AppData volumes, and Disk 4 is the newly connected DR volume from the
source array.
15. Because this volume is currently in a demoted state on PurityVM01, the disk appears as Read Only to this DR server.
16. Right-click the New Volume (H:) partition and click “Open.” You should see the iobw.tst file from the IOMeter test on the
source array (PurityVM02) but you will not be able to create or change any files from this target side.
3. You will see the iobw.tst file. Right-click any empty area in the window and click New > Text Document, as in the
screenshot below.
4. Name the file SourceFile then open the file by double-clicking on it. Type “This is from the source.” and save the file.
6. Browse to the H:\ drive. Because this drive has been mounted as Read-only on Windows Server1, you will not see the new
file. This volume is for validation of data at the time it was mounted and will not reflect changes until it is taken offline and
rediscovered.
9. Click “Action” at the top of the window and click Rescan Disks.
11. Now when you browse, you will see the new SourceFile.txt file.
12. Open SourceFile.txt and add a line “Updated from the target.” and click the x to close the file.
14. As expected, we get an error that this disk is write-protected and we can’t update it. Cancel the save operation and close
the file without saving your change.
15. Re-open the Chrome web browser and connect to the GUI for PurityVM01 (10.1.1.11).
17. Click on Protection on the left menu, and then ActiveDR along the top.
18. The current status of the Pod Replica Link will be shown, which should show the volume as replicating from the Remote
Pod to the Local (Demoted) Pod.
19. In order to perform a DR test we need to 'promote' the target copy of the Pod, without changing the status of the source
Pod. This will cause the array to make the data on the target LUN accessible (read/write) with the current state of the
replicated data. Replication to the target array will continue to occur, however this newly replicated data will not be visible
to the target LUN until after it is again demoted.
20. Select the 3 vertical dots icon beside the Pod you wish to promote, and then select Promote Local Pod...
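CLI TIP: The same operation is available from the CLI with purepod promote; purepod demote later returns the pod to its replication-target role once the DR test is finished. A minimal sketch, run on PurityVM01:
purepod promote Remote-DR
purepod demote Remote-DR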
22. After a few seconds the status of the pod should change to Promoted. The Replication status, direction and lag should
remain unchanged as replication is still occurring in the background.
23. Return to the Windows Desktop and open Disk Management again.
26. Now bring Disk 4 back online. Once online, right-click the (H:) partition and click “Open.”
27. Open SourceFile.txt and add the line “Updated from the target.”
28. Save and close the file. This time the save is successful. However, this change will not be reflected in the volume once the
target is again demoted.
29. Continue to test additional failure scenarios listed in the optional task or skip to task 5.
1. At this point there are several additional failure scenarios you can test if you’d like.
3. Create a new file called H:\TargetFile.txt with the content “This is from the target.”
5. Verify that the file exists on Windows Server1 but not Windows Server2.
6. Demote PurityVM02 and verify the new file gets replicated to Windows Server2. Remember, you may need to “Offline” then
“Online” the disk on Windows Server2 to see the changes.
7. You might also consider disabling the replication ports on either side and see what happens to the “Lag” field in the pod.
8. You could also see what happens to the “Mediator” field if you disable the management port.
9. Once you have finished exploring the nearly endless possibilities, proceed to Task 5.
Task 5: Cleanup
1. Once the DR test has been completed we will need to return the system to its normal state.
5. Click the ellipsis next to DR-Pod under Pod Replica Links and click Delete.
8. Under Array Connections, click the x at the far right next to the connection to PurityVM01 to disconnect the replication
connection between the two arrays.
10. Depending on the optional failure scenarios you tried, you will most likely get an error as seen in the following screenshot.
Overview
In this exercise, we will create a connection between our arrays for synchronous replication. We will pretend that we have two
separate teams in the same campus that need to work off the same dataset. We will use a separate array for each team but
mirror the data between them using ActiveCluster.
Access to the arrays will initially still be accomplished by logging in to Windows Server1. Credentials are as follows:
2. Open two tabs in Chrome, one to PurityVM01 (10.1.1.11) and one to PurityVM02 (10.1.1.21).
4. In the PurityVM02 browser tab click the ellipsis in the Array Connections pane and click “Get Connection Key.”
5. Copy the key, close the window and return to the PurityVM01 browser tab.
8. Change Type to Sync Replication and paste the connection key from step 5.
9. Click “Connect.”
NOTE: If you get an error that the array can’t connect to 10.1.1.21, cancel and repeat steps 4 through 9.
10. Once the Array Connections pane shows the sync-replication connection, click the Paths tab and verify that all replication
endpoints show “connected.”
11. Return to PurityVM02 and refresh the GUI to watch the sync-replication connection appear. (The “Paths” on PurityVM02
will likely not appear until we’ve created our stretched pod in the next Task.)
1. Switch to the PurityVM01 GUI, click the Pods tab at the top of the Storage pane.
2. Click the + in the Pods pane and name the Pod AppDataPod.
“Volumes can be moved into and out of pods. Pods can also contain protection groups with volume members. Pods
cannot contain protection groups with host or host group members.”
5. Because our Protection group was created to include the AppServer host, we cannot simply move our protection group
into our new pod. We will need to destroy the protection group as well as the AppData volume group and add the
individual volumes to our pod.
7. Click the ellipsis next to ProtectionGroup in the Source Protection Groups pane.
9. You will now see ProtectionGroup in the Destroyed Protection Groups pane; if this operation was done in error, we
have 24 hours to undo it. Click the trash icon to “eradicate” ProtectionGroup.
11. Navigate to Storage > Volumes and click the AppData link in the Volume Groups pane.
12. Click the ellipsis at the top of the Volumes pane and click “Move Out…”
13. Select all volumes then click “none” at the bottom-left under “Pod or Volume Group.”
14. Click AppDataPod. This allows us to move volumes out of the volume group and into our pod in one step, as seen in the
following screenshot.
16. Now click the ellipsis next to AppData at the top of the Volumes pane and click Destroy.
17. Click Destroy in the confirmation window. This destroys the AppData volume group, not the volumes themselves. These
have now been moved into the AppDataPod.
18. Click “Destroyed (1)” under the Volume Groups pane, click the trash icon to eradicate AppData, then click Eradicate
again in the confirmation window. With our pod now created and populated with volumes, we are ready to stretch our
pod, enabling sync replication of the data in these volumes.
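CLI TIP: Stretching a pod can also be done from the CLI by adding the second array to the pod. A minimal sketch, run on PurityVM01:
purepod add AppDataPod --array PurityVM02
purepod list
The pod should show both arrays, with the status moving to online once resynchronization completes.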
19. Navigate to Protection > ActiveCluster and click the + in the ActiveCluster Pods pane.
20. Select AppDataPod as the Local Pod and PurityVM02 as the Remote Array.
22. Verify the Status column changes to “online” for both arrays, as seen in the screenshot below. This may take some time
while data is being synchronized. (You may see a status of “offline” then “resyncing” before finally transitioning to “online.”)
24. Navigate to Analysis > Replication and verify that data was transmitted.
25. Navigate to Storage > Volumes and confirm that the AppData volumes are all visible on this array.
1. Return to the lab portal and click the Windows Server2 tile.
4. Click on the Start Menu, Type “iscsi” and open the iSCSI Initiator
Footnote 23: iqn.1991-05.com.microsoft:host-2
13. Click the ellipsis next to the “Connected Volumes” pane and click “Connect”
15. Click the ellipsis next to the “Host Ports” pane and click “Configure IQNs”
16. Paste the Windows Server2 IQN copied earlier, and click “Add”
17. Return to the iSCSI Initiator Properties window. Click the Targets tab
18. In the Target field, type 10.1.1.20 and click Quick Connect
a. If you get an error that there were no available targets, check the “Discovered targets” window. If the IQN
listed in the screenshot in step 20 is listed with a status of “Reconnecting,” highlight it and click
“Disconnect.”
b. Once the status shows “Inactive” click “Connect” again and proceed to step 21.
19. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.
20. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed as the following:
25. You should see Disk 1, Disk 2, and Disk 3 have been detected and are offline. Right-click each disk and click “Online.”
(Notice you do not format the disks because they are the same disks presented to Windows Server1.)
Task 4: Establish a second connection between both Windows Servers and PurityVMs
In this task, we will establish the cross-site connections between our two campus buildings, the Math building and the Science
building. We are establishing the highlighted connections below. Note: Multipathing has already been configured and enabled
for iSCSI targets on the servers.
[Diagram: Windows Server1 and Windows Server2 each connected to both PurityVM01 and PurityVM02]
4. Click the ellipsis next to “Host Ports” pane and click “Configure IQNs.”
5. Paste Windows Server2’s IQN in Port IQNs field, and click “Add.”
9. Click the ellipsis next to “Host Ports” pane and click “Configure IQNs”
10. Paste Windows Server1’s IQN in Port IQNs field, and click “Add”
11. Open the iSCSI Initiator Properties window. Click the Targets tab
12. In the Target field, type 10.1.1.10 and click Quick Connect
13. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.
14. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed as the following:
17. Open the iSCSI Initiator Properties window. Click the Targets tab
18. In the Target field, type 10.1.1.20 and click Quick Connect
19. You will see a window confirming that the target was discovered and is now connected. Click Done in this window.
20. You should now be back in the iSCSI Initiator Properties window with a Discovered target listed.
4. Check the box next to “E:” in the Targets window to the right.
5. Select Worker 2 then check the box next to “F:” in the Targets window.
6. Select Worker 3 then check the box next to “G:” in the Targets window.
7. Click the green flag at the top of the window and click “cancel” on the pop-up.
8. Once you see “Run 1 of 1” in the bottom right corner, your workload is running
9. Before we observe failover, we will want to select a preferred array. This is the array we would prefer to continue
serving I/O if the replication link and/or mediator between the arrays becomes unavailable.
11. Navigate to Protection > ActiveCluster and click the AppDataPod link.
12. Scroll down to the Details section and click the ellipsis.
15. You will now see Failover Preference as PurityVM02 in the Details pane on both arrays.
17. Open an additional browser window and go to PurityVM02 (10.1.1.21). Arrange the browser windows side by side for easy
observation, as shown below. You should see the workload is running with reads and mirrored writes being reported in
each array GUI. Writes are mirrored to the peer array while reads stay local.
21. If you are satisfied, this concludes the lab exercise. Please stop the IOMeter workload and close all windows. If you would
like, feel free to try additional failure scenarios or unscripted operations of any kind to help you gain familiarity and
comfort with management of the FlashArray.
©2020 Pure Storage, the Pure P Logo, and the marks on the Pure Trademark List at https://www.purestorage.com/legal/productenduserinfo.html are trademarks of
Pure Storage, Inc. Other names are trademarks of their respective owners. Use of Pure Storage Products and Programs are covered by End User Agreements, IP,
and other terms, available at: https://www.purestorage.com/legal/productenduserinfo.html and https://www.purestorage.com/patents
The Pure Storage products and programs described in this documentation are distributed under a license agreement restricting the use, copying, distribution, and
decompilation/reverse engineering of the products. No part of this documentation may be reproduced in any form by any means without prior written authorization
from Pure Storage, Inc. and its licensors, if any. Pure Storage may make improvements and/or changes in the Pure Storage products and/or the programs described
in this documentation at any time without notice.
THIS DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED
WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH
DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION
WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO
CHANGE WITHOUT NOTICE.
purestorage.com 800.379.PURE