HPE Alletra 9000 - Parts Support Guide
Published: 2021
Edition: Internal
Abstract
This document helps you identify the parts of the product and provides step-by-step instructions for removing and replacing components.
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and
services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR
12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are
licensed to the U.S. Government under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is
not responsible for information outside the Hewlett Packard Enterprise website.
Acknowledgments
Intel®, Itanium®, Optane™, Pentium®, Xeon®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in the U.S. and other countries.
AMD, AMD EPYC™, and combinations thereof are trademarks of Advanced Micro Devices, Inc.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Flooring requirements
A location that has enough room to unpack and assemble your storage system, or room for the pallet and ramp with clearance if
your storage system arrives in a rack
A designated final location for the rack that includes at least 30 inches of clearance from the rear of the rack for access and
serviceability
Power requirements
Two distinct power sources to support reliability through redundancy
Network requirements
Redundant connections: every storage system node should have its own connection to the network
Verify that mDNS support is available if an Apple macOS or Linux system is used for discovery and initialization of the storage
system
Configure ports:
- 80
- 2222 (for HPE Storage Central and Support Secure Tunnel Connection)
- 5353 (for mDNS, if an Apple macOS or Linux system used for initialization)
Configure the firewall and proxy server to include the HPE server host names and IP addresses listed in the latest documentation.
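As a quick pre-check before initialization day, the reachability of the required TCP ports can be probed from the computer you plan to use. The following is a minimal sketch using only Python's standard library; the hostname used in the usage note is a placeholder, and note that port 5353 (mDNS) is UDP multicast, so a TCP probe does not apply to it.

```python
import socket

# TCP ports the guide asks you to open (mDNS on UDP 5353 is not probed here).
REQUIRED_TCP_PORTS = {
    80: "HTTP (initial browser connection)",
    2222: "HPE Storage Central / Support Secure Tunnel",
}

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def report(host: str) -> dict:
    """Probe every required TCP port and return a {port: reachable} map."""
    return {port: check_port(host, port) for port in REQUIRED_TCP_PORTS}
```

Running `report("array.example.local")` (substitute your system's actual address, a placeholder here) returns a per-port map you can review with your network administrator.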
Host requirements
Every host should have redundant connections to adapter cards, identically installed in each node. For example, both nodes in a
node-pair will have identical adapter cards. Each card should have its own cable to the same host or switch fabric if used.
For a Microsoft Windows system, download and install the HPE Discovery Tool from the HPE Software Depot (link available
from the Resource menu in the top right)
For Field Integrated Configurations without pre-assembly, download and install the HPE Cabling Tool from the HPE Software
Center.
If you haven’t already, obtain an HPE Passport account and register with HPE InfoSight. Connecting your system to InfoSight will
provide you with customized support monitoring, alerts, and updates.
Locate and fill in as much as you can of the Initialization Worksheet. It’s included in planning and preparing for the HPE Storage System.
This will include system name, password, IP address, network settings, and system support contacts. It will also ask for your InfoSight
System Group.
Once your HPE Storage System arrives, it must acclimatize before you install it and apply power. Follow the procedure in your
documentation. Wait until the temperature difference between the site and your storage system is less than 36 degrees Fahrenheit and
all condensation has evaporated. Commonly, waiting twenty-four hours is enough, but it can take longer. Don’t unpack the system
until you are sure the temperature difference is within the normal range and you have confirmed that there is no condensation.
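The unpacking criteria above can be expressed as a small check. This is an illustrative sketch only, with hypothetical inputs; the 36 °F threshold corresponds to a 20 °C difference.

```python
def acclimatized(site_temp_f: float, system_temp_f: float,
                 condensation_present: bool) -> bool:
    """Apply the unpacking criteria: the site/system temperature difference
    must be under 36 degrees Fahrenheit and no condensation may remain."""
    delta = abs(site_temp_f - system_temp_f)
    return delta < 36 and not condensation_present
```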
While you are waiting for your storage system to acclimatize, it’s a good time to set up the personal computer that you’ve chosen to
initialize your storage system.
Precautions
To prevent damage to the unit, protect data, and avoid personal injury, review and follow these precautions.
Electrical safety
Follow basic electrical safety precautions to protect yourself from harm and to protect the storage array from damage:
Locate the power switch on the array and the emergency power switch of the room so that you can quickly stop power to the system
if an electrical issue occurs.
When working with high-voltage components or exposed electrical circuits, have another person who is familiar with emergency
power-off locations nearby to switch off the power, if necessary.
Use an approved power cord with a grounded plug for the power supply and plug the power cord into a grounded electrical outlet.
When connecting power to power supplies, install the power supply before connecting the power cable to it. When disconnecting
power to power supplies, unplug the power cable before removing the power supply.
Unplug the power cord before removing a power supply from the chassis.
In the rare event that you must shut down an array for maintenance, such as chassis replacement, power down the system and unplug
the power cords from all power supplies.
System safety
Electrostatic discharge (ESD) can damage system components. To avoid damage to equipment from electrostatic discharge:
Prepare an ESD work surface by placing an antistatic mat on the floor or on a table near the storage system. Attach the ground lead of
the mat to an unpainted surface of the rack.
Always use an ESD wrist strap when touching system components. Attach the grounding strap clip directly to an unpainted surface of
the rack.
Keep each component in its antistatic package until you are ready to install it.
Avoid contact between electronic components and clothing. Even if you are wearing a wrist strap, your clothing may still retain a
charge.
Tools and Materials
Tools used in the installation of a factory-integrated configuration.
Ensure that you have the tools you need before you begin. These should include:
Scissors or snips
Adjustable wrench
Level
The HPE Alletra 9000 Storage System is designed for reliability through hardware redundancy. Briefly, your system includes:
A four-way node enclosure, housing either one node-pair or two node-pairs, with an equal number of Power Cooling
Battery Modules (PCBMs).
Each node-pair includes two CPUs per node, 1700-Watt PCBMs, and up to three adapter cards per node.
Up to two Alletra 2240 drive enclosures per node-pair, with each drive enclosure containing two 1700-Watt Power Cooling
Modules (PCMs) and two Alletra 2240 I/O modules.
Installation configurations
Your new HPE Storage System will arrive in one of three configurations:
Factory integrated, which arrives fully cabled, tested, and installed in an HPE rack.
Field integrated with some pre-assembly. Enclosures arrive with all components installed. Install the enclosures into your rack.
NVMe over Fabric data cabling to NVMe drive enclosures will be part of a future release.
The third configuration, Field integrated, arrives with less pre-assembly. In this case, enclosures arrive with some components
installed, but drives, adapter cards, cables, bezels and hardware kits arrive in separate boxes. Labels on label sheets are included
but not attached to the cables.
Site preparation
Before your system arrives, prepare your site.
Review and confirm that your site meets the requirements listed in the planning and preparing documentation. If the system for
initializing your Storage System runs Microsoft Windows, download and install the HPE Discovery Tool from the HPE Software Depot. If the
system runs Apple macOS or Linux, you won’t need the HPE Discovery Tool. Have your network administrator review the network
requirements and open the appropriate ports.
You will need to perform Storage Central Prework.
This involves:
Connecting to HPE Software Center using your HPE Passport.
Creating a user account, a Company Cloud account, and additional steps detailed in your documentation.
Review and fill in the information requested in your Initialization Worksheet.
Once your HPE Storage System arrives, it must acclimatize before you install it and apply power. While the time depends upon your
location and environment, it is generally good to wait 24 hours and confirm there is no condensation present before proceeding to
power up your storage system.
Have all your installation and configuration documentation handy.
Factory Integrated in an HPE Rack
For a Factory Integrated configuration, your system arrives preconfigured, in a rack, carefully packed and fastened to a pallet.
The depalleting process shown here is for a different HPE Storage System; the procedure for depalleting your system is the same.
Review precautions and safety procedures when depalleting the rack.
NOTE: When positioning the pallet, carefully consider which side the ramp will go on, making sure there is enough
clearance after the rack is rolled down and off of the ramp.
WARNING! The system is tall, very heavy, and requires a minimum of three people to safely roll the system from the pallet down the
included ramp, and onto a level floor. To avoid potential bodily injury or death, do not stand in front of the rack while it is being rolled.
For this type of installation, you will need scissors or snips; 13 mm (1/2”) and 17 mm (11/16”) wrenches (socket wrenches are
recommended); an adjustable wrench, and a level. For some racks, you may also need a 6 mm Allen wrench.
1. Cut and remove the banding that secures the outer cardboard walls and top.
3. Grasp and remove the six white plastic pull-tabs that secure the outer cardboard walls to each other.
4. Lift and remove both of the outer cardboard walls and set them aside.
5. Very carefully, cut open the plastic shrink-wrap around the rack.
8. Remove and set nearby the box that contains the ramp assembly kit and hardware for the rack.
9. Remove the large ESD bag that covers the rack and set aside.
Next comes removing the four “L” brackets that secure the rack in the front and back to the pallet.
10. Starting at the front, remove the two 13 mm (1/2”) bolts that secure the “L” bracket to the rack.
NOTE: In some racks, a guide pin for the front door may need to be temporarily removed using a 6 mm Allen
wrench to completely remove the “L” bracket.
11. Remove the 17 mm (11/16”) bolt that secures the “L” bracket to the pallet.
15. Confirm that the leveling feet are raised to provide sufficient clearance for removing the rack from the pallet. If they are not fully
raised, then use an adjustable wrench to loosen the upper locking nut, and then turn the foot counterclockwise until fully raised.
16. Close and secure the rack front and rear doors.
Now, it’s time to unpack the two ramps and four wooden supports from the ramp assembly kit.
18. Position the ramps with the single arrow-marked ramp on the left and the double arrow-marked ramp on the right. The guide strips
must be on the inside of both installed ramps.
19. Insert each ramp’s metal bracket posts into the mounting holes in the pallet. Then press firmly with your foot to secure the ramps to
the pallet.
CAUTION! To avoid bodily injury or damage to the equipment, install the wooden ramp supports underneath the ramps. They
prevent the ramps from collapsing or causing the rack to tip as it is moved down the ramps.
20. Install support “A” beneath the general area marked “A” on the ramp, and do the same for support “B”.
21. Insert the support beneath the ramp so that the bottom of the support touches the ground and the Velcro on the top of the support
is secured to the Velcro underneath the ramp.
CAUTION! The wooden supports are beveled. Be sure that the angle of the wooden supports matches the angle of the ramp.
CAUTION! When unloading the rack from the pallet, always use at least three people and do not stand in front of the rack.
CAUTION! Ensure that sufficient clearance exists in front of where the rack will be unloaded, to allow room for the rack to gently
roll to a halt after rolling down the ramp.
23. Each person must grasp the rack corners with two people guiding the cabinet down the ramp while a third person slowly pushes the
rack from behind.
NOTE: Based on the weight of the cabinet, it may be necessary to have both people on the sides carefully push the rack
until it is completely on the ramp, then adjust to guiding the rack the rest of the way down the ramp and onto the
floor.
25. Position leveling pads, which came in the rack hardware package, underneath each leveling foot. Lower the leveling foot onto the
pad.
26. Use a level tool to accurately confirm that your storage system is level. Raise the locking nut.
27. Use the adjustable wrench to turn the leveling foot until the caster is slightly off the ground, so that the weight of the rack rests on
the pad instead of the caster.
30. Use a level tool to accurately confirm that your storage system is level.
For all Field Integrated configurations, your system components arrive packed in boxes. You will need to install rail shelves, and then
install the components into a standard EIA-310, 19-inch rack with square mounting holes.
Nearby, prepare an anti-static surface to place components on as you unpack them. Have an ESD grounding strap available.
You’ll need a Torx T-15 and a T-25 screwdriver.
A server lift is recommended for installing your node enclosure.
Decide where in the rack to install your Storage System.
Consider node enclosure heights and positioning for your configuration.
NOTE: 4U rail shelves look similar to the 2U rail shelves but are in fact a bit thicker. The 4U rail shelves have a matte
finish, while the 2U rail shelves have a shiny finish.
Each rail shelf is labeled Left or Right, along with the designation Front or Rear, and includes illustrations showing the safety
screw locations and how the safety clips work.
For each rail:
Align the front end with the chosen starting point. Push the clip and guide pins through the rack holes until the black locking clip
snaps into place.
Expand the rail to align and connect to the rear end of the rack post.
Confirm that each rail shelf is pulled tight and solidly attached.
WARNING! Verify that the rails are securely latched by attempting to push the black locking clips through the rack holes without
compressing them. If they push through, re-install the rail until you are unable to push them through.
For a 4U enclosure, count up four holes from the top of the front end of each shelf and install snap-on M5 cage nuts.
WARNING! Before installing any hardware on the rails, verify that both ends of each rail are secured with the included safety screws
and, if applicable, hold-down brackets. If the safety screws are not securely tightened before an enclosure is inserted, the rails may
disengage, damaging the equipment or causing personal harm.
Install a Torx T-25 safety screw to the front of each rail shelf just below the guide pin.
Install either a Torx T-25 safety screw to the rear of each rail shelf OR a hold-down bracket.
If you do not intend to transport the rack with the system installed, insert and tighten a rail safety screw into the rear rack hole of each
rail as shown. The package may contain extra screws.
The placement of the rear safety screw differs on the right and left rail shelf. Refer to the diagram on the rail shelf.
If the RETMA rails are exactly 29 inches apart, as in an HPE factory-integrated rack, and you intend to transport the rack with the
system installed, you need to install hold-down brackets instead of safety screws to secure the rear of the rails.
Install the hold-down brackets and secure them to the rear of the rail shelf with two Torx T-25 captive screws.
Repeat the process until all the left and right rail shelves are installed for your storage system.
If you are installing a drive enclosure, install a pair of 2U rail shelves.
For a 2-node system, install up to two drive enclosures directly below the node enclosure. For a 4-node system, install one above and
one below, or two above and two below. Each node-pair should have the same number of drive shelves.
With the rail shelves installed and secured, the enclosures are next.
Your node enclosure arrives with controller nodes and Power Cooling Battery Modules (PCBMs) already installed. If ordered with some
pre-assembly, your drives and adapter cards will also already be installed.
WARNING! Enclosures are heavy. HPE recommends using a server lift to install the enclosures into your rack. If a server lift is not
available, four people are required to lift and install the 4U enclosure.
NOTE: To reduce the weight, the PCBMs can be temporarily removed and replaced after the node enclosure has been
installed.
From the front of the rack, lift, align, and slide in the node enclosure onto the appropriate rail shelf.
NOTE: Do not lift the chassis by the PCBM handles or, if the PCBMs are not installed, the empty PCBM bays.
Tighten the three Torx T-25 hold-down screws on each side to secure the enclosure to the rack: two captive screws into the rail shelf and
one through the cage nut added earlier.
At the rear of the enclosure, if hold-down brackets are installed, insert and tighten the two Torx T-15 screws to further secure the
enclosure to the hold-down brackets on each side of the enclosure.
If your installation includes drive enclosures, from the front of the rack, lift, align, and slide each one onto their rail shelves.
At the front, tighten the four captive Torx T-25 thumbscrews, two on each side.
And if hold-down brackets are installed at the rear, insert and tighten the two Torx T-15 screws to further secure the enclosure to the
hold-down brackets on each side of the enclosure.
After all enclosures are installed, install the left and right ears on both sides of each enclosure.
Some configurations will come with drives installed, some will not.
If your drives came pre-installed, install your front enclosure bezel. Toe-in the right end of the enclosure bezel, squeeze the retention
latch on the other end and gently but firmly press the bezel into place.
The following adapter cards are available:
The four-port 10/25 Gigabit iSCSI and Ethernet Converged Network Adapter with an SFP28 interface.
The four-port 10 Gigabit (10GBASE-T) iSCSI and Ethernet Converged Network Adapter with an RJ45 interface.
And coming soon, the two-port CX5 100 Gigabit Ethernet adapter for NVMe over Fabric. This will be available in a future release to
accommodate expansion drive enclosures.
Any of these can be installed in any slot on all Storage Systems, starting with slot 3. All nodes must contain at least one HBA in slot 3.
CAUTION! Attach your ESD wrist strap to an unpainted surface of the rack or enclosure.
2. With its thumbscrew on the right, carefully align and fully insert each adapter card into its slot.
CAUTION! To avoid damage to adapters, press only on the beveled edges of the adapter card during installation. Do not press on the
thumbscrew, the SFPs, or the SFP receptacles. Then tighten the thumbscrew to fully seat the card. When fully installed, the card will be
slightly recessed against the rear faceplate.
All node pairs must have the exact same configuration with the same adapter cards in the same slots.
A drive enclosure requires at least 8 drives in the node enclosure row associated with the node-pair attached to the drive enclosure.
If the drive enclosure is added to an existing and operational system, do not move drives from the node enclosure to the drive enclosure
to maintain a balanced distribution. However, ensure that each drive enclosure has at least two drives, and that it has an even number
of drives.
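The drive-count rules above can be summarized in a small validation sketch. This is a hypothetical helper for illustration, not part of any HPE tooling.

```python
def drive_enclosure_ok(node_row_drives: int, enclosure_drives: int) -> bool:
    """Check the attachment rules described above: at least 8 drives in the
    owning node-pair's node-enclosure row, and at least 2 drives -- an even
    count -- in the drive enclosure itself."""
    return (node_row_drives >= 8
            and enclosure_drives >= 2
            and enclosure_drives % 2 == 0)
```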
CAUTION:
Attach your ESD wrist strap to an unpainted surface of the rack or enclosure.
1. Remove drive blanks for all drives to be installed and retain for future use.
2. For each drive, press the release button to open the drive handle.
3. Align and insert the drive until it begins to engage the drive handle.
4. Press the drive handle to seat the drive into the midplane and lock it into place.
6. After the drives have been installed, fill the rest of the slots with drive blanks.
7. Next, toe-in the right end of the enclosure bezel, squeeze the retention latch on the other end and gently but firmly press the bezel
into place.
Data Cabling with Pre-Labeled Cables
If your Field Integrated configuration arrived with data cables prelabeled, follow this section; otherwise, see the section on data
cabling and labeling.
Before installing any cables, connect your anti-static wrist strap to an unpainted part of the rack or enclosure.
Your drive enclosures should be labeled on the rear inner flange. Your node enclosure will not be labeled but follows a logic as if it were.
E0 and E1 are reserved for the two node-pair positions in the 4U node enclosure, whether it has one or two node-pairs installed. Drive
enclosures begin with E2 and continue down to the last enclosure, then continue with the first drive enclosure above and increase as
you go higher. At this time, in a two-node configuration, up to two drive enclosures are supported directly below the node enclosure. In
a four-node configuration, up to four drive enclosures are supported, two below and two above. A future release of an NVMe over Fabric
adapter card will support additional drive enclosures.
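The numbering scheme above can be sketched as a small helper that assigns enclosure IDs from the physical layout. This is an illustrative sketch, not an HPE tool; positions are counted outward from the node enclosure.

```python
def enclosure_ids(num_below: int, num_above: int) -> dict:
    """Assign enclosure IDs per the scheme described above: E0/E1 are
    reserved for the node-pair positions, drive enclosures below the node
    enclosure are numbered first (starting at E2, moving downward), then
    numbering continues with the enclosures above, moving upward."""
    ids = {"node_pair_0": "E0", "node_pair_1": "E1"}
    next_id = 2
    for i in range(num_below):          # first enclosure directly below is E2
        ids[f"below_{i + 1}"] = f"E{next_id}"
        next_id += 1
    for i in range(num_above):          # then continue with those above
        ids[f"above_{i + 1}"] = f"E{next_id}"
        next_id += 1
    return ids
```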
Follow the labels for your configuration. As an example, connect the cable starting with Node 0, Slot 0, Port DP-1. The cable should click
into place when it is connected.
Plug the other end into Enclosure two, slot zero, port DP-1.
Continue to the next cable until all data cabling has been completed.
Carefully route the cables to the right to provide clearance for good serviceability. Secure them with the supplied cable management
clips or Velcro.
Labeling
For Field Integrated configurations with no pre-assembly you will need to use the included label sheets for labeling the drive enclosures
and data cables.
Your node enclosure will not be labeled but follows a logic as if it were. E0 and E1 are reserved for the two node-pair positions in the 4U
node enclosure, whether it has one or two node-pairs installed. Drive enclosures begin with E2 and continue down to the last enclosure,
then continue with the first drive enclosure above and increase as you go higher. At this time, in a two-node configuration, up to two
drive enclosures are supported directly below the node enclosure. In a four-node configuration, up to four drive enclosures are
supported, two below and two above. A future release of an NVMe over Fabric adapter card will support additional drive enclosures.
Apply the drive enclosure labels if not already applied, to the rear inner flange of the drive enclosure. In our example configuration, we
label the first drive enclosure below the node enclosure as “E2.”
Data Cable Labeling
Data cabling connects the nodes to the I/O modules in redundant paths. The HPE Alletra 9000 Cabling Tool provides the optimal
cabling for your configuration.
Run the HPE Alletra 9000 Cabling Tool. Refer to your documentation for the link to the cabling tool. You can also cache it in your
browser and run it offline.
1. Select the HPE Alletra model.
4. Enter the total number of drive enclosures after installing the new drive enclosure or enclosures. Two-node systems can
support 1 or 2 drive enclosures at this time. Four-node systems can support 2 or 4 drive enclosures. Both node-pairs must connect
to the same number of drive enclosures.
5. Click Done.
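The supported counts described in step 4 can be captured in a small validation sketch. This is a hypothetical helper for illustration only, not part of the Cabling Tool.

```python
def valid_enclosure_count(num_nodes: int, num_enclosures: int) -> bool:
    """Check the supported drive-enclosure counts entered in the tool:
    two-node systems support 1 or 2 enclosures; four-node systems support
    2 or 4 (both node-pairs must connect to the same number)."""
    if num_nodes == 2:
        return num_enclosures in (1, 2)
    if num_nodes == 4:
        return num_enclosures in (2, 4)
    return False
```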
You will be provided with the labeling for both ends of your first cable. Either search through your packet to find the two labels or
create your own labels. In our example, the first label end is Node zero [N0], Slot zero [S0], Port one [DP-1] for a one-meter cable.
The label for the other end reads Enclosure two [E2], Slot zero [S0], Port one [DP-1]. Notice that the cables are color coded to indicate
the controller node path: red for even-numbered nodes and green for odd-numbered nodes.
IMPORTANT! Both ends of a cable should have labels which indicate where both ends of the cable connect. This will allow at-a-glance
information on where a cable connects from and to.
Before installing any cables, connect your anti-static wrist strap to an unpainted part of the rack or enclosure. Now, connect the cable
starting with Node 0, Slot 0, Port DP-1. It will click into place when it is fully seated.
Carefully route the cables to the right to provide clearance for good serviceability. Secure them with the supplied cable management
clips or Velcro.
Plug the other end into Enclosure two, Slot zero, Port DP-1.
On the HPE Alletra 9000 Cabling Tool, click on the forward arrow to see the labeling for the next cable. Label your cable ends, install
the cable, and continue until all data cabling has been completed.
In keeping with the redundant architecture of HPE Storage Systems, all network, host, and even power connections provide redundant paths to
enhance reliability and stability.
Network cabling
First, install a cable organizer to hold the cables to the side, providing better access to the storage system components. This improves
speed and efficiency for future service if and when needed. Choose and connect together the appropriate cable organizer stem and
loom. Ideally, install a stem that can connect directly to a rack post pointing to the rear and a loom for the cables, directing the cables to
the side of the rack AWAY from the components. In our installation the stem connects directly to a power strip. Add the cables to the
loom. Adjust the length as needed.
If necessary, use Velcro ties to wrap curled up cable lengths.
CAUTION! Review the documentation for the recommended minimum diameters for coiled cables.
The cable organizer can also be used for the variety of cables needed for all the supported host protocols. Insert your network Ethernet
cables into the management (MGMT) ports of each node. Attach the Ethernet cables to the cable organizer.
NOTE:
The other end of your network cable connects to a network switch.
Host cabling
Fiber Channel, iSCSI over 10 and 25 Gigabit Ethernet, and 10GBASE-T are supported by your Storage System through installed
adapter cards. HPE recommends installing host cables to the same port in each of the adapter cards in each node in a node-pair. If
more than one connection per node is made to a host, and if more than one adapter card of the same type is installed, distribute the
connections between the adapter cards. This provides I/O load balancing.
For host connectivity through a Fiber Channel Switch, you must set up Fiber Channel fabric zoning to restrict WorldWide Names
(WWNs) seen by the system.
If not already installed, insert your SFPs into your Fiber Channel or iSCSI optical adapters. Be gentle as you insert them into your
adapters. Then remove the SFP dust covers and the Fiber channel dust covers, using care not to touch the ends before gently inserting
your Fiber channel or iSCSI optical cables into both adapters in both nodes of a node pair.
CAUTION! Use care not to bend and cause damage to your optical cables.
For 10GBASE-T connections, plug the copper CAT-6 or CAT-7 cables directly into the open RJ-45 connectors on the adapter.
Remember that Node Zero and Node One, and Node Two and Node Three, must be configured the same.
As a best practice, especially in a complex installation, label each cable with both destinations: node, slot, and port, as well as host or switch and
port.
Add the optical or copper cables to the Cable Organizer Loom. Note: The other end of your host cables connect to the appropriate host
or fabric switch.
CAUTION! Use care not to touch and contaminate the ends of the Fiber Channel cables.
Power should be provided by two or more redundant power sources going to separate Power Distribution Units (PDUs) on each side of
the rack. A common convention is to use black power cables to connect to the Power Cooling Modules (PCMs) and Power Cooling Battery
Modules (PCBMs) on the left side, and gray power cables for the PCMs and PCBMs on the right side.
WARNING! If your configuration includes DC PCBMs, a licensed electrician is required to wire them into your PDUs. This is necessary to
avoid bodily injury or possible death.
If this is the only system in the rack, confirm that all PDUs are set to the off position before connecting the power cords to your PCBMs.
There are no on/off power switches on the PCBMs.
If there are other systems in the rack and you cannot set the PDUs to the off position, be aware that connecting your power cords will
power up the system. You may need to delay this step until you are ready to power up your system.
Connect the black power cords from left side to the PCBMs on the left. The power cables use an automatic locking mechanism and have
dark red release buttons on the sides of the plug. Insert them fully and gently pull to confirm that the power cords are connected and
locked in place.
Connect the gray power cords to the right side’s PCBMs. Insert them fully and gently pull to confirm that the power cords are
connected and locked in place.
Your storage system is now cabled and ready to be powered up.
Power Up and Initialization
Prior to Power up and Initialization, you must enable cloud management of your storage system. Refer to your documentation for the
latest link for HPE Alletra 9000: Cloud Enablement Quick Start and follow the steps to register an account, activate services and enable
cloud management.
Initializing your HPE Storage System will check your system’s hardware readiness, power redundancies, optimal data cabling, and
health of your system components. You will be prompted along the way for information about your Storage System’s name, serial
number, IP address, Proxy server, HPE InfoSight registration, and support contact addresses. With this information, initialization will
then configure network connections and install the User Interface (UI). The information that you need to provide is detailed in your
Initialization Worksheet. So, have it handy. Your Storage System’s serial number can be found on the pullout-tab on the front left side of
your node enclosure.
You can choose to initialize your HPE Storage System through a computer running Microsoft Windows, Apple macOS, or Linux. For
Microsoft Windows, download and install the HPE Discovery Tool from HPE Software Depot. For Apple macOS or Linux you won’t need
to use the HPE Discovery Tool. Whichever system you choose, it must be connected to the same network subnet as your Storage
System.
All nodes should be connected via their management (MGMT) ports to a network with an active DNS.
HPE and authorized HPE Partners can connect to the Service ETH port to initialize the Storage System and later access the UI without
having to run over a customer’s network.
Let’s begin. Power on both sets of PDUs, which will power on the Storage System.
Allow at least 10 minutes for the Storage System to boot, go through its self-test routines, and become discoverable
over the network. Allow 15 minutes for this process on a network without DHCP running. Verify that the LED status is solid green on
the Power Cooling Battery Modules (PCBMs), drives, and node enclosure.
If you are using the HPE Discovery Tool, launch it and, when prompted, enter your storage system’s serial number and click the search button. Remember, the serial number is located on the pull-out tab on the front left side of your node enclosure, specifically in the middle rear of the tab. The tool will locate the storage system on the network and provide you with a temporary IP address. Click the Launch button to open the default browser, or open a browser and enter the IP address.
If you are using an Apple macOS or Linux system, port 5353 must be open for multicast DNS (mDNS). Then open a browser and browse to
[Link]
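Before opening the browser, you can confirm that the storage system is reachable on the network. The following is a minimal sketch, not HPE tooling; it assumes you already have the temporary IP address (for example, from the HPE Discovery Tool or your Initialization Worksheet) and that the UI is served over HTTPS on port 443.

```python
import socket

def ui_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    host: the temporary IP address of the storage system (an assumption:
    obtain the actual address from the Discovery Tool or your worksheet).
    Port 443 is an assumption based on the UI being served over HTTPS.
    """
    try:
        # create_connection handles name resolution and the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable hosts
        return False
```

If this returns False, re-check cabling, the MGMT network connection, and whether the system has finished booting before troubleshooting further.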
You will likely receive a warning from your web browser about security certificates. Click Advanced, followed by “Accept the Risk and Continue”. Different browsers have their own procedures; your installation documentation details the appropriate procedure for your browser.
Once connected, you will be prompted to read through and accept the HPE End User License Agreement (EULA). When prompted, click the Let’s get started button. A system hardware check begins.
If a failure is found with a component or with cabling you will be prompted to fix the issue.
Click Continue.
When prompted, provide:
System Name
Username and password
NOTE: This will be the default admin account, but you will be able to add others later.
Network configuration, including IP address, optional DNS address, and proxy settings.
The system will apply the network configuration, create a shared volume, perform several other tasks, and run a health check, which can take several minutes. Be patient. If fewer than 24 drives are installed for each controller node-pair, initialization may take longer than 20 minutes. When prompted, click Continue. The storage system will restart with the newly configured IP address, opening a new tab in your browser with the new address.
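While the storage system restarts on its new address, you can poll until it becomes reachable rather than repeatedly refreshing the browser. A hedged sketch, assuming the newly configured IP address is known and the UI answers on HTTPS port 443:

```python
import socket
import time

def wait_for_ui(host: str, port: int = 443,
                interval: float = 10.0, attempts: int = 60) -> bool:
    """Poll host:port until a TCP connection succeeds or attempts run out.

    host is the newly configured IP address (an assumption: use the
    address you entered during initialization). With the defaults this
    waits up to roughly 10 minutes between the first and last attempt.
    """
    for attempt in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=5.0):
                return True
        except OSError:
            # Not up yet; wait before the next attempt
            if attempt < attempts - 1:
                time.sleep(interval)
    return False
```

A simple fixed interval keeps the sketch readable; exponential backoff is unnecessary here because the restart window is bounded.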
You are then prompted for:
HPE InfoSight registration
When a hardware component has failed and needs replacement, you are alerted and provided with the part number and a streamlined ordering process.
When a software patch or update is available, it can be set for automated download, ready for installation.
Enter your HPE Passport account information. If you haven’t already, sign up on the InfoSight login page. There’s no charge.
As a new user to InfoSight, you will be prompted to set up a system group and confirm your email address. As a best practice, you
should make sure that your system group has at least two admin users.
If your company already has one or several System Groups then you can use the Passport account to look up the System Group and
register the HPE Storage system during the initial setup.
From now on, you can enter the username and password that you assigned during initialization.
Adding hosts and volumes for your new Storage System is done from the User Interface (UI).
Log in to the User Interface.
You will have a choice to view two Tours, an Overview and a Storage Tour. Click on the Storage Tour.
The Tour will walk you through the process of adding a Host Set, through which Hosts are managed. From a Host Set you can configure,
export, or remove a given Host.
Electrical safety
Follow basic electrical safety precautions to protect yourself from harm and to protect the storage array from damage:
Locate the power switch on the array and the emergency power switch of the room so that you can quickly stop power to
the system if an electrical issue occurs.
When working with high-voltage components or exposed electrical circuits, have another person who is familiar with
emergency power-off locations nearby to switch off the power, if necessary.
Use an approved power cord with a grounded plug for the power supply and plug the power cord into a grounded
electrical outlet.
When connecting power to power supplies, install the power supply before connecting the power cable to it. When
disconnecting power to power supplies, unplug the power cable before removing the power supply.
Unplug the power cord before removing a power supply from the chassis.
In the rare event that you must shut down an array for maintenance, such as chassis replacement, power down the
system and unplug the power cords from all power supplies.
System safety
Electrostatic discharge (ESD) can damage system components. To avoid damage to equipment from electrostatic
discharge:
Prepare an ESD work surface by placing an antistatic mat on the floor or on a table near the storage system. Attach the
ground lead of the mat to an unpainted surface of the rack.
Always use an ESD wrist strap when touching system components. Attach the grounding strap clip directly to an
unpainted surface of the rack.
Keep each component in its antistatic package until you are ready to install it.
Avoid contact between electronic components and clothing. Even if you are wearing a wrist strap, your clothing may still
retain a charge.
Tools and Materials
Ensure that you have the tools you need before you begin. These should include:
A P1 Phillips-head screwdriver to remove the node boot drive
Component illustrations (figures not reproduced here): Front View, Physical Disk, Rear View, Power Cooling Battery Module (PCBM), Adapter Card, SFP, Controller Node, Power Cooling Module (PCM), I/O Module, Data Cable, Internal View, Node Boot Drive, Node DIMMs.
User Interface Tour
The User Interface (UI) allows you to monitor, maintain, update, upgrade and repair your HPE Alletra 9000 Storage System. From the
UI, you can also update your software.
The UI resides internally on your Storage System. To log in, point your browser to the IP address of your Storage System and when
prompted, enter the username and password. Click on the <Log In> button to continue.
There are two Tours that are always available from the UI. The Introduction Tour walks you through the interface and all the functions.
The Storage Tour walks you through the process of adding hosts and volumes and managing both. The Tours are the place to learn
about the UI.
The UI Dashboard is the default first page. It provides at-a-glance monitoring of your Storage System’s Health, Capacity, and
Performance. Upon initial login, no performance information is displayed, because you haven’t created any storage volumes.
In addition, notice that you’re prompted along the top to provide information to:
Optimize your support experience
Configure your email server for notifications from your Storage System
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two: replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: Adapter Card repair requires shutting down the node that it’s installed in. But the controller node itself does not
need to be removed, nor does the rest of your HPE Storage System need to be shut down.
CAUTION! Shutting down a node reduces system redundancy, so it is recommended to schedule this procedure during lower activity
times.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering.
Unpack the replacement adapter card onto an anti-static surface close by, ready to install, but leaving it in its static dissipative bag.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port. Be sure to attach to a node
other than the one being serviced.
From the UI click on:
1. System
Select the Adapter card: as an example, the FC adapter card in slot 4 (the actual node and adapter card chosen are specific to your environment)
4. Scroll down.
You can choose to watch the adapter card repair video, which shows the same steps you’ll see in part 2 of this 3-part procedure. You can also choose to read and print out the adapter card repair instructions.
5. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
7. Read the Warning to let you know that the node will need to be shut down for the removal and replacement of the Adapter card.
Notice the message, “No single path hosts found”. This tells us that no host is connected to this node and only this node. If that were the case, data access would be interrupted when the node was shut down.
9. Click Continue.
The System Detail page appears and displays the repair progress. A yellow alert box confirms that the system is in maintenance mode,
still logging alerts, but not sending out notifications. While waiting for the node to shut down you can click on View Service details in the
lower right of the gray banner. It displays instructions, location information and links to the video and written repair information. When
the node shuts down, the task progress bar pauses, displays, “Service the adapter card now – click on restart node once service is
complete” and waits for your input to re-start the node. This is your cue to physically remove and replace the Adapter Card.
Hardware Procedure
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
2. Confirm that the UID Locate LED is solid blue on the Adapter Card to be replaced. This indicates that it is safe to remove.
3. Confirm that all the cables attached to the Adapter Card are labeled. For best practice, especially in a complex installation, label with both destinations: node, slot, and port, as well as host or switch and port.
NOTE: For an Adapter Card that contains SFPs, remove each SFP with the fiber cables attached by pulling on the
long release tab underneath the SFP.
5. Fully loosen the single captive thumb screw that secures the Adapter Card to the node. If too tight, use a Torx T10 screwdriver to
loosen the thumb screw.
6. While supporting the Adapter Card from below, pull the captive thumb screw to unseat the Adapter Card from its backplane in the node, carefully slide the Adapter Card out of its slot in the node, and place it on the antistatic surface.
7. Compare the replacement Adapter Card with the failed Adapter Card to confirm they match.
8. Carefully align and slide the replacement Adapter Card into its node slot and press firmly to fully seat it.
CAUTION! Always press on the edges of the Adapter card bezel to seat the card. Do not press on the thumbscrew, the SFPs, or the
SFP receptacles. Notice that the card bezel is slightly recessed into its slot.
NOTE: You may need to press very firmly to fully seat some adapter cards.
10. Replace all the cables, and if applicable, SFPs, that were removed earlier, making sure that each cable is in the same location and
fully connected.
CAUTION! Use care not to bend and cause damage to your optical cables.
Software Post-Procedure
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two: replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: Physical Drive repair can be done without shutting down your entire HPE Storage System.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering.
From the UI, notice the yellow alert icon next to System. Click on it, then click on Drives. Notice the drive is listed in a “Failed” state.
A drive in a degraded state is not ready to be removed. A failing drive will present as degraded while it attempts to transfer data off
itself. When that transfer is complete it will then present as failed. Only remove a drive in a failed state.
CAUTION! Once the failed drive is removed, you will have up to 10 minutes to install the replacement drive to avoid overheating your
Storage System. If you need more than 10 minutes, install a drive blank during this period.
With that in mind, unpack the replacement drive onto an anti-static surface close by, ready to install.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port.
From the UI click on:
1. System
NOTE: The UI pre-populates the dropdown with the failed enclosure and drive as the top option, noted with the
yellow triangle.
Select drive number: as an example, 1 (the actual enclosure and drive number chosen are specific to your environment)
4. Scroll down.
The drive’s location, current info, and status are displayed. You can choose to watch the Physical drive repair video, which shows the same steps you’ll see in part 2 of this 3-part procedure. You can also choose to read and print out the Physical drive repair instructions.
6. Click Continue.
7. If data is still being moved off the drive, status is displayed during the process. In our example, it has already been moved.
8. Once the data has been moved off and the drive is safe to replace, the UI will display, “Please replace drive at location x:y:z with a new drive.” This is your cue to remove and replace the Physical drive.
Hardware Procedure
1. Remove the bezel by squeezing the release on the left side and pivoting the bezel off the front of the enclosure.
2. Locate the drive with the blue UID and Amber Alert LEDs lit.
CAUTION! Do not remove the Drive until the blue UID location LED on the Drive turns solid, indicating that it is safe to remove the
Drive.
CAUTION! To avoid overheating your storage system, you have ten minutes to replace the Drive once the failed Drive has been
removed. If 10 minutes isn’t enough time, install a drive blank until you are ready to install the replacement drive.
3. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
4. Press the release tab. The Drive handle will pop out.
7. Carefully align and slide the Drive into its slot until it is fully seated. Close the Drive handle until it locks into position.
8. Confirm that the Drive is fully seated with its front flush with the other Drives.
The green status LED should blink or be on solid, as it gets integrated into the storage system.
NOTE: If data was removed earlier from the failing drive, it will be moved back onto the replacement drive.
Once normal drive activity starts, the green status LED will blink again.
9. Replace the bezel by toeing in the right end and snapping in the left side of the bezel to secure it to the front of the enclosure.
Software Post-Procedure
Returning to the UI, notice the status of the UI Task bar. Throughout this process, the System Task progress bar shows the percentage
complete. When done, a green banner alert appears to let you know that the repair was successful and provides you with an
opportunity to review the details of the completed task.
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two: replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: Power Cooling Battery Module (PCBM) repair can be done without shutting down your entire HPE Storage System.
CAUTION! Removing a PCBM reduces power and cooling redundancy, so it is recommended to schedule this procedure during lower
activity times.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering.
WARNING! To avoid severe injury or possible death do not attempt replacement of a DC PCBM without the assistance of a qualified
person supplied by the customer. Refer to the documentation for replacement procedures for a DC PCBM.
CAUTION! Once the failed PCBM is removed, you will have up to 5 minutes to install the replacement PCBM to avoid overheating your
Storage System.
With that in mind, unpack the replacement Power Cooling Battery Module (PCBM) onto an anti-static surface close by, ready to install.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port.
From the UI click on:
1. System
Select which PCBM: as an example, Power Supply 1 (the actual Node and PCBM chosen are specific to your environment)
4. Scroll down.
You can choose to watch the PCBM repair video, which shows the same steps you’ll see in part 2 of this 3-part procedure. You can also choose to read and print out the PCBM repair instructions.
5. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
8. Click Continue.
The System Detail page appears and displays the repair progress. A yellow alert box confirms that the system is in maintenance mode,
still logging alerts, but not sending out notifications. You can click on View Service details in the lower right of the gray banner. It
displays instructions, location information and links to the video and written repair information. Notice that the repair progress bar
says, “Service the PCBM now.” This is your cue to physically remove the Power Cooling Battery Module.
Hardware Procedure
CAUTION! To avoid overheating your storage system, you have five minutes to replace the Power Cooling Battery Module, PCBM, once
the failed PCBM has been removed.
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
2. Confirm that the UID Locate LED is solid blue on the PCBM to be replaced. This indicates that it is safe to remove.
3. Slide back the red tabs on the side of the power cord connector to unlock and disconnect the power cord from the PCBM.
NOTE: Even after the power cord has been disconnected, the enclosure may still provide enough power to light the PCBM LEDs. This is not a problem.
4. Using your thumb and forefinger, squeeze the release tab and handle to release the PCBM from its enclosure bay.
5. While supporting the failed PCBM from underneath, carefully slide it out.
6. While supporting the replacement PCBM from underneath, oriented with the release tab at the bottom, carefully align and slide the
PCBM into its enclosure bay.
7. Make sure that the PCBM is fully seated, with the release tab engaged.
8. Replace the power cord and give it a small tug to confirm that it is locked into place.
Software Post-Procedure
Returning to the UI, notice the status of the UI Task bar. In cases where the UI Task bar does not progress after the component has
been replaced, click on “Complete service” to complete the repair and run CheckHealth. Confirm that the green Status LED on the
Power Cooling Battery Module is lit solid green.
Throughout this process, the System Task progress bar shows the percentage complete. When done, a green banner alert appears to let
you know that the repair was successful and provides you with an opportunity to review the details of the completed task.
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two: replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: Controller node replacement requires shutting down the node only and not your entire HPE Storage System.
CAUTION! Shutting down a node reduces system redundancy, so it is recommended to schedule this procedure during lower activity
times.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering.
CAUTION! Once the failed controller node is removed, you will have up to 30 minutes to install the replacement controller node to avoid
overheating of your Storage System.
With that in mind, unpack the replacement controller node onto an anti-static surface close by, ready to install.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port. Be sure to attach to a node
other than the one being serviced.
From the UI click on:
1. System
Select which node: as an example, 3 (the actual node chosen is specific to your environment)
4. Scroll down. You can choose to watch the node repair video, which shows the same steps you’ll see in part 2 of this 3-part procedure. You can also choose to read and print out the node repair instructions.
5. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
7. Read the Warning to let you know that the node will need to be shut down for the removal and replacement.
Notice the message, “No single path hosts found”. This tells us that no host is connected to this node and only this node. If that were the case, data access would be interrupted when the node was shut down.
9. Click Continue.
The System Detail page appears and displays the repair progress. A yellow alert box confirms that the system is in maintenance mode,
still logging alerts, but not sending out notifications.
While waiting for the node to shut down you can click on View Service details in the lower right of the gray banner. It displays
instructions, location information and links to the video and written repair information. When the node has shut down, you are
instructed to “Service the node now”. This is your cue to physically remove the Controller Node.
Hardware Procedure
CAUTION! To avoid overheating your storage system, you have 30 minutes to replace the Controller Node once the failed node has
been removed from its bay.
NOTE: Your node model may look different than the one shown here but the procedures are the same.
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
2. Confirm that the UID Locate LED is solid blue on the Node to be removed. This indicates that it is safe to remove.
3. Confirm that all the cables attached to the node are labeled, noting their locations. For best practice, especially in a complex installation, label with both destinations: node, slot, and port, as well as host or switch and port.
4. Remove the Ethernet cable from the MGMT port by pressing the release tab and disconnecting the cable.
5. Remove each SFP with the fiber cables attached by gently pulling out the long release tab underneath the SFP.
6. Remove any additional cables noting their location for later replacement.
NOTE: It is not necessary to remove any SFPs that do not have cables attached.
7. Fully loosen both captive thumb screws that secure the Controller node handles to its enclosure bay. If too tight, use a Torx T15 screwdriver to loosen.
8. Open both controller node handles simultaneously to disengage the node from the enclosure backplane and partially slide the Node out of its bay.
9. While making sure that all cables are out of the way, support the node from underneath and carefully slide it out of its bay in the
node enclosure.
10. Place the removed node onto the flat anti-static surface next to the replacement node.
11. Carefully transfer all Adapter Cards and slot filler blanks, if any, from the failed node to the replacement node, making sure that the
adapter cards are transferred to the same slot they occupied in the failed node.
12. For each adapter card, fully loosen the captive thumbscrew and carefully slide the card out of the failed node. If too tight, use a Torx T10 screwdriver to loosen.
13. Carefully align and slide the adapter card into the appropriate replacement node slot and press very firmly to fully seat it into the
node.
CAUTION! Always press on the edges of the Adapter card bezel to seat the card. Do not press on the thumbscrew, the SFPs, or the
SFP receptacles. Notice that the card bezel is slightly recessed into its slot.
NOTE: You may need to press very firmly to fully seat some adapter cards.
CAUTION! Clear the area in front of the empty bay of any cables that might get snagged or damaged during re-installation of the
controller node.
15. With its release levers in their open position, carefully align and slide the node into its bay until its release levers begin to engage.
16. Close the release levers to fully seat the node into the enclosure backplane.
17. Fully tighten both thumbscrews to secure the node to the enclosure.
18. Replace all the cables that were removed earlier, making sure that each cable is in the same location and fully connected.
CAUTION! Use care not to bend and cause damage to your optical cables.
Software Post-Procedure
As soon as the node is re-inserted, it restarts automatically, and node rescue is run. Click on activities and tasks to monitor the Node Rescue process. Once the node has re-integrated into the storage system, maintenance mode ends, and CheckHealth is run to confirm that the replaced Node is healthy. The process can take up to 20 minutes. Confirm that the green Status LEDs on the controller node flash in synchronization with the other controller node or nodes, indicating that it has joined the cluster.
Throughout this process, the System Task progress bar shows the percentage complete. When done, a green banner alert appears to let
you know that the repair was successful and provides you with an opportunity to review the details of the completed task.
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two: replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: Node DIMM repair requires shutting down and removing the node in which it’s installed. There is no need to shut
down your entire HPE Storage System.
CAUTION! Shutting down a node reduces system redundancy, so it is recommended to schedule this procedure during lower activity
times.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering.
CAUTION! Once the controller node is removed, you will have up to 30 minutes to re-install the controller node to avoid overheating of
your Storage System.
With that in mind, unpack the replacement Node DIMM onto an anti-static surface close by, ready to install, but leaving it in its static
dissipative bag.
HPE and other authorized service personnel can bypass the customer network by accessing the UI through the Service Eth port. Be sure to attach to a node other than the one being serviced.
From the UI click on:
1. System
Select node: as an example, 3 (the actual node chosen is specific to your environment)
4. Scroll down.
You can choose to watch the node DIMM repair video, which shows the same steps you’ll see in part 2 of this 3-part procedure. You can also choose to read and print out the node DIMM repair instructions.
5. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
6. Check the location of the Controller Node to be removed and the location of the Node DIMM to be replaced.
7. Read the Warning to let you know that the node will need to be shut down for the removal and replacement. Notice the message, “No single path hosts found”. This tells us that no host is connected to this node and only this node. If that were the case, data access would be interrupted when the node was shut down.
9. Click Continue.
The System Detail page appears and displays the repair progress. A yellow alert box confirms that the system is in maintenance mode,
still logging alerts, but not sending out notifications. While waiting for the node to shut down you can click on View Service details in the
lower right of the gray banner. It displays instructions, location information and links to the video and written repair information. When
the node has shut down, you are instructed to “Service the node now.” This is your cue to physically remove the Controller Node.
Hardware Procedure
CAUTION! To avoid overheating your storage system, you have 30 minutes to reinstall the Controller Node once it has been removed from its bay.
NOTE: Your node model may look different than the one shown here but the procedures are the same.
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
2. Confirm that the UID Locate LED is solid blue on the Node to be removed. This indicates that it is safe to remove.
3. Confirm that all the cables attached to the node are labeled, noting their locations. For best practice, especially in a complex installation, label with both destinations: node, slot, and port, as well as host or switch and port.
4. Remove the Ethernet cable from the MGMT port by pressing the release tab and disconnecting the cable.
5. Remove each SFP with the fiber cables attached by gently pulling out the long release tab underneath the SFP.
6. Remove any additional cables noting their location for later replacement.
7. Fully loosen both captive thumb screws that secure the Controller node handles to its enclosure bay. If too tight, use a Torx T15 screwdriver to loosen.
8. Open both controller node handles simultaneously to disengage the node from the enclosure backplane and partially slide the Node out of its bay.
9. While making sure that all cables are out of the way, support the node from underneath and carefully slide it out of its bay in the
node enclosure.
10. Place the removed node onto the flat anti-static surface nearby.
11. Remove the top cover by pressing the black plastic release tabs on both sides of the top cover near the rear, sliding it back until it stops, and lifting the top cover up and off the node.
NOTE: On the inside cover of the node you will find a map that displays the locations of the Boot drives, DIMMs and
Node Coin Battery.
12. Locate the DIMM to be replaced, using the location specified in the alert email and the top cover map.
NOTE: If necessary, rotate up a small air baffle to access the release tabs that help hold some of the DIMMs in place.
WARNING! To avoid injury, use care not to touch the nearby heat sinks which may be hot.
13. Carefully push down on the release tabs on either side of the DIMM to release it from its slot.
14. Holding the DIMM by its edges, lift it out and place it into a static dissipative bag.
15. Holding the replacement DIMM by its edges, carefully remove it from its static dissipative bag.
16. Align the notch in the DIMM contacts with the key in the DIMM slot.
17. Gently, but firmly insert the DIMM until its release tabs lock it into place.
18. Confirm that the DIMM is fully seated and its release tabs are firmly in place.
19. If necessary, rotate the small air baffle down and back into place.
20. Replace the Top Cover by aligning the tabs in the front of the cover with the notches in the front of the chassis, then lower the
cover and slide it forward to secure it.
CAUTION! Clear the area in front of the empty bay of any cables that might get snagged or damaged during re-installation of the
controller node.
21. With its release levers in their open position, carefully align and slide the node into its bay until its release levers begin to engage.
22. Close the release levers to fully seat the node into the enclosure backplane.
23. Fully tighten both thumbscrews to secure the node to the enclosure.
24. Replace all the cables that were removed earlier, making sure that each cable is in the same location and fully connected.
CAUTION! Use care not to bend and cause damage to your optical cables.
Software Post-Procedure
As soon as the node is re-inserted, it restarts automatically. Once it has re-integrated into the storage system, maintenance mode ends,
and checkhealth is run to confirm that the replaced Node is healthy. The process takes approximately 10 minutes. Confirm that the
green Status LEDs on the controller node flash in synchronization with the other controller node or nodes, indicating that it has joined
the cluster.
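The health verification that the UI automates can also be checked manually from the storage system CLI. The sketch below is illustrative only and assumes the 3PAR-heritage Alletra OS CLI; confirm the exact command names and options against the documentation for your OS release.

```shell
# Illustrative only -- run from an SSH session to a node other than the one
# just serviced. Command names assume the 3PAR-heritage CLI; verify against
# your Alletra OS release before use.
checkhealth   # overall system health check, the same check the UI runs
shownode      # confirm the reinserted node is listed and reports an OK state
```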
Throughout this process, the System Task progress bar shows the percentage complete. When done, a green banner alert appears to let
you know that the repair was successful and provides you with an opportunity to review the details of the completed task.
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two:
replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: Node Coin Battery repair requires shutting down and removing the node in which it’s installed. There is no need
to shut down your entire HPE Storage System.
CAUTION! Shutting down a node reduces system redundancy, so it is recommended to schedule this procedure during lower activity
times.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering.
CAUTION! Once the controller node is removed, you will have up to 30 minutes to re-install the controller node to avoid overheating
your Storage System.
With that in mind, unpack the replacement Node Coin Battery onto an anti-static surface close by, ready to install.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port. Be sure to attach to a node
other than the one being serviced.
From the UI click on:
1. System
Select node: As an example, 3 (the actual node chosen is specific to your environment)
4. Scroll down. You can select to watch the node coin battery repair video, which is the same as what you’ll see in part 2 of this 3-part
procedure. You can also select to read and print out the node coin battery repair instructions.
5. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
6. Check the location of the Controller Node which contains the node coin battery to be replaced.
7. Read the Warning to let you know that the node will need to be shut down for the removal and replacement.
8. Notice the message, “No single path hosts found”. This confirms that no host is connected exclusively to this node. If a host were
connected only to this node, its data access would be interrupted when the node was shut down.
Hardware Procedure
CAUTION! To avoid overheating your storage system, you have 30 minutes to re-install the Controller Node once it has been removed
from its bay.
NOTE: Your node model may look different than the one shown here but the procedures are the same.
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
2. Confirm that the UID Locate LED is solid blue on the Node to be removed. This indicates that it is safe to remove.
3. Confirm that all the cables attached to the node are labeled, noting their locations. For best practices, especially in a complex
installation, label with both destinations; node, slot, and port, as well as host or switch and port.
4. Remove the Ethernet cable from the MGMT port by pressing the release tab and disconnecting the cable.
5. Remove each SFP with the fiber cables attached by gently pulling out the long release tab underneath the SFP.
6. Remove any additional cables noting their location for later replacement.
7. Fully loosen both captive thumbscrews that secure the controller node handles to its enclosure bay. If too tight, use a Torx T-15
screwdriver to loosen them.
8. Open both controller node handles simultaneously to disengage the node from the enclosure backplane and partially slide the node
out of its bay.
9. While making sure that all cables are out of the way, support the node from underneath and carefully slide it out of its bay in the
node enclosure.
10. Place the removed node onto the flat anti-static surface nearby.
11. Remove the top cover by pressing the black plastic release tabs on both sides of the top cover near the rear, slide it back until it
stops, and lift the top cover up and off the node.
NOTE: On the inside cover of the node you will find a map that displays the locations of the Boot drives, DIMMs and
Node Coin Battery.
12. Pinch the top of the battery, pivot it forward, and pull it out. With its positive (+) side facing away from the bracket, insert the
replacement battery at an angle, and then pivot it up perpendicular to the board until it snaps into place.
13. Replace the Top Cover by aligning the tabs in the front of the cover with the notches in the front of the chassis, then lower the
cover and slide it forward to secure it.
CAUTION! Clear the area in front of the empty bay of any cables that might get snagged or damaged during re-installation of the
controller node.
14. With its release levers in their open position, carefully align and slide the node into its bay until its release levers begin to engage.
15. Close the release levers to fully seat the node into the enclosure backplane.
16. Fully tighten both thumbscrews to secure the node to the enclosure.
17. Replace all the cables that were removed earlier, making sure that each cable is in the same location and fully connected.
CAUTION! Use care not to bend and cause damage to your optical cables.
Software Post-Procedure
As soon as the node is re-inserted, it restarts automatically. Once it has re-integrated into the storage system, maintenance mode ends,
and checkhealth is run to confirm that the replaced Node is healthy. The process takes approximately 10 minutes. Confirm that the
green Status LEDs on the controller node flash in synchronization with the other controller node or nodes, indicating that it has joined
the cluster.
Throughout this process, the System Task progress bar shows the percentage complete. When done, a green banner alert appears to let
you know that the repair was successful and provides you with an opportunity to review the details of the completed task.
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two:
replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: Node Boot Drive repair requires shutting down and removing the node in which it’s installed. There is no need to
shut down your entire HPE Storage System.
CAUTION! Shutting down a node reduces system redundancy, so it is recommended to schedule this procedure during lower activity
times.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering.
CAUTION! Once the controller node is removed, you will have up to 30 minutes to re-install the controller node to avoid overheating
your Storage System.
With that in mind, unpack the replacement Node Boot Drive onto an anti-static surface close by, ready to install.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port. Be sure to attach to a node
other than the one being serviced.
From the UI click on:
1. System
Select which Node boot drive: As an example, 1 (the actual node and node boot drive chosen are specific to your environment)
4. Scroll down. You can select to watch the node boot drive repair video, which is the same as what you’ll see in part 2 of this 3-part
procedure. You can also select to read and print out the node boot drive repair instructions.
5. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
6. Check the location of the Controller Node which contains the Node Boot Drive to be replaced.
7. Read the Warning to let you know that the node will need to be shut down for the removal and replacement.
8. Notice the message, “No single path hosts found”. This confirms that no host is connected exclusively to this node. If a host were
connected only to this node, its data access would be interrupted when the node was shut down.
Hardware Procedure
CAUTION! To avoid overheating your storage system, you have 30 minutes to re-install the Controller Node once it has been removed
from its bay.
NOTE: Your node model may look different than the one shown here but the procedures are the same.
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
2. Confirm that the UID Locate LED is solid blue on the Node to be removed. This indicates that it is safe to remove.
3. Confirm that all the cables attached to the node are labeled, noting their locations. For best practices, especially in a complex
installation, label with both destinations; node, slot, and port, as well as host or switch and port.
4. Remove the Ethernet cable from the MGMT port by pressing the release tab and disconnecting the cable.
5. Remove each SFP with the fiber cables attached by gently pulling out the long release tab underneath the SFP.
6. Remove any additional cables noting their location for later replacement.
7. Fully loosen both captive thumbscrews that secure the controller node handles to its enclosure bay. If too tight, use a Torx T-15
screwdriver to loosen them.
8. Open both controller node handles simultaneously to disengage the node from the enclosure backplane and partially slide the node
out of its bay.
9. While making sure that all cables are out of the way, support the node from underneath and carefully slide it out of its bay in the
node enclosure.
10. Place the removed node onto the flat anti-static surface nearby.
11. Remove the top cover by pressing the black plastic release tabs on both sides of the top cover near the rear, slide it back until it
stops, and lift the top cover up and off the node.
NOTE: On the inside cover of the node you will find a map that displays the locations of the Boot drives, DIMMs and
Node Coin Battery.
12. An installed adapter card may need to be removed to access the Node Boot Drive. There are two Boot Drives in every node. The
boot drive to be replaced may be in the Zero or One position.
13. Remove any adapter card blocking access to the Node Boot Drive by fully loosening the single captive thumbscrew that secures the
adapter card, using a Torx T-10 screwdriver if it is too tight, and then carefully pulling on the captive thumbscrew to unseat and
remove the adapter card.
14. Loosen and carefully remove the P1 Phillips head screw that secures the M.2 NVMe Boot Drive to the node board. Use care and, if
available, a magnetized screwdriver, to avoid dropping the screw.
15. Lift the Boot Drive just enough to be able to grasp it, then pull it out of its M.2 slot and place it into a static dissipative bag.
NOTE: If a longer boot drive is to be installed, the shorter standoff should be removed to avoid an electrical short.
16. Carefully remove the replacement Boot Drive from its static dissipative bag.
17. Align the notch in the board with the key in the M.2 slot.
18. Gently but firmly insert the Boot Drive into the M.2 slot and lower it over the securing hole, which should align. If it doesn’t align,
remove and reinsert the Boot Drive.
19. Replace the P1 Phillips head screw to secure the replacement Node Boot Drive.
20. Carefully align and slide in the Adapter Card and press firmly to fully seat it.
CAUTION! Always press on the edges of the Adapter card bezel to seat the card. Do not press on the thumbscrew, the SFPs, or
the SFP receptacles. Notice that the card bezel is slightly recessed into its bay.
NOTE: You may need to press very firmly to fully seat some adapter cards.
21. Tighten the captive thumb screw to secure the adapter card.
CAUTION! Clear the area in front of the empty bay of any cables that might get snagged or damaged during re-installation of the
controller node.
22. With its release levers in their open position, carefully align and slide the node into its bay until its release levers begin to engage.
23. Close the release levers to fully seat the node into the enclosure backplane.
24. Fully tighten both thumbscrews to secure the node to the enclosure.
25. Replace all the cables that were removed earlier, making sure that each cable is in the same location and fully connected.
CAUTION! Use care not to bend and cause damage to your optical cables.
Software Post-Procedure
As soon as the node is re-inserted, it restarts automatically. Once it has re-integrated into the storage system, maintenance mode ends,
and checkhealth is run to confirm that the replaced Node is healthy. The process takes approximately 10 minutes. Confirm that the
green Status LEDs on the controller node flash in synchronization with the other controller node or nodes, indicating that it has joined
the cluster.
Throughout this process, the System Task progress bar shows the percentage complete. When done, a green banner alert appears to let
you know that the repair was successful and provides you with an opportunity to review the details of the completed task.
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two:
replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: An SFP can be replaced without shutting down a node or your Storage System.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port.
From the UI click on:
1. System
Select Port: As an example, Port 1 (The actual node, card or non-card slot, and port chosen are specific to your environment, and
will be specified in the alert message that you receive.)
You can select to watch the SFP repair video, which is the same as what you’ll see in part 2 of this 3-part procedure. You can also
select to read and print out the SFP repair instructions.
4. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
7. Click Continue.
The System Detail page appears and displays the repair progress. A yellow alert box confirms that the system is in maintenance mode,
still logging alerts, but not sending out notifications. You can click on View Service details in the lower right of the gray banner. It
displays instructions, location information and links to the video and written repair information. Notice you are instructed to service the
SFP. This is your cue to physically remove and replace the SFP.
Hardware Procedure
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
2. Notice the UID locate LED is solid blue on the adapter card with the SFP to be replaced. This confirms the location.
3. If the SFP being replaced is in an Adapter card, confirm that the fiber cables attached to the SFP are labeled. For best practices,
especially in a complex installation, label with both destinations; node, slot, and port, as well as host or switch and port.
4. Remove the SFP, with the fiber cables still attached, by gently pressing in on the SFP while pulling out the long release tab
underneath it.
5. Then remove the fiber cables by pressing down on the release tab above the cables and disconnecting them from the SFP.
CAUTION! Use care not to touch and contaminate the ends of the Fiber Channel cables.
6. Remove the dust cover from the fiber end of the replacement SFP and insert the fiber cables until they are fully seated.
CAUTION! Use care not to touch and contaminate the ends of the Fiber Channel cables.
Software Post-Procedure
Returning to the UI, click on “Complete service” to complete the repair and run CheckHealth.
When done, a green banner alert appears to let you know that the repair was successful and provides you with an opportunity to
review the details of the completed task.
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two:
replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: Power Cooling Module (PCM) repair can be done without shutting down your entire HPE Storage System.
WARNING! To avoid severe injury or possible death do not attempt replacement of a DC PCM without the assistance of a qualified
person supplied by the customer. Refer to the documentation for replacement procedures for a DC PCM.
CAUTION: Once the failed PCM is removed, you will have up to 5 minutes to install the replacement PCM to avoid overheating your
Storage System.
With that in mind, unpack the replacement Power Cooling Module (PCM) and place it onto an anti-static surface close by, ready to install.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port.
From the UI click on:
1. System
d. Select which PCM: As an example, Drive Enclosure 2 Power Supply 1 (the actual PCM chosen is specific to your environment)
4. Scroll down.
You can select to watch the PCM repair video, which is the same as what you’ll see in part 2 of this 3-part procedure. You can also
select to read and print out the PCM repair instructions.
5. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
8. Click Continue
The System Detail page appears and displays the repair progress. A yellow alert box confirms that the system is in maintenance mode,
still logging alerts, but not sending out notifications. You can click on View Service details in the lower right of the gray banner. It
displays instructions, location information and links to the video and written repair information. Notice that the repair progress bar
says, “Service the PCM now.” This is your cue to physically remove the Power Cooling Module.
Hardware Procedure
CAUTION: To avoid overheating your storage system, you have five minutes to replace the Power Cooling Module (PCM) once the failed
PCM has been removed.
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
2. Confirm that the UID Locate LED is solid blue on the PCM to be replaced. This indicates that it is safe to remove.
3. Slide back the red tabs on the side of the power cord connector to unlock and disconnect the power cord from the PCM.
NOTE: Even after the power cord has been disconnected, the enclosure may still supply enough power to light the
PCM LEDs. This is not a problem.
4. Using your thumb and forefinger, squeeze the release tab and handle to release the PCM from its enclosure bay.
5. While supporting the failed PCM from underneath, carefully slide it out.
6. While supporting the replacement PCM from underneath, oriented with the release tab at the bottom, carefully align and slide the
PCM into its enclosure bay.
7. Make sure that the PCM is fully seated, with the release tab engaged.
8. Replace the power cord and give it a small tug to confirm that it is locked into place.
Software Post-Procedure
Returning to the UI, notice the status of the UI Task bar. If prompted, click on Complete Service. The service process includes an
automatic health check. When done, a green banner alert appears to let you know that the repair was successful and provides you with
an opportunity to review the details of the completed task.
Verify that the status LED on the PCM is green.
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two:
replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: I/O Module repair can be done without shutting down your entire HPE Storage System.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering.
CAUTION: Once the failed I/O Module is removed, you will have up to 10 minutes to install the replacement I/O Module to avoid
overheating your Storage System.
With that in mind, unpack the replacement I/O Module, and place it onto an anti-static surface close by, ready to install.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port.
From the UI click on:
1. System
e. Select which I/O Module: As an example, Drive Enclosure 2, I/O Module 1 (the actual enclosure and I/O module chosen are specific
to your environment).
3. Scroll down.
a. You can select to watch the I/O module repair video, which is the same as what you’ll see in part 2 of this 3-part procedure.
b. You can also select to read and print out the I/O module repair instructions.
c. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
f. Click Continue.
The System Detail page appears and displays the repair progress, letting you know that it is waiting for the designated I/O Module to
be replaced. A yellow alert box confirms that the system is in maintenance mode, still logging alerts, but not sending out notifications.
You can click on View Service details in the lower right of the gray banner. It displays instructions, location information and links to the
video and written repair information. Notice that the task progress bar pauses and displays, “Service the I/O module now.” This is your
cue to physically remove the I/O Module.
Hardware Procedure
CAUTION: To avoid overheating your storage system, you have 10 minutes to replace the I/O Module once the failed I/O Module has
been removed from its bay.
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
CAUTION: To prevent data loss, confirm that the data cabling provides a redundant path to other enclosures and nodes.
2. Confirm that the UID Locate LED is solid blue on the I/O Module to be replaced. This indicates that it is safe to remove.
3. Confirm that all the cables attached to the I/O Module are labeled, noting their locations. If not labeled, label them with Node, Slot
and Port information.
4. Remove the data cables by pulling on each release tab and disconnecting the cable.
CAUTION! Make sure that all cables are out of the way before proceeding.
5. Fully loosen both captive thumbscrews that secure the I/O Module release levers to the enclosure bay. If too tight, use a Torx T-15
screwdriver to loosen the thumbscrews.
6. Open both release levers simultaneously to disengage the I/O Module from the enclosure backplane and partially slide the I/O
Module out of its bay.
7. While supporting the I/O Module from underneath, fully slide it out of its bay.
CAUTION! Confirm the area in front of the empty bay is clear of any cables that might get snagged or damaged during installation
of the replacement I/O Module.
8. With its release levers in their open position, carefully align and slide the replacement I/O Module into its bay until the release levers
begin to engage.
9. Close the release levers to fully seat the I/O Module into the enclosure backplane.
10. Fully tighten both thumb screws to secure the I/O Module to the enclosure.
11. Replace all the data cables that were removed earlier, making sure that each cable is in the same location and fully connected.
Software Post-Procedure
Returning to the UI, notice the status of the UI Task bar. If prompted, click on Complete Service. Be patient; it may take up to 15
minutes for the appropriate firmware to be loaded onto the replacement I/O Module. When done, a green banner alert appears to let
you know that the repair was successful and provides you with an opportunity to review the details of the completed task.
Verify that the Health LED on the I/O Module is lit solid green.
Software Pre-Procedure
The repair procedure has three parts. One: from the User Interface (UI), identify the part that needs repair and start the process. Two:
replace the part. Three: return to the UI to verify that the replacement is successful.
NOTE: Cable replacement does not require shutting down your HPE Storage System.
When a failure occurs, the Storage System generates alerts. You may get an alert email from the Storage System, and/or from HPE
InfoSight. Alerts will also appear on the UI. Alerts that indicate component failure will include corrective actions and specific part
information for ordering. For a cable failure, the Alert will include information on which cable has failed.
Have the replacement cable close by and ready to be installed.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port. Be sure to attach to a node
other than the one being serviced.
From the UI click on:
1. System
4. Scroll down.
a. You can select to watch the Cable repair video, which is the same as what you’ll see in part 2 of this 3-part procedure.
b. You can also select to read and print out the Cable repair instructions.
c. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
e. Click Continue
The System Detail page appears and displays the repair progress, letting you know that it is waiting for the designated data cable to be
replaced. A yellow alert box confirms that the system is in maintenance mode, still logging alerts, but not sending out notifications. You
can click on View Service details in the lower right of the gray banner. It displays instructions, location information and links to the video
and written repair information. Notice that the task progress bar pauses and displays, “Service the data cable now.” This is your cue to
physically remove the data cable.
Hardware Procedure
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
CAUTION: To prevent data loss, confirm that the data cabling provides a redundant path to other enclosures and nodes.
2. Confirm that the failed cable is labeled. If not labeled, label both ends with Node, Enclosure, Slot and Port information.
4. Remove both ends of the failed data cable by pulling on the release tab and disconnecting the data cable ends.
5. If new labels were not available, carefully remove the labels from the failed cable and place them in the same locations on the
replacement data cable.
6. Verify the destination node port and I/O module port to connect the replacement data cable.
7. Connect each end of the cable to the appropriate ports with each end clicked into place and fully seated.
Software Post-Procedure
Returning to the UI, notice the status of the UI Task bar. If prompted, click on Complete Service. When done, a green banner alert
appears to let you know that the repair was successful and provides you with an opportunity to review the details of the completed
task.
Verify that the green LED on the replacement cable port is lit.
Software Pre-Procedure
The upgrade procedure to add an Adapter Card requires adding two identical Adapter Cards, one into the same slot on each node of a
node pair. The process has three parts and must be followed twice, once for each Adapter Card. One: from the User Interface (UI),
identify the part that you want to add and start the process. Two: shut down the target node and add the part. Three: return to
the UI to restart the node and verify that the addition is successful. Then repeat the process with the other Adapter Card and the other
node in the node pair.
NOTE: While adding a pair of Adapter Cards requires shutting down the nodes and restarting them, one at a time, the
nodes themselves don’t need to be removed, nor does your entire HPE Storage System ever need to be shut down.
CAUTION! Shutting down a node reduces system redundancy, so it is recommended to schedule this procedure during lower activity
times.
Unpack the pair of new adapter cards onto an anti-static surface close by, ready to install, but leave them in their static dissipative
bags.
From the UI click on:
1. System
Select which slot: As an example, 4 (the actual node and slot chosen are specific to your environment)
4. Scroll down. You can select to watch the upgrade adapter card video, which is the same as what you’ll see in part 2 of this 3-part
procedure. You can also select to read and print out the upgrade adapter card instructions.
5. Read through to be reminded that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
6. Check the node and slot location for the addition of the Adapter Card.
7. Read the Warning to let you know that the node will need to be shut down for the addition of the Adapter card.
8. Notice the message, “No single path hosts found”. This confirms that no host is connected exclusively to this node. If a host were
connected only to this node, its data access would be interrupted when the node was shut down.
Hardware Procedure
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
2. Confirm that the UID Locate LED is solid blue on the Adapter Card to be replaced. This indicates that it is safe to remove.
3. Confirm that all the cables attached to the Adapter Card are labeled. As a best practice, especially in a complex installation, label
each cable with both destinations: node, slot, and port, as well as host or switch and port.
NOTE: For an Adapter Card that contains SFPs, remove each SFP with the fiber cables attached by pulling on the
long release tab underneath the SFP.
5. Fully loosen the single captive thumb screw that secures the Adapter Card to the node. If too tight, use a Torx T10 screwdriver to
loosen the thumb screw.
6. While supporting the Adapter Card from below, pull the captive thumb screw to unseat the Adapter Card from its backplane in the
node, carefully slide the Adapter Card out of its slot in the node, and place it on the anti-static surface.
7. Compare the replacement Adapter Card with the failed Adapter Card to confirm they match.
8. Carefully align and slide the replacement Adapter Card into its node slot and press firmly to fully seat it.
CAUTION! Always press on the edges of the Adapter card bezel to seat the card. Do not press on the thumbscrew, the SFPs, or the
SFP receptacles. Notice that the card bezel is slightly recessed into its slot.
NOTE: You may need to press very firmly to fully seat some adapter cards.
10. Replace all the cables, and if applicable, SFPs, that were removed earlier, making sure that each cable is in the same location and
fully connected.
CAUTION! Use care not to bend and cause damage to your optical cables.
Software Post-Procedure
NOTE: No cables should be added to the Adapter cards until both cards are installed.
Software Pre-Procedure
Adding Physical Drives to your HPE Storage System requires that you add them in pairs of the exact same type and capacity, and in
adjacent slots. If more than two Physical drives (PDs) are being added, pay attention to the guidelines for how drives are installed,
allocated, and balanced for optimal performance and reliability.
Balance the number of drives of each type across all the enclosures without breaking the rule of having an even number of the same
type of drive in each enclosure. When adding drives, they need to be spread evenly across enclosures without splitting up pairs of
drives.
Only NVMe drives can be installed, and they are installed from left to right, slots 0 through 23, never leaving an empty slot. Each node
pair must contain a minimum of 8 drives. Additional drives must be of the same type and capacity and be added in pairs. If both node
pairs are installed, the drives should be distributed as evenly as possible while keeping drive pairs together. For example, if a 4-node
array contains 16 drives, 8 for each node pair, the first two drives added to the storage system would both be added to one of the node
pairs. Adding one drive to each node pair would split the pair and is therefore an unsupported configuration. When two additional
drives are added, they would be added to the other node pair so that both node pairs would contain 10 drives each.
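The pairing and balancing rules above can be sketched as a simple validation function. This is an illustrative sketch only, not an HPE tool; the count-per-node-pair model and the within-one-pair evenness threshold are assumptions drawn from the example in the text.

```python
# Illustrative sketch only (not an HPE tool): check a planned drive addition
# against the rules described above. `existing` and `additions` hold the
# drive count per node pair, e.g. existing=[8, 8], additions=[2, 0].

MIN_DRIVES = 8  # each node pair must contain a minimum of 8 drives

def check_drive_addition(existing, additions):
    """Return a list of rule violations; an empty list means the plan is OK."""
    problems = []
    for i, added in enumerate(additions):
        # Drives must be added in pairs of the same type and capacity,
        # so the count added to any node pair must be even.
        if added % 2 != 0:
            problems.append(f"node pair {i}: drives must be added in pairs")
    totals = [e + a for e, a in zip(existing, additions)]
    for i, total in enumerate(totals):
        if total < MIN_DRIVES:
            problems.append(f"node pair {i}: minimum of {MIN_DRIVES} drives")
    # Keep the distribution as even as possible without splitting a pair;
    # a skew of one pair (2 drives) is the assumed allowable difference.
    if max(totals) - min(totals) > 2:
        problems.append("distribute drive pairs evenly across node pairs")
    return problems

print(check_drive_addition([8, 8], [2, 0]))  # [] - one pair added to one node pair
print(check_drive_addition([8, 8], [1, 1]))  # splitting a pair is unsupported
```

Running the 16-drive example from the text, adding `[2, 0]` then `[0, 2]` passes at each step, while `[1, 1]` is rejected because it splits a matched pair.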
The upgrade procedure to add drives has three parts.
One: from the User Interface (UI), identify the part that you want to add and start the process. Two: add the part. Three: return to
the UI to verify that the addition is successful. Adding drives does not require your HPE Storage System to be shut down.
CAUTION! Once the Drive Blanks are removed, you have up to 10 minutes to install the new drives before your Storage System risks
overheating.
With that in mind, unpack the drives onto an anti-static surface close by, ready to install.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port.
From the UI click on:
1. System
4. Scroll down. You can choose to watch the Physical drive addition video, which is the same as what you will see in part 2 of this
three-part procedure. You can also choose to read and print the Physical drive addition instructions.
5. Read through the reminder that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
7. Click Continue.
The System Detail page appears and displays the upgrade progress. A yellow alert box confirms that the system is in maintenance
mode, still logging alerts, but not sending out notifications. Notice that the progress has paused and is waiting for you to add the
physical drives. This is your cue to begin the physical procedure.
Hardware Procedure
1. Remove the bezel from the front of the enclosure where drives will be added, by squeezing the release on the left side and pivoting
the bezel off the front of the enclosure.
2. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
CAUTION! Once the drive blanks are removed, you have up to 10 minutes to install the new drives before your Storage System risks
overheating.
3. Remove the drive blanks from the slots where the drives will go and retain them for future use.
4. For each drive, press the release button to open the drive handle.
5. Align the drive with the release button at the top of the drive.
7. Press the drive handle to seat the drive into the midplane and lock it into place.
9. Confirm that the blinking green LEDs on all the newly installed drives turn solid, indicating their integration into the Storage System.
Once normal drive activity starts, the green status LEDs will blink again.
10. Next, toe-in the right end of the enclosure bezel, squeeze the retention latch on the other end and gently but firmly press the bezel
into place.
Software Post-Procedure
Returning to the UI, click “Complete service” to tell the storage system that all the drives have been added and the upgrade is
complete. Maintenance mode ends and checkhealth runs to confirm the healthy running of the Storage System.
Throughout this process, the System Task progress bar shows the percentage complete. When done, a green banner alert appears to let
you know that the upgrade was successful and provides you with an opportunity to review the details of the completed task.
Software Pre-Procedure
The procedure to upgrade a node pair expands a 2-node storage system into a 4-node storage system. You will add two controller
nodes, at least two matching adapter cards, two Power Cooling Battery Modules, PCBMs, and a minimum of eight drives for the new
node pair added to the 4U enclosure. The number of new drives added should continue the balanced distribution of drives throughout
all enclosures. As with the existing nodes, only NVMe drives can be added with the new node-pair. The process begins with the User
Interface (UI) to initiate the process, followed by the physical addition of the hardware in stages while using the UI to monitor and cue
tasks along the way.
Throughout the process your HPE Storage System never needs to be shut down.
CAUTION! Some of the tasks in the procedure need to be done quickly to avoid overheating your storage system.
With that in mind, unpack all of your hardware and keep it close by on an anti-static surface.
HPE and other authorized service personnel can access the UI by connecting directly to the Service Eth port.
From the UI click on:
1. System
4. Scroll down. You can choose to watch the upgrade node pair video, which is the same as what you will see in part 2 of this
three-part procedure. You can also choose to read and print the upgrade node pair instructions.
5. Read through the reminder that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
7. Click Continue.
The System Detail page appears and displays the upgrade progress. A yellow alert box confirms that the system is in maintenance
mode, still logging alerts, but not sending out notifications. You can click on View Service details in the lower right of the gray banner. It
displays instructions and links to the video and written upgrade information. Notice that the progress has paused and is waiting for
you to insert one of the new nodes. This is your cue to begin preparing the hardware.
Hardware Procedure
1. Attach your ESD wrist strap to the grounded anti-static surface on which the two unpacked controller nodes and Adapter Cards
rest.
2. Add all Adapter Cards, making sure that the two additional nodes contain identical cards in the same slots. For each Adapter Card:
Carefully align and slide the Adapter Card into its node slot and press firmly to fully seat it.
CAUTION! Always press on the edges of the Adapter card bezel to seat the card. Do not press on the thumbscrew, the SFPs, or
the SFP receptacles. Notice that the card bezel is slightly recessed into its bay.
NOTE: You may need to press very firmly to fully seat some adapter cards.
CAUTION! The following tasks need to be done in order and quickly to avoid overheating your storage system.
3. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
4. From the front of the Storage System, add a minimum of eight drives to the node enclosure in front of where the new node pair will
be installed. The number of new drives added should continue the balanced distribution of drives throughout all enclosures.
5. Partially insert both nodes, but leave their release handles in the extended open position. Each node will be fully seated one at a
time when the time comes.
6. NOTE: The node designation for the new controller nodes will continue consecutively from the original nodes, N0 at
the bottom, then N1, followed by N2 and N3 for the new nodes.
8. Label and connect Ethernet cables from the management (MGMT) ports on both nodes to the network switch.
For best practices, especially in a complex installation, label with both destinations; node, slot, and port, as well as host or switch
and port.
9. Be sure to route the Ethernet cables above the levers to ensure easy insertion of the nodes.
10. Connect power cords to both of the newly installed PCBMs for the new controller nodes.
11. Slide Node 2 into its bay until its release levers begin to engage.
12. Close the release levers simultaneously to fully seat the node into the enclosure backplane.
13. Fully tighten both thumbscrews to secure the controller node to the enclosure.
NOTE: Do not fully insert Node 3 until Node 2 is completely integrated into the storage system.
Software Post-Procedure
Return to the UI to monitor the progress. As soon as the node is inserted, it starts automatically, and node rescue is run. The process
takes approximately 10 to 20 minutes. Confirm that the green Status LEDs on the controller node flash in synchronization with the
other controller nodes, indicating that it has joined the cluster. The progress of the node rescue can be monitored through the Activities
page, selecting Tasks, and then “Node 2 Rescue Task”.
Throughout this process, the System Task progress bar shows the percentage complete. When done, a banner alert appears to let you
know that Node 2 has been found and clustered. This is your cue to add the remaining Controller Node.
1. Close the release levers simultaneously to fully seat the node into the enclosure backplane.
2. Fully tighten both thumbscrews to secure the controller node to the enclosure.
Return to the UI to monitor the progress. As soon as the additional node is inserted, it starts automatically and node rescue is run,
followed by the Admit Hardware task. This can take significant time. Be patient. Once all of the hardware has integrated into the
storage system, maintenance mode ends, and checkhealth is run to confirm that the enlarged cluster with the newly installed Node
Pair is healthy. Confirm that the green Status LEDs on the controller node flash in synchronization with the other controller nodes,
indicating that it has joined the cluster.
Throughout this process, the System Task progress bar shows the percentage complete. When done, a green banner alert appears
to let you know that the upgrade has been successful and provides you with an opportunity to review the details of the completed
task.
3. Once both nodes have been integrated into the enlarged cluster, label and connect Fibre Channel cables from your hosts to the
identical SFP ports in each Adapter Card in each node.
4. Label and connect any additional Host cabling for your configuration.
5. Add the Ethernet and Host cables to the cable organizer loom.
Alletra 2240 Drive Enclosure Addition
The upgrade procedure to add a Drive Enclosure involves installing rail shelves into the rack, the Drive Enclosure itself, drives, a front
bezel, power cabling, data cabling, and finally completing the upgrade through the UI to confirm the added drive enclosure is installed,
integrated and working.
The process begins with installing all the hardware up to the data cabling. Then use the User Interface (UI) to initiate the process,
add the cabling with the help of the Alletra 9000 Cabling Tool, and monitor the progress and act on cues through the UI.
Throughout the process your HPE Storage System never needs to be shut down.
You will need Torx T-15 and T-25 screwdrivers.
1. Attach your ESD wrist strap to an unpainted surface of the enclosure or rack.
Decide where in the rack to install your drive enclosure.
Drive enclosures for two node systems are located below the node enclosure.
Drive enclosures for four node systems are located above and below the node enclosure.
Once the location is decided upon, install your rail shelves. In our example, one 2U drive enclosure.
Each rail shelf is labeled as Left or Right, along with the designation of Front or Rear, and includes illustrations for safety screw
locations and how the safety clips work.
For each rail:
Align the rear end with the chosen starting point. Push the clip and guide pins through the rack holes until the black locking clip
snaps into place.
Expand the rail to align and connect to the front end of the rack post.
Confirm that each rail shelf is pulled tight and solidly attached.
WARNING! Verify that the rails are securely latched by attempting to push the black locking clips through the rack holes without
compressing them. If they push through, re-install the rail until you are unable to push them through.
WARNING! Before installing any hardware on the rails, verify that both ends of each rail are secured with the included safety screws
and, if applicable, hold-down brackets.
If the safety screws are not securely tightened before an enclosure is inserted, the rails may disengage, damaging the equipment or
causing personal harm.
Install a Torx T-25 safety screw to the front of each rail shelf just below the guide pin.
Install either a Torx T-25 safety screw to the rear of each rail shelf OR a hold-down bracket.
If you do not intend to transport the rack with the system installed, insert and tighten a rail safety screw into the rear rack hole of each
rail as shown. The package may contain extra screws.
The placement of the rear safety screw differs on the right and left rail shelf. Refer to the diagram on the rail shelf.
If the RETMA rails are exactly 29 inches apart, as in an HPE factory-integrated rack, and you intend to transport the rack with the
system installed, you need to install hold-down brackets instead of safety screws to secure the rear of the rails.
Install the hold-down brackets and secure to the rear of each rail shelf with two Torx T-25 captive screws.
Repeat the process until all the left and right rail shelves are installed for your drive enclosures.
With the rail shelves installed and secured, the drive enclosure is next.
From the front of the rack, lift, align, and slide in the drive enclosure all the way onto the newly installed rail shelf.
At the front tighten two captive Torx T-25 thumbscrews on each side, into the mounting holes, to secure the enclosure to the rack.
If hold-down brackets are installed at the rear, insert and tighten the two Torx T-15 screws to further secure the enclosure to the
hold-down brackets on each side of the enclosure.
Install the left and right ears on both sides of the drive enclosure.
If not already installed, you are ready to install your drives.
For more detail on how drives are allocated and balanced, which is critical to the performance and reliability of your system, refer to
your documentation.
CAUTION! Treat your drives with care. Avoid shaking or dropping them.
4. Scroll down.
a. You can choose to watch the Drive enclosure addition video, which is the same as what you see here.
b. You can also choose to read and print the drive enclosure addition instructions.
c. Read through the reminder that the array will be put into ‘Maintenance Mode’ to reduce the priority of alerts that occur during
maintenance.
e. Click Continue.
The System Detail page appears and displays the upgrade progress. A yellow alert box confirms that the system is in maintenance
mode, still logging alerts, but not sending out notifications. You can click on View Service details in the lower right of the gray banner. It
displays instructions, location information and links to the video and written upgrade information. Notice the progress is paused and is
waiting for you to add drive enclosures. This is your cue to physically cable the new drive enclosure to your nodes.
Cabling starts with running the HPE Alletra 9000 Cabling Tool, a browser-based online tool from HPE.
From a browser connect to the online HPE Alletra 9000 Cabling Tool. Refer to your documentation for the link to the cabling tool.
1. Click on “I’m adding drive enclosures”.
2. Click “Next”
3. You are reminded that you should have updated labels to attach to your cables.
4. Click “Next”
5. Select to download your configuration information from HPE InfoSight or upload the config file from your array. Refer to your
documentation for this process. In our example we will select Upload and click “Next”.
6. Drag and drop your config file onto the target box and then when accepted, click “Next”
7. Your current configuration will display. You may need to scroll down to see the bottom of the graphic display. Click “Next”.
9. You will be reminded to ground yourself when physically adding drive enclosures. Click “Next”.
10. You will then be reminded to make sure that you have the appropriate power and cooling for the expansion. Click “Next”.
11. Now, you will have a chance to drag and drop drive enclosures to adjust their placement. You can even add an additional rack if
needed. In our example, we will stick with the default, just below the currently installed node enclosure. Click “Next”.
12. The cabling tool walks you through the labeling of each cable. Since only direct-connect cabling is supported at this time, this is
relatively straightforward. In our example, the red path cable will go from node 0, slot 0, port DP-1 to enclosure 2, slot 0, port DP-1.
13. Click “Next” to see the next cable label. One green path cable will go from node 1, slot 0, port DP-1 to enclosure 2, slot 1, port DP-1.
Apply labels to your cables. Each cable end should include a label for its immediate connection next to the connector and a label
indicating where the other end connects. This allows you to see both connection destinations at a glance.
Before installing any cables, connect your anti-static wrist strap to an unpainted part of the rack or enclosure.
Following the information on the labels connect all of your data cables. For our example:
1. Insert the red path data cable in node 0, slot 0, port DP-1. It should click into place when fully seated. Pull gently to confirm that it is
fully connected.
2. Connect the other end of the red path data cable into enclosure 2, slot 0, port DP-1. Again, make sure that it clicks into place and is
fully connected.
3. Follow the same process to connect the green path cables according to the labels on the cables, node 1, slot 0, port DP-1 to
enclosure 2, slot 1, port DP-1.
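The label pattern in this direct-connect example can be generated programmatically. This is an illustrative sketch only, not the HPE Cabling Tool; the rule that the enclosure I/O module slot number matches the node number is an assumption taken from the example above, so confirm the actual labels with the Cabling Tool.

```python
# Illustrative sketch only (not the HPE Cabling Tool). Generate the pair of
# labels for one direct-connect data path. Assumption from the example above:
# the enclosure slot number matches the node number (node 0 -> slot 0, etc.).

def cable_labels(node, enclosure, color, port="DP-1"):
    """Return (near-end label, far-end label) for one data path."""
    near = f"node {node}, slot 0, port {port}"
    far = f"enclosure {enclosure}, slot {node}, port {port}"
    # Each end carries its own connection plus the other end's destination.
    return (f"{color} path: {near} -> {far}", f"{color} path: {far} -> {near}")

for node, color in ((0, "red"), (1, "green")):
    near_label, far_label = cable_labels(node, enclosure=2, color=color)
    print(near_label)
    print(far_label)
```

The loop reproduces the red and green paths from the example; for other configurations, change the node, enclosure, and port values to match the labels the Cabling Tool gives you.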
Returning to the UI, notice the status of the UI Task bar. Throughout this process of admitting the new hardware into the system, the
System Task progress bar shows the percentage complete. The process takes between 5 and 15 minutes. If errors are encountered, you
will be prompted to fix them and then continue. Maintenance mode ends and CheckHealth is run to confirm the healthy running of the
storage system. When done, a green banner alert appears to let you know that the upgrade was successful and provides you with an
opportunity to review the details of the completed task.