
IBM Storage Scale 5.1.9 Protocols Quick Overview November 03, 2023.

Before starting
Always start here to understand:
• Common prerequisites
• Basic Install Toolkit operation
• Requirements when an existing cluster exists, both with or without an ESS

Cluster installation
Start here if you would like to:
• Create a new cluster from scratch
• Add and install new GPFS nodes to an existing cluster (client, NSD, GUI)
• Create new NSDs on an existing cluster

Protocol and file system deployment
Start here if you already have a cluster and would like to:
• Add or enable protocols on existing cluster nodes
• Create a file system on existing NSDs
• Configure and enable file audit logging or watch folders

Configuration
Start here if you already have a cluster with protocols enabled and would like to:
• Check cluster state and health, basic logging or debugging
• Configure a basic SMB or NFS export

Upgrade and cluster additions
Always start here to understand:
• Upgrade guidance
• How to add nodes, NSDs, FSs, and protocols to an existing cluster

Important note: The CES Swift Object feature is not supported from IBM Storage Scale 5.1.9 onwards. For more information, see Stabilized, deprecated, and discontinued features in IBM Storage Scale.

Before starting

1. How does the Install Toolkit work?
IBM Storage Scale Install Toolkit operation can be summarized in 4 phases:
I. User input via spectrumscale commands
II. A spectrumscale install phase
III. A spectrumscale deploy phase
IV. A spectrumscale upgrade phase
Each phase can be run again at later points in time to introduce new nodes, protocols, NSDs, file systems, or updates.
All user input via spectrumscale commands is recorded into a cluster definition file in: /usr/lpp/mmfs/5.1.9.0/ansible-toolkit/ansible/vars
Each phase acts upon all nodes inputted into the cluster definition file. For example, if you only want to deploy protocols in a cluster containing a mix of unsupported and supported operating systems, input only the supported protocol nodes and leave all the other nodes out of the cluster definition.
Note: To deploy a cluster in the cloud (AWS or GCP), use the cloudkit command, which is available at /usr/lpp/mmfs/your_scale_version/cloudkit. This quick guide does not cover cloudkit usage. For more information, see the cloudkit command.

2. Hardware or performance sizing
Work with your IBM account team or Business Partner for suggestions on the best configuration possible to fit your environment. Also, see the IBM Storage Scale FAQ for related information.

3. OS levels and CPU architecture
The Install Toolkit supports these operating systems:
- x86: RHEL 7.x / 8.x / 9.x, SLES 15, Ubuntu 22.04
- ppc64 LE: RHEL 7.x / 8.x / 9.x
- s390x: RHEL 7.x / 8.x / 9.x, SLES 15, Ubuntu 22.04
All cluster nodes that the Install Toolkit acts upon must be part of the same CPU architecture and endianness.
All protocol nodes must have the same OS and architecture.
Refer to the IBM Storage Scale FAQ.

4. Repositories
A base repository must be set up on every node. For RHEL 8 and 9, also set up the AppStream repository.
- RHEL check: yum repolist, dnf repolist
- SLES check: zypper repos
- Ubuntu check: apt edit-sources

5. Firewall and networking, and SSH
Make sure that:
- All nodes are networked together and pingable via IP, FQDN, and hostname.
- Reverse DNS lookups are in place.
- If /etc/hosts is used for name resolution, ordering within is: IP FQDN hostname.
- Promptless SSH is set up between all nodes and themselves by using IP, FQDN, and hostname.
Firewalls should be turned off on all nodes. Else, specific ports must be opened both internally (for GPFS and the installer) and externally (for the protocols). See IBM Documentation for more details before proceeding.

6. Time synchronization among nodes is required
A consistent time must be established on all nodes of the cluster (a sketch using chrony follows this section).

7. Cleanup prior SMB and NFS
Prior implementations of SMB and NFS must be completely removed before proceeding with a new protocol deployment. Refer to the cleanup guidance in IBM Documentation.

8. If a GPFS cluster pre-exists
Proceed to the "Protocol deployment" section if:
a. File systems are created and mounted ahead of time, and nfs4 ACLs are in place
b. SSH promptless access among all nodes is set
c. Firewall ports are open
d. CCR is enabled
e. You configured mmchconfig release=LATEST
f. The installed GPFS rpms match the exact build dates of those included within the protocols package

9. If an ESS is part of the cluster
Proceed to the "Cluster installation" section to use the Install Toolkit to install GPFS and add new nodes to the existing ESS cluster. Proceed to the "Protocol deployment" section to deploy protocols.
a. CCR must be enabled
b. EMS nodes must be in the ems nodeclass. I/O nodes must be in their own nodeclass: gss or gss_ppc64
c. GPFS on the ESS nodes must be at minimum 5.0.5.x
d. All quorum and quorum-manager nodes are recommended to be at the latest levels possible
e. A CES shared root file system has been created and mounted on the EMS

10. Protocols in a stretch cluster
Refer to the stretch cluster use case within the IBM Documentation.

11. Extract IBM Storage Scale package
There is no protocols-specific package. Any standard, advanced, data access, or data management package is now sufficient for protocol deployment. Extracting the package presents a license agreement.
./Spectrum_Scale_Data_Management-5.1.9.x-<arch>-Linux-install

12. Explore the spectrumscale help
From /usr/lpp/mmfs/5.1.9.x/ansible-toolkit, use the -h flag:
./spectrumscale -h
./spectrumscale setup -h
./spectrumscale node add -h
./spectrumscale config -h
./spectrumscale config protocols -h

13. FAQ and quick reference
See the IBM Storage Scale Quick Reference.
Refer to the IBM Storage Scale FAQ page.
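For item 6, a minimal sketch of establishing consistent time with chrony, assuming RHEL-family nodes; your site may instead use ntpd or an existing time appliance, and package names may differ:
# yum install -y chrony
# systemctl enable --now chronyd
# chronyc sources   <- confirm at least one reachable time source
# chronyc tracking   <- a small system time offset indicates the node is synchronized
Repeat on all nodes of the cluster.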
Cluster installation

1. Setup the node that will start the installation
Pick an IP that exists on this node and is accessible to or from all nodes via promptless SSH:
./spectrumscale setup -s IP
- ESS: If the spectrumscale command is being run on nodes in a cluster with an ESS, make sure to switch to ESS mode (see page 2 for ESS examples): ./spectrumscale setup -s IP -st ess
- ECE: If the spectrumscale command is being run on an Erasure Code Edition cluster, make sure to switch to ECE mode: ./spectrumscale setup -s IP -st ece
Ansible is a prerequisite for using the spectrumscale command.

2. Populate the cluster
If a cluster pre-exists, the Install Toolkit can automatically traverse the existing cluster and populate its clusterdefinition.txt file with the current cluster's configuration details.
Point it at a node within the cluster with promptless SSH access to all other cluster nodes:
./spectrumscale config populate -N hostname
If you are in ESS mode, point config populate to the EMS:
./spectrumscale config populate -N ems1
Note: Consult the limitations of the config populate command.
Note: To remotely mount a file system using the Install Toolkit, use remote_mount. This quick guide does not cover the remote_mount command. For more information, see the spectrumscale command.

3. Add NSD server nodes (non-ESS nodes)
Adding NSD nodes is necessary if you would like the Install Toolkit to configure new NSDs and file systems.
./spectrumscale node add hostname -n
./spectrumscale node add hostname -n

4. Add NSDs (non-ESS devices)
NSDs can be added as non-shared disks seen by a primary NSD server. NSDs can also be added as shared disks seen by a primary and multiple secondary NSD servers.
In this example, we add 4 /dev/dm disks seen by both primary and secondary NSD servers:
./spectrumscale nsd add -p primary_nsdnode_hostname -s secondary_nsdnode_hostname /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4

5. Define file systems (non-ESS)
File systems are defined by assigning a file system name to one or more NSDs. File systems are defined but not created until this install is followed by a deployment.
In this example, we assign all 4 NSDs to the fs1 file system:
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale nsd modify nsd1 -fs fs1
./spectrumscale nsd modify nsd2 -fs fs1
./spectrumscale nsd modify nsd3 -fs fs1
./spectrumscale nsd modify nsd4 -fs fs1
If desired, multiple file systems can be assigned at this point. See IBM Documentation for details on "spectrumscale nsd modify". We recommend a separate file system for shared root to be used with protocols (a sketch follows this section).

6. Add GPFS client nodes
./spectrumscale node add hostname
The installer assigns quorum and manager nodes by default. Refer to IBM Documentation if a specific configuration is desired.

7. Add Storage Scale GUI nodes
./spectrumscale node add hostname -g -a
The management GUI automatically starts after installation and allows for further cluster configuration and monitoring.

8. Configure performance monitoring
Configure performance monitoring consistently across nodes.
./spectrumscale config perfmon -r on

9. Configure call home
Call home is enabled by default within the Install Toolkit. Refer to the call home settings and configure mandatory options for call home:
./spectrumscale callhome config -h
Alternatively, disable the call home:
./spectrumscale callhome disable

10. Name your cluster
./spectrumscale config gpfs -c my_cluster_name

11. Review your configuration
./spectrumscale node list
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale config gpfs --list
./spectrumscale install --precheck

12. Start the installation
./spectrumscale install
After completion, you have an active GPFS cluster with available NSDs, file systems, performance monitoring, time synchronization, call home, and a GUI. File systems are fully created and protocols installed in the next stage, deployment.
Install can be re-run in the future to:
- Add GUI nodes
- Add NSD server nodes
- Add GPFS client nodes
- Add NSDs
- Add file systems
- Enable and configure or update callhome settings
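A sketch of defining the separate shared-root file system recommended in item 5. The hostnames and the /dev/dm-5 device are placeholders, not values from this card, and the usage/failure-group options simply mirror the example on page 2:
./spectrumscale nsd add -p primary_nsdnode_hostname -s secondary_nsdnode_hostname -u dataAndMetadata -fs cesSharedRoot -fg 1 /dev/dm-5
./spectrumscale filesystem list   <- confirm cesSharedRoot is defined separately from fs1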
Protocol and file system deployment

1. Setup the node that will start the installation
Setup is necessary unless spectrumscale setup was previously run on this node for a past GPFS installation or protocol deployment.
Pick an IP that exists on this node and is accessible to or from all nodes via promptless SSH:
./spectrumscale setup -s IP
- ESS: If the spectrumscale command is being run on nodes in a cluster with an ESS, make sure to switch to ESS mode (see page 2 for ESS examples): ./spectrumscale setup -s IP -st ess
- ECE: If the spectrumscale command is being run on an Erasure Code Edition cluster, make sure to switch to ECE mode: ./spectrumscale setup -s IP -st ece
Ansible is a prerequisite for using the spectrumscale command.

2. Populate the cluster
Optionally, the Install Toolkit can automatically traverse the existing cluster and populate its clusterdefinition.txt file with current cluster details. Point it at a node within the cluster with promptless SSH access to all other cluster nodes:
./spectrumscale config populate -N hostname
If you are in ESS mode, point config populate to the EMS:
./spectrumscale config populate -N ems1
Note: Consult the limitations of the config populate command.

3. Add protocol nodes
./spectrumscale node add hostname -p
./spectrumscale node add hostname -p

4. Assign protocol IPs (CES-IPs)
Add a comma-separated list of IPs to be used specifically for cluster export services such as NFS and SMB. Reverse DNS lookup must be in place for all IPs. CES IPs must be unique and different from cluster node IPs.
./spectrumscale config protocols -e EXPORT_IP_POOL
Note: All protocol nodes must see the same CES-IP networks. If CES-groups are to be used, apply them after the deployment is successful (a sketch follows this section).

5. Verify file system mount points are as expected
./spectrumscale filesystem list
Note: Skip this step if you set up file systems or NSDs manually and not through the Install Toolkit.

6. Configure protocols to point to a shared root file system location
A CES directory gets automatically created at the root of the specified file system mount point. This is used for protocol admin/config and needs >= 4 GB free. Upon completion of protocol deployment, GPFS configuration points to this as cesSharedRoot. It is recommended that cesSharedRoot be a separate file system.
./spectrumscale config protocols -f fs1 -m /ibm/fs1
Note: If you set up file systems or NSDs manually, perform a manual check of mmlsnsd and mmlsfs all -L to make sure all NSDs and file systems required by the deployment are active and mounted before continuing.

7. Enable the desired file protocol
./spectrumscale enable nfs
./spectrumscale enable smb

8. Configure call home
Call home is enabled by default within the Install Toolkit. Refer to the call home settings and configure mandatory options for call home:
./spectrumscale callhome config -h
Alternatively, disable call home:
./spectrumscale callhome disable

9. Review your configuration
./spectrumscale node list
./spectrumscale deploy --precheck

10. Start the deployment
./spectrumscale deploy
Upon completion, you have protocol nodes with active cluster export services and IPs. Performance monitoring tools are also ready for use at this time.
Deploy can be re-run in the future to:
- Enable additional protocols
- Add additional protocol nodes (run install first to add more nodes)
- Enable and configure or update call home settings
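If CES-groups are needed after a successful deployment (item 4), one possible form is sketched below. The group name "dmz" and the IP are illustrative only, and the exact options should be confirmed against the mmces command reference for your level:
mmces address change --ces-ip 172.31.1.10 --ces-group dmz
mmces address list   <- verify the IP and its group assignment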
Configuration

1. Path to binaries
Add the following PATH variable to your shell profile to allow convenient access of GPFS mm commands:
export PATH=$PATH:/usr/lpp/mmfs/bin

2. Basic GPFS health
mmgetstate -aL
mmlscluster
mmlscluster --ces
mmnetverify

3. CES service and IP check
mmces address list
mmces service list -a
mmhealth cluster show
mmhealth node show -N all -v
mmhealth node show <component> -v
mmces events list -a

4. Authentication
mmuserauth service list
mmuserauth service check
(An authentication setup sketch follows this section.)

5. Call home
mmcallhome info list
mmcallhome group list
mmcallhome status list

6. File protocols (NFS and SMB)
Make sure that all file systems to be used with protocols have nfs4 ACLs and locking in effect. Protocols do not work correctly without this setting in place.
Check with:
mmlsfs all -D -k

Example NFS export creation:
mkdir /ibm/fs1/nfs_export1
mmnfs export add /ibm/fs1/nfs_export1 -c "*(Access_Type=RW,Squash=no_root_squash,SecType=sys,Protocols=3:4)"
mmnfs export list

Example SMB export creation:
mkdir /ibm/fs1/smb_export1
chown "DOMAIN\USER" /ibm/fs1/smb_export1
mmsmb export add smb_export1 /ibm/fs1/smb_export1 --option "browseable=yes"
mmsmb export list

7. Performance monitoring
systemctl status pmsensors
systemctl status pmcollector
mmperfmon config show
mmperfmon query -h

8. File audit logging
File audit logging (FAL) is available only with Data Management and Advanced editions of IBM Storage Scale.
a. Enable and configure using the Install Toolkit as follows:
./spectrumscale fileauditlogging enable
./spectrumscale filesystem modify --fileauditloggingenable gpfs1
./spectrumscale fileauditlogging list
./spectrumscale filesystem modify --logfileset <LOGFILESET> retention <days> gpfs1
b. Install the file audit logging rpms on all nodes:
./spectrumscale install --precheck
./spectrumscale install
c. Deploy the file audit logging configuration. gpfs.adv.* or gpfs.dm.* rpms must be installed on all nodes:
./spectrumscale deploy --precheck
./spectrumscale deploy
d. Check the status:
mmhealth node show FILEAUDITLOG -v
mmhealth node show MSGQUEUE -v
mmaudit all list
mmaudit all consumeStatus -N <node list>
mmwatch all list

9. Logging and debugging
Installation or deployment:
/usr/lpp/mmfs/5.1.9.x/ansible-toolkit/logs
Verbose logging for all spectrumscale commands by adding a '-v' immediately after ./spectrumscale:
/usr/lpp/mmfs/5.1.9.x/ansible-toolkit/spectrumscale -v <cmd>
GPFS default log location: /var/adm/ras/
Enabling Linux system log or journal is recommended.

10. Data Capture for Support
A consistent time must be established on all nodes of the cluster.
System-wide data capture:
/usr/lpp/mmfs/bin/gpfs.snap
Installation/Deploy/Upgrade specific:
/usr/lpp/mmfs/5.1.9.x/ansible-toolkit/installer.snap.py
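Referring back to item 4 (Authentication), a hedged sketch of configuring file authentication with mmuserauth. The AD server, administrative user, NetBIOS name, and idmap role below are placeholders; take the full option set for your environment from the mmuserauth command reference:
mmuserauth service create --data-access-method file --type ad --servers myad.example.com --user-name administrator --netbios-name scalecluster --idmap-role master
mmuserauth service list   <- confirm the resulting authentication configuration
mmuserauth service check   <- validate connectivity to the authentication server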
Upgrade and cluster additions

1. Upgrading 5.0.x.x to 5.1.9.x
a. Extract the 5.1.9.x IBM Storage Scale package:
./Spectrum_Scale_Data_Management-5.1.9.x-<arch>-Linux-install
b. Setup and configure the Install Toolkit:
./spectrumscale setup -s <IP of installer node>
./spectrumscale config populate -N <any cluster node>
If config populate is incompatible with your cluster configuration, you must manually add the nodes and configuration to the Install Toolkit.
If desired, enable the prompt to users to shut down their workloads before starting the upgrade: ./spectrumscale upgrade config workload -p on
./spectrumscale node list
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale config gpfs
./spectrumscale config protocols
./spectrumscale upgrade precheck
./spectrumscale upgrade run

2. Upgrading 5.1.9.x to future PTFs
Follow the same procedure as indicated in the previous item.

3. Upgrade compatibility with LTFS-EE
a. ltfsee stop (on all LTFSEE nodes)
b. umount /ltfs (on all LTFSEE nodes)
c. dsmmigfs disablefailover (on all LTFSEE nodes)
d. dsmmigfs stop (on all LTFSEE nodes)
e. systemctl stop hsm.service (on all LTFSEE nodes)
f. Upgrade using the Install Toolkit
g. Upgrade LTFS-EE if desired
h. Reverse steps e through a and restart/enable

4. Upgrade compatibility with TCT
a. Stop TCT on all nodes prior to the upgrade: mmcloudgateway service stop -N Node | Nodeclass
b. Upgrade using the Install Toolkit
c. Upgrade the TCT rpms manually, then restart the TCT

5. Offline upgrade using the Install Toolkit
The Install Toolkit supports offline upgrade of all nodes in the cluster or a subset of nodes in the cluster. This is useful for 4.2.3.x - 5.1.x.x upgrades. It is also useful when nodes are unhealthy and cannot be brought into a healthy or active state for upgrade. Parallel offline upgrade of all nodes in the cluster is also supported.
a. Check the upgrade configuration:
./spectrumscale upgrade config list
b. Add nodes that are already shut down:
./spectrumscale upgrade config offline -N <node1,node2>
./spectrumscale upgrade config list
c. Start the upgrade:
./spectrumscale upgrade precheck
./spectrumscale upgrade run

6. Upgrading subsets of nodes (excluding nodes)
The Install Toolkit supports excluding groups of nodes from the upgrade. This allows for staging cluster upgrades across multiple windows, for example, upgrading only NSD nodes and then, at a later time, upgrading only protocol nodes. This is also useful if specific nodes are down and unreachable. See IBM Documentation to learn about limitations.
a. Check the upgrade configuration:
./spectrumscale upgrade config list
b. Add nodes that are NOT to be upgraded:
./spectrumscale upgrade config exclude -N <node1,node2>
./spectrumscale upgrade config list
c. Start the upgrade:
./spectrumscale upgrade precheck
./spectrumscale upgrade run
d. Prepare to upgrade the previously excluded nodes:
./spectrumscale upgrade config list
./spectrumscale upgrade config exclude --clear
./spectrumscale upgrade config exclude -N <already_upgraded_nodes>
e. Start the upgrade:
./spectrumscale upgrade precheck
./spectrumscale upgrade run

7. Resume of a failed upgrade
If an Install Toolkit upgrade fails, it is possible to correct the failure and resume the upgrade without needing to recover all nodes or services. Resume with: ./spectrumscale upgrade run

8. Handling Linux kernel updates
The GPFS portability layer must be rebuilt on every node that undergoes a Linux kernel update.
Apply the kernel, reboot, and rebuild the GPFS portability layer on each node with this command before starting GPFS: /usr/lpp/mmfs/bin/mmbuildgpl
Or: mmchconfig autoBuildGPL=yes and mmstartup
(A sketch of this flow follows this section.)

9. Adding to the installation
The following procedures can be combined to reduce the number of installations and deployments necessary.

To add a node:
a. Choose one or more node types to add:
Client node: ./spectrumscale node add hostname
NSD node: ./spectrumscale node add hostname -n
Protocol node: ./spectrumscale node add hostname -p
GUI node: ./spectrumscale node add hostname -g -a
... Repeat for as many nodes as you'd like to add.
b. Install GPFS on the new nodes:
./spectrumscale install -pr
./spectrumscale install
c. If a protocol node is being added, also run deployment:
./spectrumscale deploy -pr
./spectrumscale deploy

To add an NSD:
a. Verify that the NSD server that connects this new disk exists within the cluster.
b. Add the NSD(s) to the Install Toolkit:
./spectrumscale nsd add -h
... Repeat for as many NSDs as you'd like to add.
c. Run an install:
./spectrumscale install -pr
./spectrumscale install

To add a file system:
a. Verify free NSDs exist and are known to the Install Toolkit.
b. Define the file system:
./spectrumscale nsd list
./spectrumscale nsd modify nsdX -fs file_system_name
c. Deploy the new file system:
./spectrumscale deploy -pr
./spectrumscale deploy

To enable another protocol:
See the "Protocol and file system deployment" section. Proceed with steps 7, 8, 9, 10. Note that some protocols require removal of the Authentication configuration prior to enablement.
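A sketch of the kernel-update flow described in item 8, assuming a RHEL-family node; the package manager and patching process at your site may differ:
# dnf update kernel kernel-devel   <- or apply the kernel through your normal patching process
# reboot
# /usr/lpp/mmfs/bin/mmbuildgpl   <- rebuild the portability layer against the new kernel
# mmstartup   <- start GPFS on this node once the rebuild succeeds
Alternatively, set mmchconfig autoBuildGPL=yes once so the portability layer is rebuilt automatically at GPFS startup.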

** URLs are subject to change **


Examples November 03, 2023.

Example of readying Red Hat Linux nodes for Storage Scale installation and deployment of protocols

Configure promptless SSH (promptless SSH is required):
# ssh-keygen
# ssh-copy-id <FQDN of node>
# ssh-copy-id <IP of node>
# ssh-copy-id <non-FQDN hostname of node>
Repeat on all nodes to all nodes, including the current node (a verification sketch follows this example).

Turn off firewalls (alternative is to open ports specific to each Storage Scale functionality):
# systemctl stop firewalld
# systemctl disable firewalld
Repeat on all nodes.

How to check if a yum repository is configured correctly:
# yum repolist   <- Should return no errors. It must also show an RHEL7.x base repository.
Other repository possibilities include a satellite site, a custom yum repository, an RHELx.x DVD .iso, or an RHELx.x physical DVD.

Use the included local-repo tool to spin up a repository for a base OS DVD (this tool works on RHEL, Ubuntu, SLES):
# cd /usr/lpp/mmfs/5.1.9.x/tools/repo
# cat readme_local-repo | more
# ./local-repo --mount default --iso /root/RHEL7.9.iso

What if I don't want to use the Install Toolkit? How do I get a repository for all the IBM Storage Scale rpms?
# cd /usr/lpp/mmfs/5.1.9.x/tools/repo
# ./local-repo --repo
# yum repolist

Preinstall prerequisite rpms to make installation and deployment easier:
# yum install kernel-devel cpp gcc gcc-c++ glibc sssd ypbind openldap-clients krb5-workstation

Turn off SELinux (or set to permissive mode):
# sestatus
# vi /etc/selinux/config
Change SELINUX=xxxxxx to SELINUX=disabled. Save and reboot. Repeat on all nodes.

Setup a default path to IBM Storage Scale commands (not required):
# vi /root/.bash_profile
——add this line——
export PATH=$PATH:/usr/lpp/mmfs/bin
——save/exit——
Log out and back in for these changes to take effect.
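A quick, hedged way to confirm that the promptless SSH setup above actually works by IP, FQDN, and short hostname (node names are placeholders):
# ssh -o BatchMode=yes <IP of node> hostname
# ssh -o BatchMode=yes <FQDN of node> hostname
# ssh -o BatchMode=yes <non-FQDN hostname of node> hostname
Each command should print the remote hostname without prompting for a password. Repeat from every node to every node, including itself.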


Example of a new IBM Storage Scale cluster installation followed by a protocol deployment

Install Toolkit commands for installation
• Toolkit is running from cluster-node1 with an internal cluster network IP of 10.11.10.11, which all nodes can reach:
cd /usr/lpp/mmfs/5.1.9.x/ansible-toolkit/
./spectrumscale setup -s 10.11.10.11
./spectrumscale node add cluster-node1 -a -g
./spectrumscale node add cluster-node2 -a -g
./spectrumscale node add cluster-node3
./spectrumscale node add cluster-node4
./spectrumscale node add cluster-node5 -n
./spectrumscale node add cluster-node6 -n
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs cesSharedRoot -fg 1 "/dev/sdb"
./spectrumscale nsd add -p node6.tuc.stglabs.ibm.com -s node5.tuc.stglabs.ibm.com -u dataAndMetadata -fs cesSharedRoot -fg 2 "/dev/sdc"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 1 "/dev/sdh"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 1 "/dev/sdi"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 2 "/dev/sdj"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 2 "/dev/sdk"
./spectrumscale config perfmon -r on
./spectrumscale callhome enable   <- If you prefer not to enable callhome, change the enable to a disable.
./spectrumscale callhome config -n COMPANY_NAME -i COMPANY_ID -cn MY_COUNTRY_CODE -e MY_EMAIL_ADDRESS
./spectrumscale config gpfs -c mycluster
./spectrumscale node list
./spectrumscale install --precheck
./spectrumscale install

Install outcome: a 6 node Storage Scale cluster with active NSDs
• 2 GUI nodes
• 2 NSD nodes
• 2 client nodes
• 10 NSDs
• Configured performance monitoring
• Callhome configured
• 3 file systems created, each with 2 failure groups

Install Toolkit commands for protocol deployment (assumes cluster created from the previous configuration)
• Toolkit is running from the same node that performed the install above, cluster-node1.
./spectrumscale node add cluster-node3 -p
./spectrumscale node add cluster-node4 -p
./spectrumscale config protocols -e 172.31.1.10,172.31.1.11,172.31.1.12,172.31.1.13,172.31.1.14
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
./spectrumscale enable nfs
./spectrumscale enable smb
./spectrumscale node list
./spectrumscale deploy --precheck
./spectrumscale deploy

Deploy outcome
• 2 Protocol nodes
• Active SMB and NFS file protocols

Next steps
• Configure authentication with mmuserauth
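One possible quick check after the deployment above completes; these are the standard status commands from the Configuration section, shown here as a hedged sketch rather than required steps:
mmgetstate -a   <- all nodes should be active
mmces service list -a   <- SMB and NFS running on both protocol nodes
mmces address list   <- the 5 CES-IPs distributed across cluster-node3 and cluster-node4
mmhealth cluster show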
Example of adding protocol nodes to a cluster containing an ESS

Starting point
• If you have a 5148-22L protocol node, stop following these directions and refer to the ESS 5.3.7 (or higher) Quick Deployment Guide.
• The cluster containing ESS is active and online.
• RHEL 7.x/8.x/9.x, SLES 15, or Ubuntu 20.04 is installed on all nodes that are going to serve as protocol nodes.
• An RHEL 7.x/8.x/9.x, SLES 15, or Ubuntu 20.04 base repository is set up on the nodes that are going to serve as protocol nodes.
• The nodes that will serve as protocol nodes have connectivity to the GPFS cluster network.

Getting started
1. Create a cesSharedRoot from the EMS: gssgenvdisks --create-vdisk --create-nsds --create-filesystem --contact-node gssio1-hs --crcesfs
2. Mount the CES shared root file system on the EMS node and set it to automount. When done with this full procedure, make sure the protocol nodes are set to automount the CES shared root file system as well.
3. Use the ESS GUI or CLI to create additional file systems for protocols if desired. Configure each file system for nfsv4 ACLs.
4. Pick a protocol node to run the Install Toolkit from.
5. Locate the Install Toolkit, which is contained within these packages: Storage Scale Standard, Data Access, Advanced, or Data Management Edition.
6. Download and extract one of the Storage Scale packages to the protocol node that will run the Install Toolkit. Once extracted, the Install Toolkit is located in the /usr/lpp/mmfs/5.1.9.x/ansible-toolkit directory.

Inputting the configuration into the Install Toolkit with the commands detailed below involves pointing the Install Toolkit to the EMS node, telling the Install Toolkit about the mount points and paths to the CES shared root, and designating the protocol nodes and protocol config to be installed/deployed. Refer to the IBM Storage Scale FAQ.

Install Toolkit commands
./spectrumscale setup -s 10.11.10.11 -st ess   <- internal GPFS network IP on the current Installer node that can see all protocol nodes
./spectrumscale config populate -N ems-node   <- OPTIONAL. Have the Install Toolkit traverse the existing cluster and auto-populate its config.
./spectrumscale node list   <- OPTIONAL. Check the node configuration discovered by config populate.
./spectrumscale node add ems-node -a -e   <- designate the EMS node for the Install Toolkit to use for coordination of the install/deploy
./spectrumscale node add cluster-node1 -p
./spectrumscale node add cluster-node2 -p
./spectrumscale node add cluster-node3 -p
./spectrumscale node add cluster-node4 -p
./spectrumscale config protocols -e 172.31.1.10,172.31.1.11,172.31.1.12,172.31.1.13,172.31.1.14
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
./spectrumscale enable nfs
./spectrumscale enable smb
./spectrumscale node list   <- It is normal for ESS I/O nodes to not be listed in the Install Toolkit. Do not add them.
./spectrumscale install --precheck
./spectrumscale install   <- The install will install GPFS on the new protocol nodes and add them to the existing ESS cluster
./spectrumscale deploy --precheck   <- It is important to make sure CES shared root is mounted on all protocol nodes before continuing (see the sketch after this example)
./spectrumscale deploy   <- The deploy will install and configure protocols on the new protocol nodes

Install outcome
• EMS node used as an admin node by the Install Toolkit, to coordinate the installation
• 4 new nodes installed with GPFS and added to the existing ESS cluster
• Performance sensors automatically installed on the 4 new nodes and pointed back to the existing collector / GUI on the EMS node
• ESS I/O nodes and NSDs/vdisks left untouched by the Install Toolkit

Deploy outcome
• CES Protocol stack added to 4 nodes, now designated as Protocol nodes with server licenses
• 4 CES-IPs distributed among the protocol nodes
• Protocol configuration and state data will use the cesSharedRoot file system, which was pre-created on the ESS
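A hedged sketch of the CES shared root check called out before ./spectrumscale deploy above; the file system and node names follow this example's naming:
mmlsmount cesSharedRoot -L   <- confirm cesSharedRoot is mounted on cluster-node1 through cluster-node4
mmmount cesSharedRoot -N cluster-node1,cluster-node2,cluster-node3,cluster-node4   <- mount it on any protocol node where it is missing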

Example of adding protocols to an existing cluster

Prerequisites configuration
• Decide on a file system to use for cesSharedRoot (>=4GB). Preferably, a standalone file system solely for this purpose.
• Take note of the file system name and mount point. Verify the file system is mounted on all protocol nodes.
• Decide which nodes will be the Protocol nodes.
• Set aside CES-IPs that are unused in the current cluster and network. Do not attempt to assign the CES-IPs to any adapters.
• Verify each Protocol node has a pre-established network route and IP not only on the GPFS cluster network, but on the same network the CES-IP will belong to. When Protocols are deployed, the CES-IPs will be aliased to the active network device matching their subnet. The CES-IPs must be free to move among nodes during failover cases. (A route-check sketch follows this example.)
• Decide which protocols to enable. The protocol deployment will install all protocols but will enable only the ones you choose.
• Add the new to-be protocol nodes to the existing cluster using mmaddnode (or use the Install Toolkit).
• In this example, we will add the protocol functionality to nodes already within the cluster.

Install Toolkit commands (toolkit is running on a node that will become a protocol node)
./spectrumscale setup -s 10.11.10.15   <- internal GPFS network IP on the current Installer node that can see all protocol nodes
./spectrumscale config populate -N cluster-node5   <- pick a node in the cluster for the toolkit to use for automatic configuration
./spectrumscale node add cluster-node5 -a -p
./spectrumscale node add cluster-node6 -p
./spectrumscale node add cluster-node7 -p
./spectrumscale node add cluster-node8 -p
./spectrumscale config protocols -e 172.31.1.10,172.31.1.11,172.31.1.12,172.31.1.13,172.31.1.14
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
./spectrumscale enable nfs
./spectrumscale enable smb
./spectrumscale callhome enable   <- If you prefer not to enable callhome, change the enable to a disable
./spectrumscale callhome config -n COMPANY_NAME -i COMPANY_ID -cn MY_COUNTRY_CODE -e MY_EMAIL_ADDRESS
./spectrumscale node list
./spectrumscale deploy --precheck
./spectrumscale deploy

Deploy outcome
• CES Protocol stack added to 4 nodes, now designated as Protocol nodes with server licenses
• 4 CES-IPs distributed among the protocol nodes
• Protocol configuration and state data will use the cesSharedRoot file system
• Callhome will be configured
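A hedged way to confirm the CES-IP network prerequisite above on each to-be protocol node; the CES-IP shown is one of this example's addresses, so adjust for your subnet:
# ip -4 addr show   <- confirm an IP already exists on the same subnet the CES-IPs will use
# ip route get 172.31.1.10   <- confirm a route exists toward the CES-IP network
Do not assign the CES-IPs themselves; the deployment aliases them to the matching network device automatically.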

Example of upgrading protocol nodes or other nodes (not in an ESS)

Pre-upgrade planning
• Refer to IBM Documentation for supported upgrade paths of IBM Storage Scale nodes
• Consider whether OS, FW, or drivers on the protocol node(s) should be upgraded and plan this either before or after the Install Toolkit upgrade
• SMB: Requires quiescing all I/O for the duration of the upgrade. Due to the SMB clustering functionality, differing SMB levels cannot co-exist within a cluster at the same time. This requires a full outage of SMB during the upgrade.
• NFS: Recommended to quiesce all I/O for the duration of the upgrade. NFS experiences I/O pauses and, depending upon the client, mounts may disconnect during the upgrade.
• Performance Monitoring: Collector(s) may experience small durations in which no performance data is logged, as the nodes upgrade.

Install Toolkit commands
./spectrumscale setup -s 10.11.10.11 -st ss   <- Internal GPFS network IP on the current Installer node that can see all protocol nodes
./spectrumscale config populate -N <hostname_of_any_node_in_cluster>
Note: If config populate is incompatible with your configuration, add the nodes and CES configuration to the Install Toolkit manually.
./spectrumscale upgrade config workload -p on   <- Enable prompt to shut down workloads before upgrade
./spectrumscale node list   <- This is the list of nodes the Install Toolkit will upgrade. Remove any non-CES nodes you would rather do manually
./spectrumscale upgrade precheck
./spectrumscale upgrade run

Example of upgrading protocol nodes or other nodes in the same cluster as an ESS

Pre-upgrade planning
• Refer to IBM Documentation for supported upgrade paths of Storage Scale nodes
• If you have a 5148-22L protocol node attached to an ESS, please refer to the ESS 5.3.7 (or higher) Quick Deployment Guide
• Consider whether OS, FW, or drivers on the protocol node(s) should be upgraded and plan this either before or after the Install Toolkit upgrade
• SMB: Requires quiescing all I/O for the duration of the upgrade. Due to the SMB clustering functionality, differing SMB levels cannot co-exist within a cluster at the same time. This requires a full outage of SMB during the upgrade.
• NFS: Recommended to quiesce all I/O for the duration of the upgrade. NFS experiences I/O pauses and, depending upon the client, mounts may disconnect during the upgrade.
• Performance Monitoring: Collector(s) may experience small durations in which no performance data is logged, as the nodes upgrade.

Install Toolkit commands for IBM Storage Scale 5.0.0.0 or higher
./spectrumscale setup -s 10.11.10.11 -st ess   <- Internal GPFS network IP on the current Installer node that can see all protocol nodes
./spectrumscale config populate -N ems1   <- Always point config populate to the EMS node when an ESS is in the same cluster
Note: If config populate is incompatible with your configuration, add the nodes and CES configuration to the Install Toolkit manually.
./spectrumscale node list   <- This is the list of nodes the Install Toolkit will upgrade. Remove any non-CES nodes you would rather do manually
./spectrumscale upgrade precheck
./spectrumscale upgrade run
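A hedged post-upgrade sketch that applies to either example above; confirm the exact completion steps for your level in IBM Documentation:
mmdiag --version   <- run on each upgraded node to confirm the new code level
mmlsconfig minReleaseLevel   <- shows the cluster's current minimum release level
mmchconfig release=LATEST   <- run once, only after every node in the cluster has been upgraded
mmces service list -a   <- confirm SMB and NFS are running again on the protocol nodes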

** URLs are subject to change **
