Celerra Basic Administrator Guide
Version 4.1
EMC Corporation
171 South Street Hopkinton, MA 01748-9103 Corporate Headquarters: (508) 435-1000, (800) 424-EMC2 Fax: (508) 435-5374 Service: (800) SVC-4EMC
This equipment generates, uses, and may emit radio frequency energy. The equipment has been type tested and found to comply with the limits for a Class A digital device pursuant to Part 15 of FCC rules, which are designed to provide reasonable protection against such radio frequency interference.
Operation of this equipment in a residential area may cause interference in which case the user at his own expense will be required to take whatever measures may be required to correct the interference.
Any modifications to this device, unless expressly approved by the manufacturer, can void the user's authority to operate this equipment under Part 15 of the FCC rules.
Software Copyrights
This product incorporates ISE Eiffel 3 Object-Oriented technology from Interactive Software Engineering in Santa Barbara, California. The EMC version of Linux, used as the operating system on the Celerra Control Station, is a derivative of Red Hat Linux. The operating system is copyrighted and licensed pursuant to the GNU General Public License (GPL), a copy of which can be found in the accompanying documentation. Please read the GPL carefully, because by using the Linux operating system on the EMC Celerra File Server, you agree to the terms and conditions listed therein. This product includes software developed by the Apache Software Foundation (http://www.apache.org/).
Trademark Information
EMC2, EMC, MOSAIC:2000, Symmetrix, CLARiiON, and Navisphere are registered trademarks and EMC Enterprise Storage, The Enterprise Storage Company, The EMC Effect, Connectrix, EDM, SDMS, SRDF, TimeFinder, PowerPath, InfoMover, FarPoint, EMC Enterprise Storage Network, EMC Enterprise Storage Specialist, EMC Storage Logic, Universal Data Tone, E-Infostructure, and Celerra are trademarks of EMC Corporation.
EMC, ICDA (Integrated Cached Disk Array), EMC2 (the EMC logo), and Symmetrix are registered trademarks, and EMC Enterprise Storage and Celerra are trademarks of EMC Corporation. Adaptec is a trademark of Adaptec, Inc. BT Tymnet is a registered trademark of British Telecommunication plc. BudTool is a registered trademark of Legato Corporation. Intel and Pentium are registered trademarks of Intel Corporation. Internet Explorer is a trademark of Microsoft Corporation. This product incorporates ISE Eiffel 3 Object-Oriented technology from Interactive Software Engineering in Santa Barbara, California. Linux is a registered trademark of Linus Torvalds. Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation. Netscape is a trademark of Netscape Communications Corporation. Network File System and NFS are trademarks of Sun Microsystems, Inc. Red Hat is a registered trademark of Red Hat Software, Inc. Russellstoll is a registered trademark of Midland-Ross Corporation. 3Com is a registered trademark of 3Com Corporation. UNIX is a registered trademark in the United States and other countries and is licensed exclusively through X/Open Company Ltd. VERITAS, VxFS, and NetBackup are registered trademarks of VERITAS Software Corporation. All other trademarks used herein are the property of their respective owners.
Contents
Chapter 1
Introducing the Celerra File Server

Celerra File Server Environment .............................................. 1-25
    Network File Sharing Protocols ......................................... 1-25
    Other Network Protocols .................................................... 1-26
Celerra Graphical User Interfaces ............................................ 1-27
    Celerra File Server Manager ............................................... 1-27
    Celerra Monitor ..................................................................... 1-28
Configuring Celerra .................................................................... 1-30
    System Tasks .......................................................................... 1-30
    Required System Configuration ......................................... 1-30
    Enhanced System Configuration ........................................ 1-33
User Access Configuration Options ......................................... 1-34
    Supported Platforms ............................................................ 1-34
Where to Go From Here .............................................................. 1-34
Chapter 2
Planning for a Celerra File Server
Chapter 3
Power Sequences
Powering Up the Celerra Cabinet ............................................... 3-2
Planned Power Down .................................................................... 3-4
Emergency Shutdown .................................................................... 3-6
    Powering Up After an Emergency Shutdown ..................... 3-6
The Command Line Interface ....................................................... 3-8
    Command-Line Parameters ................................................... 3-8
Logging In ........................................................................................ 3-9
Chapter 4
Configuring Celerra Network Services
Chapter 5
Creating Volumes, File Systems, and Mount Points

Chapter 6
Mounting and Exporting File Systems for the NFS User
Chapter 7
Configuring Standbys
Data Mover Availability ................................................................ 7-2
Failover Detection .......................................................................... 7-4
    How Data Mover Failover Works ......................................... 7-4
    Failover Example ..................................................................... 7-5
Configuring Standby Data Movers ............................................. 7-8
    When and How to Link .......................................................... 7-8
    Failover Policies ....................................................................... 7-8
    Standby Data Mover Rules and Restrictions ....................... 7-8
    Create Standby ......................................................................... 7-9
Activating a Standby ................................................................... 7-10
    Restoring a Primary Data Mover ........................................ 7-12
    Verifying Standby Data Movers After an Upgrade .......... 7-13
    CIFS Access After Failover ................................................... 7-14
    Periodic Tasks ......................................................................... 7-14
    For More Information ........................................................... 7-14
Control Station Failover .............................................................. 7-15
    Control Station Independence ............................................. 7-15
    Dual Control Station Configuration ................................... 7-15
Initiating a Control Station Failover ......................................... 7-16
Where to Go From Here .............................................................. 7-17
Chapter 8
Managing File Systems
Chapter 9
Managing Your System

Chapter 10
Control Station Utilities
Chapter 11
Troubleshooting
Troubleshooting ............................................................................ 11-2
    Post-Install Error Messages .................................................. 11-2
    Volume Troubleshooting ...................................................... 11-4
    File System Troubleshooting ................................................ 11-5
    Data Mover Troubleshooting ............................................... 11-7
Checking Log Files ..................................................................... 11-11
Monitoring System Activity ..................................................... 11-12
Appendix A
Technical Specifications
Physical Data .................................................................................. A-2
Environmental Data ...................................................................... A-2
Power Requirements ..................................................................... A-3
Hardware/Software Specifications ............................................. A-4
Appendix B
Customer Support
Overview of Detecting and Resolving Problems ...................... B-2
Troubleshooting the Problem ....................................................... B-3
Before Calling the Customer Support Center ............................ B-3
Documenting the Problem ............................................................ B-4
Reporting a New Problem ............................................................ B-4
Sending Problem Documentation ............................................... B-5
Appendix C
GNU General Public License
Tables
1-1    Celerra File Server Software Features and Benefits ................................ 1-5
1-2    Control Station Slots ................................................................................... 1-15
1-3    Data Mover Slots ........................................................................................ 1-17
1-4    Fibre Channel Port Types .......................................................................... 1-20
1-5    Fibre Channel Adapter Specifications ..................................................... 1-20
1-6    Requirements for Celerra File Server Manager ...................................... 1-28
1-7    Minimum Configuration for the Celerra Monitor ................................. 1-29
1-8    Celerra File Server Configuration Process .............................................. 1-31
2-1    Storage Schemes and Storage Usage ......................................................... 2-3
4-1    NICs Used in Celerra ................................................................................... 4-2
4-2    server_sysconfig Sample Breakout ............................................................ 4-3
4-3    Sample Parameters for an IP Interface ..................................................... 4-4
5-1    Volume Types ............................................................................................... 5-2
6-1    NFS Export Options ..................................................................................... 6-8
7-1    Data Mover Failover .................................................................................... 7-4
7-2    Failover Standby Policy Types ................................................................... 7-8
8-1    Parameters for Managing GIDs ................................................................ 8-11
9-1    Creating ACLs ............................................................................................. 9-11
9-2    System Parameters ..................................................................................... 9-16
9-3    Server Parameters ....................................................................................... 9-19
11-1   Error Message Troubleshooting ............................................................... 11-3
11-2   Volume Error Messages ............................................................................. 11-4
11-3   File System Error Messages ...................................................................... 11-5
11-4   File System Scenarios ................................................................................. 11-6
11-5   Data Mover Error Messages ...................................................................... 11-7
11-6   Data Mover Scenarios ................................................................................ 11-9
11-7   System Log Error Messages and General Scenarios ............................ 11-10
11-8   Log Files ..................................................................................................... 11-11
11-9   Monitoring System Performance ........................................................... 11-12
Figures
1-1    Celerra File Server Environment ................................................................ 1-3
1-2    Single Enclosure Model ............................................................................... 1-9
1-3    Celerra Multi-Cabinet Enclosure .............................................................. 1-10
1-4    Hardware Operations ................................................................................ 1-12
1-5    Typical Front Panel View of a Control Station ....................................... 1-14
1-6    507 Data Mover Front View ...................................................................... 1-16
1-7    Fan-Out Topology ...................................................................................... 1-21
1-8    Fan-In Topology ......................................................................................... 1-22
1-9    NFS and CIFS Software ............................................................................. 1-26
1-10   Celerra File Server Configuration Process .............................................. 1-32
2-1    Data Mover Volumes ................................................................................... 2-5
2-2    Data Mover Organization ........................................................................... 2-6
3-1    EPO Power Switch ........................................................................................ 3-2
3-2    EPO Circuit Breakers ................................................................................... 3-3
5-1    Slice Volumes ................................................................................................ 5-3
5-2    Stripe Volumes .............................................................................................. 5-5
5-3    Addressing a Stripe Volume ....................................................................... 5-6
5-4    Efficient Stripe Configuration ..................................................................... 5-6
5-5    Volumes of Unequal Size ............................................................................ 5-7
5-6    Unevenly Divisible Volumes ...................................................................... 5-7
5-7    Meta Volume Configuration ....................................................................... 5-9
5-8    Meta Volume Addressing ......................................................................... 5-10
5-9    Slice/Stripe Meta Volume ......................................................................... 5-10
5-10   Addressing a Stripe/Slice Meta Volume ................................................ 5-11
5-11   Business Continuance Volumes ............................................................... 5-13
6-1    NFS File System Configuration .................................................................. 6-5
6-2    PC Client Access ......................................................................................... 6-11
7-1    Standby Relationship ................................................................................... 7-3
7-2    Failover Example .......................................................................................... 7-6
B-1    Problem Detection and Resolution Process .............................................. B-2
Preface
As part of its effort to continuously improve and enhance the performance and capabilities of the Celerra File Server product line, EMC from time to time releases new revisions of Celerra hardware and software. Therefore, some functions described in this manual may not be supported by all revisions of Celerra software or hardware presently in use. For the most up-to-date information on product features, see your product release notes. If your Celerra system does not offer a function described in this guide, please contact your EMC representative for a hardware upgrade or software update.

Audience
This guide is part of the Celerra File Server documentation set and is meant for the System Administrator to use during system setup, configuration, and management.

Organization
This manual is organized into the following chapters and appendices:

Chapter 1, Introducing the Celerra File Server, presents an overview of the Celerra File Server and describes the features, benefits, and setup required when using the command-line interface. It also provides an overview of the required configuration process with references to step-by-step instructions contained in other parts of this manual.

Chapter 2, Planning for a Celerra File Server, describes how to prepare for Celerra File Server installation.

Chapter 3, Power Sequences, provides essential procedures for powering up and powering down the Celerra cabinet.
Chapter 4, Configuring Celerra Network Services, describes the initial configuration and verification procedures as well as these network services: Domain Name Service (DNS), Network Information Service (NIS), and Network Time Protocol (NTP).

Chapter 5, Creating Volumes, File Systems, and Mount Points, describes features that help you manage volumes and describes how to create volumes, file systems, and mount points.

Chapter 6, Mounting and Exporting File Systems for the NFS User, describes how to mount and export file systems for NFS environments.

Chapter 7, Configuring Standbys, discusses how to configure standby Data Movers and Control Stations.

Chapter 8, Managing File Systems, provides procedures for managing file systems.

Chapter 9, Managing Your System, provides the procedures for system administration tasks.

Chapter 10, Control Station Utilities, describes how to back up and recover the Control Station database and enable daemons.

Chapter 11, Troubleshooting, provides troubleshooting scenarios and procedures for managing your system.

Appendix A, Technical Specifications, provides the Celerra File Server physical, environmental, hardware, and software specifications.

Appendix B, Customer Support, describes the information you should have before contacting EMC's Customer Support Center.

Appendix C, GNU General Public License, describes software license information.

The Glossary provides definitions of technical terms.

Related Documentation
Other related EMC publications include:
- Celerra File Server Command Reference Manual
- Symmetrix Product Manual for your specific Symmetrix model
- System documentation for your FC4700-2 storage system
EMC uses the following conventions for notes, cautions, warnings, and danger notices in its user documentation:
CAUTION
A caution contains information essential to avoid damage to the system or equipment. The caution may apply to hardware or software.

WARNING
A warning contains information essential to avoid a hazard that can cause severe personal injury, death, or substantial property damage if you ignore the warning.

DANGER
A danger notice contains information essential to avoid a hazard that will cause severe personal injury, death, or substantial property damage if you ignore the notice.

Type Conventions
EMC uses the following type style conventions in this guide:

Boldface: Specific filenames or complete paths; window names and menu items in text; emphasis in cautions and warnings.
Italic: Introduces new terms or unique word usage in text; command line arguments when used in text.
Monospace: Examples of specific command entries, displayed text, or program listings.
Monospace italic: Arguments used in examples of command line syntax.

Entries that you type are shown in monospace:
QUERY [CUU=cuu|VOLSER=volser]
where:
[ ] = optional entry
Italics = parameter
Where to Get Help
Obtain technical support by calling your local sales office. For service, call (800) 782-4362 (SVC-4EMC) or (800) 543-4782 (543-4SVC) and ask for Customer Service. If you are located outside the USA, call the nearest EMC office for technical assistance. These offices are listed at the back of this manual.
For the list of EMC sales locations, please access the EMC home page at:
http://www.emc.com/contact/
For additional information on the EMC products and services available to customers and partners, refer to the EMC Powerlink Web site at:
http://powerlink.emc.com
Your Comments
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send a message to [email protected] with your opinions of this guide.
1
Introducing the Celerra File Server
This chapter introduces you to the EMC Celerra File Server and provides an overview of configuring the server for basic operations.
- Overview of the Celerra File Server ................................................ 1-2
- Celerra File Server Features ............................................................. 1-4
- Celerra File Server Cabinet ............................................................... 1-8
- Hardware Components .................................................................. 1-15
- About Celerra Connections ............................................................ 1-21
- Software Components .................................................................... 1-26
- Celerra File Server Environment ................................................... 1-27
- Celerra Graphical User Interfaces ................................................. 1-29
- Configuring Celerra ........................................................................ 1-32
- User Access Configuration Options .............................................. 1-36
Overview of the Celerra File Server
The Celerra File Server attaches to one of the following storage systems:

- High-performance Symmetrix Integrated Cached Disk Array (ICDA) storage system
- FC4700-2 storage system
These powerful combinations produce sharable, network-accessible storage. The Celerra File Server supports Network File System (NFS) and Common Internet File System (CIFS) clients and is easily integrated into existing networks by using standard network interface protocols. The Celerra File Server is easily managed through the command-line interface or a graphical user interface (GUI) known as the Celerra File Server Manager.
Advantages
Enterprises that deploy the Celerra File Server can improve performance and maximize efficiency by:
- Reducing administrative costs: The Celerra File Server provides a single, centralized hardware and software environment.
- Increasing data availability: The Celerra File Server provides redundant components that ensure that access is reliable and data is available.
- Improving server performance: The Celerra File Server includes an optimized operating system that increases server performance and data access speed.
Figure 1-1 shows the Celerra File Server within a network environment.
Figure 1-1  Celerra File Server Environment
Limitations
In this release, the minimum configuration required is the Symmetrix 4 storage system or the FC4700-2 storage system. A Celerra File Server cannot be attached to:
- More than one FC4700-2 storage system
- Both a Symmetrix and an FC4700-2 storage system
- An FC4700-2 storage system that is also used as a SAN storage array for other servers
Celerra File Server Features

Storage Capacity
The Celerra File Server is attached to a storage system. The capacity of the storage system depends on the model and the configuration. Configurations with multiple storage systems provide greater storage capacity.
Large storage capacity storage system configurations are available by requesting a price quotation (RPQ). In addition, EMC storage system development is on-going, and subsequent models may provide capacities greater than the figures cited here. Contact your service representative to learn about the capacities of the most recent storage system models.
System Expansion and Scalability
You can increase Celerra File Server performance and capacity by adding:

- Celerra: Data Movers, to increase network connections and throughput; memory, to improve Data Mover performance
- Symmetrix: disks, to increase storage capacity
- FC4700-2: Disk Array Enclosures (DAE), to increase storage capacity
Data Access and Availability
The Celerra File Server ensures data availability by providing the software features listed in Table 1-2.

Table 1-2  Celerra File Server Software Features and Benefits

   Standby Data Movers (optional): Ensures uninterrupted access to data in the unlikely event of a Data Mover failure. Failover support is automatic, quick, and transparent to users accessing the Celerra File Server. Refer to Data Mover Availability on page 7-2 for information about Data Mover failover.

   Dual Control Stations (optional): Ensures uninterrupted installation and management of the Celerra File Server. In dual Control Station environments, the Control Station can fail over.

   Independent Data Mover and Control Station Architecture: Except during configuration activity, Data Movers operate independently from the Control Station. Control Station failure affects only the Celerra File Server installation and management features, not user access to data.

   Metadata Logging: The Celerra File Server minimizes server reboot recovery time by using a metadata log to record file system changes.
Table 1-2  Celerra File Server Software Features and Benefits (continued)

   Redundant Components: The Celerra File Server provides a full set of redundant critical components. Dual components include redundant data paths within the storage system, dual SCSI or Fibre Channel connections between the storage system and the Data Movers, redundant internal Ethernet between each Data Mover and the Control Station, at least two network paths on each Data Mover, n+1 load-sharing power supplies, on-board battery backup, and dual AC power lines. Refer to Hardware Components on page 1-15 for information about Celerra File Server hardware components. Note: The FC4700-2 storage system uses only Fibre Channel connections.

   CallHome and Call-In Support: The EMC CallHome and Call-In support features automatically alert an EMC support engineer in a support center, providing remote diagnostics and nonintrusive repair. Support centers are staffed 7 days a week, 24 hours a day.

   Hot-Swappable Components: Hot-swappable components reduce repair time and increase data availability. Field-replaceable components include individual Data Movers, the Control Station, power supplies, battery backup systems, fan subsystems, and all SCSI, Fibre Channel, and power cables.
Server Connections to Network Users
Celerra File Servers can provide connectivity for the following network types:

- Fast Ethernet
- Gigabit Ethernet
FDDI and ATM are supported in earlier versions of the Celerra File Server. For information about FDDI and ATM, refer to the Celerra File Server Network Guide.
The Celerra File Server can accommodate up to 14 Data Movers, each of which can contain two network interface cards (NICs). Refer to Chapter 4, Configuring Celerra Network Services, for more information about using the Celerra File Server with your network.
NFS and CIFS Compatibility
The Celerra File Server supports Network File System (NFS) and Common Internet File System (CIFS) protocols, and provides file system support for both UNIX and Windows users. Refer to Chapter 6, Mounting and Exporting File Systems for the NFS User, for more information about NFS services available with the Celerra File Server, and to the Celerra File Server Windows Environment Configuration Guide for more information about CIFS services.

Time Synchronization
Distributed network systems require time synchronization to ensure accurate timestamping and event execution. The Celerra File Server Network Time Protocol (NTP) support allows you to synchronize the internal clock in the Data Movers and Control Stations with an external time source.
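As a sketch of how this might be enabled from the Control Station (the NTP server address below is an invented placeholder, and the server_date syntax should be checked against the Celerra File Server Command Reference Manual for your release):

    # Start the time service on Data Mover server_2, synchronizing
    # against an external NTP server at 192.168.1.5
    server_date server_2 timesvc start ntp 192.168.1.5

Running server_date server_2 with no further arguments displays that Data Mover's current date and time.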
Celerra File Server Cabinet
Both the storage system and the Celerra File Server are based on industry-standard hardware architectures. These architectures, which combine modular, industry-standard hardware with optimized software, allow you to integrate advances in hardware and software technology quickly and easily.
EMC has several high-performance storage systems which are designed for on-line data storage. For a complete list of EMC system models and capabilities, refer to the appropriate documentation or contact your EMC representative.
Depending on the model, the Celerra File Server consists of one or two cabinets:
- In single enclosure models, the Celerra File Server and the Symmetrix system share the same cabinet enclosure. The FC4700-2 is not available in a single enclosure.
- In two-cabinet models, one cabinet contains the Celerra File Server and the other cabinet contains the storage system.
Single Enclosure
The single enclosure is designed to provide a single solution (Celerra and Symmetrix) with a smaller footprint. This enclosure contains the following subsystems and is shown in Figure 1-2:
- Celerra: The Celerra subsystem is built into the lower half of the cabinet and contains slots for up to two Control Stations and four Data Movers. The system is controlled locally using a keyboard and screen located on the front door.
- Internal Dedicated Symmetrix: This is similar to a Symmetrix 33xx or 8xxx ICDA system. It is built into the upper half of the cabinet, and is controlled locally with a Service Processor, mounted inside the rear door.
The two subsystems share the power and backup systems, and are interconnected for data transfer.
Figure 1-2  Single Enclosure Model
Multi-Cabinet Enclosure
The Celerra cabinet has up to 16 positions, or slots, numbered 0 through 15. The bottom row of slots contains one Control Station in slot 0 and an optional Control Station in slot 1; the upper seven rows (slot 2 through slot 15) hold up to 14 servers, or Data Movers. Figure 1-3 shows this cabinet.
Figure 1-3  Celerra Multi-Cabinet Enclosure
Current Celerra File Server cabinets have the monitor and keyboard inside the front door. Older models have the monitor and keyboard outside the front door. Depending on the model, the keyboard either drops down or pulls out. The flat panel display provides the monitor for interactions with the Celerra File Server local management interface. You use the local management interface to enter Celerra File Server commands. You can enter commands to the local interface using the keyboard and monitor on the front of the Celerra File Server cabinet. Remote users can access the Celerra over telnet, or by invoking the Celerra File Server Manager from a compatible Web browser. The flat panel display supports only the Celerra command line interface.
You use the keyboard and built-in trackball to enter commands to the local Celerra File Server interface.

The Celerra File Server modular hardware design and autonomous hardware architecture provide maximum data availability. To ensure high availability, each Data Mover within the Celerra File Server acts as a fully autonomous file server and, except during Data Mover configuration, operates independently from the Control Station. This ensures that Data Mover operations are not interrupted in the unlikely event of a Control Station failure. Data Movers perform server functions by mounting and exporting file systems and by responding to user requests for data access. Data Movers connect to the external network through Gigabit Ethernet or Fast Ethernet.
FDDI and ATM are supported in earlier versions of the Celerra File Server. For information about FDDI and ATM, refer to the Celerra File Server Network Guide.
The Data Movers and the Control Station connect to the storage system using either Ultra FWD SCSI cards or Fibre Channel adapters (FA). You can only connect to a FC4700-2 storage system using Fibre Channel adapters. When connecting the Data Movers to your storage system, the specific number of connections depends on the attached storage system model.
The Celerra File Server supports the following industry standards:
- NFS v2 and v3 over TCP and UDP
- CIFS over TCP
- FTP over TCP, providing utilities for transferring files among heterogeneous systems
To manage the Celerra File Server, you can use either the command line interface or the Web-based Celerra File Server Manager interface. The Celerra File Server also supports:
- Celerra Monitor software, for monitoring the performance of a Symmetrix attached to a Celerra File Server and the performance of any Data Movers mounted in the Celerra cabinet
- SNMP MIB-II, for monitoring server operations and for integration with third-party network management software
- Time service, using NTP on individual Data Movers and on the Control Station
Figure 1-4 shows Celerra File Server hardware operations with SCSI connections between the Data Movers and the Symmetrix system and Figure 1-5 shows the Celerra File Server hardware operations with Fibre Channel connections between the Data Movers and the FC4700-2 storage system.
Figure 1-4  Hardware Operations: SCSI Connections to the Symmetrix
Figure 1-5  Hardware Operations: Fibre Channel Connections to the FC4700-2
Hardware Components
The Celerra File Server cabinet contains the major Celerra hardware components. It contains from one to four backplanes, each with four hardware slots. In the lower backplane:
- The lower-left slot is reserved for the Control Station.
- The lower-right slot is available for a standby Control Station.
Depending on your configuration, the remaining backplane slots contain individual Data Movers, each consisting of:
- Pentium-based motherboard
- PCI bus
- Network cards
- Fibre Channel adapters or SCSI cards
The lower section of the cabinet contains redundant power supplies, battery backup, and dual power cables, as well as the CD-ROM drive for loading software, the communications boards, and a multiplexer. The Celerra File Server contains the following core hardware components:
- Primary Control Station and, optionally, a secondary Control Station
- 2 to 14 Data Movers
- Power supplies
- Modem connections
- Communications board
- Console multiplexer
Control Station
The Control Station provides the installation, configuration, and management features of the Celerra File Server. Control Station features enable you to install and upgrade software, add hardware components, configure network interfaces, allocate volumes, create and map NFS and CIFS file systems, and manage and monitor individual Data Movers.
To provide redundancy in the unlikely event of a Control Station failure, the Celerra File Server can be configured to include a secondary Control Station. The primary Control Station is installed in slot 0 in the Celerra cabinet, and the secondary Control Station is installed in slot 1.
Some older Celerra File Server cabinets do not support a second Control Station. Contact your EMC representative to determine whether you need a hardware upgrade.
Figure 1-6 shows a detailed view of the front panel of a Control Station.
Figure 1-6  Typical Front Panel View of a Control Station
The Control Station provides eight component positions (slots), configured as shown in Table 1-3.
Table 1-3  Control Station Slots

   Slot 1: Connects the external CD-ROM to the MUX board
   Slot 2: Connects to the floppy drive; used to install Control Station software
   Slot 3: Contains the dual-port Ultra FWD SCSI or the Fibre Channel HBA connection to the Symmetrix. Note: For the FC4700-2, use only the Fibre Channel HBA connection.
   Slot 4: Contains the NIC that connects the Control Station to your network
   Slot 5: Unused
   Slot 6: Connects the video board that is used by the monitor on the front of the Celerra File Server cabinet
   Slots 7, 8: Contain dual, internal Ethernet NICs that connect, through the backplane, to each Data Mover
IP Addresses
The IP addresses of the Ethernet NICs used for Celerra File Server internal communications are automatically set during installation. You provide an IP address for the external NIC to the EMC Customer Engineer during the configuration process.
Data Movers
Each Data Mover provides the functions of a fully autonomous file server. Data Movers mount and export file systems stored on the storage system and respond to user requests for data access. Data Movers connect to the external network through Fast Ethernet or Gigabit Ethernet.
Earlier models of the Data Mover used FDDI and ATM network protocols.
You manage the Data Movers using the Control Station by entering commands (through the command line interface or through the Celerra File Server Manager) that refer to individual Data Movers by name. User access to data is independent of the Control Station, however, and is not interrupted in the unlikely event of a Control Station failure.
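For example (a sketch, assuming the default Data Mover names described below; the exact options are documented in the Celerra File Server Command Reference Manual), commands entered on the Control Station name their target Data Mover explicitly:

    # List the PCI devices installed in the Data Mover in slot 2
    server_sysconfig server_2 -pci

    # Display all network interfaces configured on that Data Mover
    server_ifconfig server_2 -all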
By default, the name of a Data Mover consists of the prefix server_ added to the number of the slot in which the Data Mover resides. For example, the default name of the server in slot 2 is server_2. You can change the name to whatever you wish and use that name when issuing commands to that Data Mover. Data Movers do not provide remote login capability.

The Celerra File Server supports a minimum of 2 and a maximum of 14 Data Movers. Data Movers are installed in slot 2 through slot 15 in the Celerra File Server cabinet. Refer to Figure 1-3 to see how Data Movers are mounted in the Celerra cabinet.

Figure 1-7 shows a detailed front view of a typical Data Mover configuration. Other Data Movers will have varying external configurations.
Figure 1-7  Typical Data Mover Front View
Compared to earlier Data Movers, the internal network connections have moved from the front of the Data Mover to connectors located on the back of the Data Mover. The VGA interface is now built into the motherboard and is located next to the keyboard/mouse connectors.
Each Data Mover provides six component positions, configured as shown in Table 1-4.
Table 1-4  Data Mover Slots

   Slot 1: Fibre Channel adapter that connects the Data Mover to the storage system
   Slot 2: Used for Fibre Channel connections to the storage system or to Tape Library Units
   Slot 3: Fast Ethernet NIC that connects the Data Mover to the external network
   Slot 4: Fast Ethernet NIC available for additional configurations
   Slots 5 and 6: Gigabit Ethernet NICs available for additional configurations
IP Addresses
The IP addresses of the Ethernet NICs used for Celerra internal communications are set during installation. You provide an IP address for each NIC in each Data Mover when you configure your external network addresses. For a multiport NIC, you provide individual IP addresses for each port in the NIC.
You can also configure an Ethernet channel, combining several ports into a single trunk with a single IP address. For a description of Ethernet channels, refer to Celerra File Server Network Guide.
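As an illustrative sketch of assigning an external address to a Data Mover port (the device name ana0, the interface name, and the IP addresses below are invented placeholders; see the Celerra File Server Command Reference Manual for the exact syntax on your release):

    # Create an IP interface on device ana0 of Data Mover server_2;
    # the address, netmask, and broadcast address follow the protocol keyword
    server_ifconfig server_2 -create -Device ana0 -name ana0-1 -protocol IP 192.168.10.21 255.255.255.0 192.168.10.255

Repeating the command with a different -Device value configures the remaining ports, and each port of a multiport NIC is configured the same way.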
Power Supplies
The Celerra File Server contains up to six 220 VAC 750 W power supplies in the lower front of the cabinet. These provide N+1 capability to the Celerra File Server. The Emergency Power Off (EPO) box, visible from the back of the cabinet, contains up to six circuit breakers, one for each power supply. Pressing the red button on the back of the Celerra File Server cabinet automatically trips the circuit breakers. Main and auxiliary power provide the main and secondary power sources for the system.
If you increase the number of Data Movers to greater than 10 and have only four power supplies, you must add a fifth power supply.
CD-ROM Drive
The CD-ROM drive is located above the power supplies. You use the CD-ROM drive to install and upgrade Celerra File Server software.

Modem Connections
Modem connections provide the network access for the CallHome and Call-In utilities. Your Customer Engineer configures CallHome and Call-In variables during the installation of the Celerra File Server. CallHome utilities enable the Celerra File Server to automatically notify the EMC Customer Support Center if problems occur on the system. Call-In utilities allow EMC Customer Engineers and technical experts to log in to the Celerra File Server remotely to diagnose and repair problems. It is sometimes necessary to change site information. For example, you may want to change a telephone number or modify the name of an administrator. To modify CallHome or Call-In variables, contact your EMC service representative.

Communications Board and Console Multiplexer
The COMM board monitors environmental conditions and adds messages to the system logs. The Console Multiplexer lets you communicate with the Control Station from the front panel display and keyboard. It also connects the CD-ROM to the Control Station.
About Celerra Connections

SCSI
The Celerra File Server uses Ultra Fast Wide Differential (UFWD) Small Computer Systems Interface (SCSI). Each SCSI interface card has dual ports that are used to connect to the Symmetrix or tape library unit (TLU). While Fibre Channel is the preferred physical interface between Control Stations, Data Movers, and the storage system, SCSI is used on many Data Movers and Control Stations that do not have Fibre Channel capabilities. When Fibre Channel and SCSI adapters are found on a Data Mover, the SCSI interface is to be used only for the connection to the TLU. Currently, upgrades for both Control Stations and Data Movers are available through EMC Customer Service Representatives.
Only Fibre Channel connectivity is supported for the FC4700-2 storage system.
Fibre Channel
Fibre Channel is a serial data transfer interface that operates over optical fiber (or copper wire) at data rates up to 100 MB/s (theoretical limit). Networking and I/O protocols (such as SCSI commands) are encapsulated and transported within Fibre Channel frames, which allows high-speed transfer of multiple protocols over the same physical interface. The Celerra File Server supports switched fabric topology and supports ANSI Fibre Channel Class 3 service, a connectionless service similar to packet-switched systems, such as Ethernet, in that the path between two devices is not reserved or dedicated.
Port Types
Fibre Channel standards use the term node to describe any device connected to one or more other devices over a Fibre Channel interface. Each node has at least one port that connects to other ports in other nodes. There are seven Fibre Channel port types. A switched fabric without arbitrated loop connections, as supported by the Celerra File Server, uses only the types listed in Table 1-5.

Table 1-5  Fibre Channel Port Types

   N_Port: A port on a node outside the fabric. Once connected, the port is part of the fabric, but it remains an N_Port. Fibre Channel-equipped Data Movers connect to the switched fabric using N_Ports.
   F_Port: A port on a switching device that connects to an N_Port and brings that connection (internally within the switch) to the fabric. The switches that constitute the switched fabric connect to the Data Movers using F_Ports.
   E_Port: A port on a switching device that connects to another E_Port on the same or a different switching device. The switches that constitute the switched fabric connect to each other using E_Ports.
   G_Port: A generic port on the switching device that can operate as an E_Port or an F_Port. On connection, this port automatically configures as an F_Port or E_Port.
Fibre Channel is supported only on Data Movers and Control Stations with Fibre Channel adapters. Data Movers and Control Stations with Fibre Channel enabled are configured with the following Fibre Channel adapter:
Table 1-6  Fibre Channel Adapter Specifications

   Adapter type: Data Mover, Emulex LP9002DC/L; Control Station, Qlogic QLA-2212
   Ports per adapter: Two, full-duplex (both)
   Cable requirements: Multimode cable with SC connectors (both)
Topologies
A Fibre Channel switched environment consists of a physical topology and a logical topology. The physical topology describes the physical connections among devices. The logical topology describes the logical paths established between the Data Mover device names and their associated storage system ports and volumes.
Celerra logical topologies in the switched environment can generally be described in terms of capacity (fan-out) and consolidation (fan-in).

Capacity Topology (Fan-Out)
The capacity topology allows a single adapter in a Celerra Data Mover to access multiple storage system devices. A capacity topology, as used in the Celerra File Server, is described by the fan-out rate. In a case with a single Data Mover adapter and three storage systems, the Data Mover N_Port connects to the switch F_Port. In turn, each storage system Fibre Channel Adapter N_Port connects to another switch F_Port. Figure 1-8 depicts a simplified fan-out topology.
This Fibre Channel Adapter port is sometimes called the FA port for the Symmetrix and the Storage Processor port for the FC4700-2.
Figure 1-8  Fan-Out Topology
Consolidation Topology (Fan-In)
The consolidation topology, as used in the Celerra File Server, is recommended when one or more Data Movers must be connected to
a high-capacity storage system, expanding the required number of server connections. A consolidation topology is described by the fan-in ratio. A 32-port switch can be configured with a physical topology of 24 Data Mover links and 4 storage system links, organized into a logical topology of four 6:1 fan-ins. The 6 Data Mover adapter ports that consolidate into each storage system port have shared access to the port. In the simplified scenario in Figure 1-9, a single Fibre Channel Adapter port is shared by 6 Data Mover connections; a 6:1 logical topology is implied.
Figure 1-9  Fan-In Topology

For most environments, the recommended fan-in ratio is 6:1.
About Zoning
You can configure the switched fabric into zones to limit each Data Mover to a specific set of storage system front-end addresses (for example, when you have a large number of available volumes). Zoning allows you to group devices by such characteristics as function and location. Devices can be assigned to one or more zones. Devices in a zone can see all addresses in the zone, but none in other zones.
For the Celerra File Server, a zone should encompass both ports on a Data Mover HBA and two Fibre Channel ports in the storage system on different adapters. Note that each FA can be a member of multiple zones, but that only one HBA can be in a zone. Consult your switch vendor's documentation for details on how to configure zones.
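To make the idea concrete, here is a hypothetical zone definition in the style of a Brocade-type fabric switch. The alias and zone names are invented, and zoning commands differ by switch vendor and firmware, so treat this purely as an illustration:

    # Group both ports of one Data Mover HBA with two storage-system
    # Fibre Channel ports on different adapters, then activate the config
    zonecreate "dm2_zone", "dm2_hba_p0; dm2_hba_p1; stor_fa3a; stor_fa4a"
    cfgcreate "celerra_cfg", "dm2_zone"
    cfgenable "celerra_cfg"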
Using one or more Fibre Channel switches, you can connect the Celerra File Server and storage system across a Class 3 switched fabric. In this configuration, one port on each Data Mover is connected to one switch, while the other port is connected to the other switch. In turn, each port on the Symmetrix FA or each port on a FC4700-2 Storage Processor is also connected to a different switch. These redundant connections provide the following performance advantages:
- Under normal conditions, each port automatically performs load-balancing.
- The distributed connections to different switches provide both physical redundancy and circuit redundancy by providing alternative connections through the switched fabric in the event of Data Mover port, switch, or Fibre Channel Adapter port failure.
In addition, if the switched fabric is properly configured, each switch can fail over to another in the fabric. In the event of a single switch failure, if the Data Mover and the storage system each have active connections to the fabric, Fibre Channel connectivity between the two devices is maintained.

Supported Switches
For the list of Fibre Channel switches that the Celerra File Server supports, please contact your EMC Customer Service representative. Refer to the vendor documentation for particular switches for more information on configuration procedures and capabilities. For a comprehensive tutorial on Fibre Channel technology, visit the Fibre Channel Industry Association's Web site at http://www.fibrechannel.com/technology.
Software Components
The Celerra File Server software components include:
- Control Station software, which provides a controlling subsystem and management interface
- Data Access in Real Time (DART) software, which provides a server operating system optimized for I/O
Control Station software provides the controlling subsystem of the Celerra File Server as well as the management interface to all file server components. The Control Station runs an EMC derivative of Red Hat Linux as the operating system on the Control Station. You use Control Station software to install, manage, and configure the Data Movers, monitor the environmental conditions and performance of all components, and implement the CallHome and dial-in support features. As part of the installation process, EMC installs Celerra File Server NAS software on the Control Station. NAS software lets you monitor and manage the Data Movers through the Control Station.
A second Control Station can be installed to provide redundancy. Older Celerra cabinets, however, may not support the use of dual Control Stations. Contact your EMC representative to determine whether you need an upgrade to support dual Control Stations.
DART software provides a real-time, multi-threaded operating system, optimized for network file access and input/output (I/O) operations. DART software is installed by the Control Station on each Data Mover. Each Data Mover functions as an individual file server and provides user access to the file systems that you create on Data Movers and export for use by clients.
Celerra File Server Environment

Network File Sharing Protocols
The Celerra File Server provides support for the following protocols:
- Network File System (NFS) provides distributed file services for transparent file sharing in network environments. The NFS protocol is typically used by native UNIX clients, as well as by network clients that include NFS capabilities.
- Common Internet File System (CIFS) extends the Microsoft SMB file-sharing protocol. It allows users to share file systems over the Internet and intranets. The CIFS protocol is used by Windows and Windows NT machines, and is also available on some UNIX and VMS systems.
- Celerra HighRoad adds a thin, lightweight File Mapping Protocol (FMP) for file layout information and shared-conflict management between the hosts and the Celerra. Hosts use the file layout information to read and write file data directly from and to the storage system. One benefit is that data access is at channel speed rather than network speed.
Celerra HighRoad is supported only on a Celerra File Server attached to a Symmetrix.
Figure 1-10 shows how NFS and CIFS software is organized within a Data Mover. Both CIFS and NFS interface to UxFS, a log-based UNIX file system. Data is stored in UxFS format on the disk and can be made available to CIFS and NFS users.
Figure 1-10  NFS and CIFS Software
The File Mapping Protocol (FMP) is supported only on a Celerra File Server attached to a Symmetrix.
Celerra Graphical User Interfaces

Celerra File Server Manager
The Celerra File Server Manager interface enables you to perform most Celerra File Server operations, including configuring and managing your Celerra File Server. You must have an administrator or operator password to log in to the Celerra File Server Manager. The Celerra File Server Manager provides a simpler way to enter commands and parameters, especially if you are not familiar with the Linux command style and syntax. The Celerra File Server Manager allows you to select the function you want, and then enters the appropriate Celerra File Server command for you.
Most Celerra File Server commands can be accessed through the Celerra File Server Manager, but some are available only through the command line interface. Refer to the Celerra File Server Manager and Celerra Monitor Technical Note for detailed information about using the Celerra File Server Manager.
To use the Celerra File Server Manager, your system must meet the requirements shown in Table 1-7.
Table 1-7  Requirements for Celerra File Server Manager

   Workstation: UNIX workstation running with minimal options displayed, or a PC running Windows 95, Windows 98, Windows NT, or Windows 2000
   Monitor: 17-inch or greater is preferred, with support for at least 256 colors. Minimum recommended display resolution is 640 x 480; preferred resolutions are 1152 x 900 (UNIX) or 1024 x 768 (PC).
   Browser: One of the following Internet browsers: Netscape Navigator 4.5 or higher, or Internet Explorer 5.0 or higher. In addition, JavaScript (1.2 or higher) must be enabled (the default), and cache (disk and memory) must be turned off.
If your system does not meet the minimum resolution requirements, the browser functions at suboptimal capacity. You can increase the area of the browser window available to Celerra File Server Manager by hiding some of the browser toolbars. The documentation for this interface is contained in and accessible through on-line help.
Celerra Monitor
Celerra Monitor is an application that lets you closely monitor specific performance data about the Data Movers in the Celerra cabinet and the attached Symmetrix system that provides the data storage capability of the Celerra File Server. You select the object (Symmetrix system or individual Data Mover) you want to monitor from the top-level window of the Celerra Monitor. You can use Celerra Monitor to perform the following tasks for the Symmetrix system and the Data Movers:
- Receive online alerts of events posted to the system log
- View performance
- View configuration
- View statistics
- View logs
- View summaries of past configurations
- Control access and polling of monitored data
Celerra Monitor is a Java client/server application that consists of a Java server (poller) that runs on the Control Station and a Java applet (or in the case of Windows, an application) that runs in your browser. Celerra Monitor requires the following minimum configuration:
- Direct (rather than proxy) connection to the Internet
- Netscape Navigator 4.5 or later, or Internet Explorer 5.0 or later
- Java Virtual Machine (JVM) 5.0 or later
- Symmetrix/4.8 with 5265 microcode or later configuration
Celerra Monitor is only supported on a Celerra File Server attached to a Symmetrix. To monitor the performance of the FC4700-2 storage system, use Navisphere Manager.
The workstation on which the Celerra Monitor software is installed must have one of the following minimum configurations.
Minimum Configuration for the Celerra Monitor

   Solaris (SunOS 5.5.1): 300 MHz processor, 64 MB RAM, 256 MB swap space, 16K color graphics, Netscape Navigator 4.5 or greater
   Windows NT 4.0, Windows 95, Windows 98, or Windows 2000: 200 MHz processor, 64 MB RAM, 128 MB virtual memory, 16K color graphics, 5 MB disk space for Celerra Monitor, 6 MB disk space for JVM 5.0, Netscape Navigator 4.5 or greater, or Internet Explorer 5.0 or greater
Refer to the Celerra File Server Manager and Celerra Monitor Technical Note for detailed information about installing and using Celerra Monitor.
Configuring Celerra
Once EMC's Customer Service Engineer has completed the initial installation of your Celerra File Server hardware and software, you can begin to customize the Celerra to support the users in your organization. You must make a number of decisions during the configuration process, especially in the areas of volume configuration, network connectivity, and NFS/CIFS interoperability. These decisions must be based on user expectations and your specific network environment.
System Tasks
Using either interface as System Administrator, you perform such tasks as:
- System configuration and verification (drives, networks, accounts)
- Volume configuration and file system creation
- File system management (creating mount points, mounting/unmounting, extending, deleting, renaming, exporting/unexporting, checking capacity, creating snapshots, archiving)
- System monitoring (Data Mover free space, routing table, ARP table, activity logs)
Table 1-9 describes the basic steps involved in configuring the Celerra File Server. A brief description of each task, along with specific chapter and page references, is provided.
Also refer to Figure 1-11 for a flow chart of the configuration process.
Table 1-9   Celerra File Server Configuration Tasks

Task 1. Set up network interfaces: Set up the network interfaces that enable users to connect to the Data Movers and retrieve files. Refer to Creating an IP Interface on page 4-2.
Task 2. Create volumes: Create the volume configuration required to support file systems. Refer to Creating Volume Configurations on page 5-2.
Task 3. Create file systems: Create the file systems that contain user files. Refer to Creating a File System on page 5-15.
Task 4. Create a mount point: Create a network access point for each Data Mover. Refer to Creating a Mount Point on page 5-16.
Task 5. Mount the file system with options: Mount the file system, specifying the options appropriate for your application. Refer to Mounting a File System on page 6-6 and to the Celerra File Server Windows Environment Configuration Guide.
Task 6. Export the file system: Make the network access point available for NFS and CIFS users. Refer to Exporting a Path on page 6-8 and to the Celerra File Server Windows Environment Configuration Guide.
Task 7. Optionally, configure Data Movers to use CIFS: Configure the Data Movers to become members of a Windows domain and establish security policies. Refer to the Celerra File Server Windows Environment Configuration Guide.
Task 8. Configure network services: Configure NIS, DNS, and NTP. Refer to Configuring DNS and NIS on page 4-7 and Configuring Time Services on page 4-9.
Task 9. Configure failover: Configure a Data Mover as a standby for a primary Data Mover. Refer to Configuring Standby Data Movers on page 7-8.
Figure 1-11   Configuration process flow chart
Once you have completed the basic Celerra configuration, you can further customize the operation of your system. For example, you can configure the Celerra for disaster recovery. For a description of additional features, refer to the Celerra File Server User Information CD-ROM, included with this documentation set.
Supported Platforms
The Celerra File Server supports users on platforms that have NFS and CIFS capability. For a list of supported platforms, review the Celerra File Server Interoperability matrix:
http://www.emc.com/horizontal/interoperability/matrices/Celerra_Interoperability _Matrix.pdf
2
Planning for a Celerra File Server
Read this chapter as you plan your Celerra File Server configuration with EMC network and storage specialists. You can also consult this chapter if you need to upgrade your system at a later date. This chapter describes the Celerra File Server network and storage requirements.
• Installation Prerequisites...................................................................2-2
• Determining Storage Needs .............................................................2-3
• Determining the Number of Data Movers .....................................2-4
• Organizing Data and Data Movers .................................................2-5
Installation Prerequisites
Before installing the Celerra File Server, you must determine the following site requirements:
• The amount of storage you need today, as well as the amount of additional storage you anticipate needing to expand and meet new business requirements
• The number of Data Movers you require
• How you will organize volumes and file systems across Data Movers
• How your users access data
• The topology of your network
• The NFS and CIFS environments
Determining Storage Needs

The amount of storage available depends on the storage system model, the number of disk drives configured, and the protection scheme that you choose. Table 2-1 and Table 2-2 summarize the ways the available storage protection schemes allocate storage.
Table 2-1   Symmetrix Storage Schemes and Storage Usage

  Storage Scheme       Storage Usage                                       Supported Drive Sizes
  Mirroring (RAID-1)   50% available for storage; 50% used for mirroring   9 GB, 18 GB, 36 GB, 73 GB, 181 GB

Table 2-2   FC4700-2 Storage Schemes and Storage Usage

  Storage Scheme       Storage Usage                                       Supported Drive Sizes
  Striping (RAID-5)    88% available for storage; 12% used for backup      73 GB
Determining the Number of Data Movers

The number of Data Movers you require depends on:
• Data Mover performance requirements
• Data availability requirements
• Network capacity and topology
To help you calculate the number of Data Movers you need, EMC provides on-site specialists who work closely with your technical staff to provide customized solutions based on a thorough understanding of your network and storage requirements. When determining the number of Data Movers, consider your data availability requirements and include any necessary standby Data Movers in your calculations. A standby Data Mover is a spare Data Mover that is configured to provide failover protection in the unlikely event of a Data Mover failure. Networks that require high data availability usually include one or more standby Data Movers. You can configure a standby Data Mover to substitute for a single Data Mover or for several Data Movers. Standby Data Movers:
• Are not configured to export data until a primary Data Mover fails
• Do not count in the number of Data Movers you need to access data
Refer to Data Mover Availability on page 7-2 for information about Data Mover failover.
Organizing Data and Data Movers

When organizing data and Data Movers, you must decide:
• How to map volumes to file systems
• The specific Data Movers through which file systems are accessed
• The type of data to which each Data Mover provides access
Except for the system volumes, all volumes on the storage system unit are initially available for data storage. The Celerra File Server provides volume management features that help you to divide, combine, and group volumes to meet your configuration needs. Refer to Creating Volume Configurations on page 5-2 for additional information. During configuration, volumes are sized and file systems are created and assigned to individual Data Movers. Data Movers provide I/O access to the file systems that they control and to data that resides within the files. Figure 2-1 shows how sized volumes are assigned to Data Movers. In this example, the storage system unit is configured with seven 36 GB disk drives. These are combined into three 72 GB volumes, mapped to file systems, and assigned to Data Mover 2, Data Mover 3, and Data Mover 4. The remaining 36 GB of storage is mapped to a file system and assigned to Data Mover 5.
Figure 2-1   Sized volumes assigned to Data Movers
Typical Configuration
Figure 2-2 shows how volumes, file systems, and Data Movers can be organized to support users within a business environment. In this illustration:
• Data Movers 2 and 3 support the CAD department. This department typically generates and stores large amounts of data.
• Data Mover 4 supports the CAE department, where data requirements are comparatively smaller.
• Data Mover 5 supports administration. This Data Mover is configured with the least amount of storage space.
Figure 2-2   Typical organization of volumes, file systems, and Data Movers across departments
Data Mover Organization

Data Movers own the file systems and the volumes that you assign to them. You can assign multiple Data Movers to a volume and file system only when access to the volume and file system is read-only.
3
Power Sequences
This chapter provides instructions for powering down the Celerra File Server, both as a planned procedure and under emergency conditions. It also explains how to apply power to the Celerra cabinet.
• Powering Up the Celerra Cabinet....................................................3-2
• The Command Line Interface...........................................................3-4
• Logging In ...........................................................................................3-5
• Planned Power Down........................................................................3-6
• Emergency Shutdown .......................................................................3-8
Powering Up the Celerra Cabinet

Set the black AC In power switch to the ON (|) position. The Power indicator should be green.
Turn on the circuit breakers (CB):
• If you have attached storage systems, set all circuit breakers in use to the ON (up) position.
• If you have a single-enclosure cabinet, turn on circuit breakers 5 and 6.
Result: This turns on the power to the Symmetrix.

Verify that the Symmetrix is online. Refer to the Symmetrix documentation.

Set the rest of the circuit breakers to the ON position.
Result: This turns on the power to the Celerra. The fans in the cooling module at the top of the unit are controlled by the circuit breakers that turn on the power to the Celerra.
The Control Station boot sequence requires approximately five minutes to complete.
Command-Line Parameters
EMC recommends that you limit the length of volume names, file system names, and so forth, as well as limit the use of multiple commands on the same line. Details on commands can be found in the Celerra File Server Command Reference Manual.
Logging In
This section describes how to access the Celerra File Server from the command line interface.

Local Access

For local access to the command line interface:
• Log in as nasadmin.
Remote Access
For remote access to the command line interface:
1. Enter rlogin or telnet followed by the IP address of the Control Station.
2. Log in as nasadmin.
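For example, a remote session might look like this (the Control Station address 192.168.42.100 is hypothetical; substitute your own):

$ telnet 192.168.42.100
login: nasadmin
Password: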
Planned Power Down

CAUTION: Never power down the system using the red EPO switch, unless in an emergency. In addition, you must perform this procedure from the Celerra cabinet keyboard. Do not use a telnet session.

To power down the system:
1. If you have a backup Control Station, halt the secondary Control Station. See Halting the Control Station(s) on page 10-7.
2. Halt the Data Movers.
3. Halt the primary Control Station.
4. Turn off the power.
To halt a Control Station:
1. Log in as root.
2. Turn off the Box Monitor, to prevent a call home event, by typing:
   /etc/rc.d/rc3.d/S95nas stop
3. Enter the following command:
   /sbin/init 0
   Result: The following displays:
   The system is halted. Power down.
Emergency Shutdown
Use the following shutdown procedure only during emergency situations.

WARNING: Unless faced with property damage or personal hazards, never power down a Celerra File Server by using the red EPO switch or black circuit breaker switch on either the Symmetrix or the Celerra cabinet.

To power down the Celerra cabinet in the case of an emergency:
• At the rear door of the Celerra cabinet, turn the EPO (red) switch to the OFF position.
This action immediately removes all power from the Data Movers, the Control Station, and the Celerra cabinet. No other action is required in case of emergency.
To restore power after an emergency shutdown:
1. If needed, re-apply power at the AC wall breakers.
2. On the rear door of the Celerra cabinet, set the red EPO switch to the ON position.
3. Set all circuit breakers in use on the EPO Box to the ON (up) position. The system starts to power up.
4. Switch the battery breaker to the ON position.
5. On the rear door of the Celerra cabinet, set the black AC IN breaker to the ON position.
6. Verify that all fans in the cooling module on top of the unit are operating.
4
Configuring Celerra Network Services
This chapter describes how to configure an IP network interface and network services on a Celerra File Server. Use the information in this chapter for:
• Creating an IP Interface.....................................................................4-2
• Configuring DNS and NIS................................................................4-7
• Configuring Time Services ...............................................................4-9
Creating an IP Interface
The Celerra File Server acts as a link between users and stored data. To reach the data through a network connection, you must configure the Data Mover IP interfaces. To create a new Data Mover IP network interface you must configure an IP address, a subnet mask, and a broadcast address for the specified interface and device. Table 4-1 lists the various network cards used in the Celerra File Server with their respective protocols, types, and mnemonics.
Table 4-1   NICs Used in Celerra

  Protocol            Type                       Interface mnemonic
  Ethernet            4-port 10/100 BaseT (a)    ana
  Gigabit Ethernet    single port                ace
  FDDI (b)            single and dual attach     fpa
  ATM (b)             OC-3                       fa2

a. Single-port Ethernet NICs were supported in earlier versions of the Celerra File Server.
b. FDDI and ATM protocols are supported in earlier versions of the Celerra File Server.

TIP: Configuration for ATM NICs does not follow the same procedure as Ethernet and FDDI. Refer to Creating ATM Interfaces in the Celerra File Server Network Guide.
Initially, the system assigns a default name for each device. This pre-configured device name consists of the interface mnemonic, plus a number that is sequentially appended, starting with 0 (zero). For example, the first Ethernet port is ana0. For a device with four Ethernet ports, the remaining ports would be named ana1, ana2, and ana3 (respectively). To view the device names for all adapters available for a specific Data Mover, type:
$ server_sysconfig movername -pci
Command
$ server_sysconfig server_2 -pci
server_2 : PCI DEVICES:
  Slot: 1  Emulex LP8000 Fibre Channel Controller
    1: fcp-0 IRQ: 14 addr: 10000000c9234f00
    0: fcp-1 IRQ: 12 addr: 0000000000000000
  Slot: 2  Alteon Tigon-2 Gigabit Ethernet Controller
    0: ace0 IRQ: 12 txflowctl=disable
  Slot: 3  Adaptec AHA-3944AUWD Multiple channel SCSI Controller
    0: scsi-0 IRQ: 12
    0: scsi-1 IRQ: 9
  Slot: 4  Adaptec ANA-6944 Multiple Fast Ethernet Controller
    3: ana0 IRQ: 9 speed=auto duplex=auto rcvbufs=auto txbufs=auto
    2: ana1 IRQ: 9 speed=auto duplex=auto rcvbufs=auto txbufs=auto
    1: ana2 IRQ: 9 speed=auto duplex=auto rcvbufs=auto txbufs=auto
    0: ana3 IRQ: 9 speed=auto duplex=auto rcvbufs=auto txbufs=auto
  Slot: 5  Alteon Tigon-2 Gigabit Ethernet Controller
    0: ace1 IRQ: 14 txflowctl=disable
Table 4-2   server_sysconfig Sample Breakout

  PCI Slot & Port   Board Type                      Device Name
  Slot 1, Port 1    Fibre Channel Controller        fcp-0
  Slot 1, Port 0    Fibre Channel Controller        fcp-1
  Slot 2, Port 0    Gigabit Ethernet Card           ace0
  Slot 3, Port 0    Multi-Channel SCSI Controller   scsi-0
  Slot 3, Port 0    Multi-Channel SCSI Controller   scsi-1
  Slot 4, Port 3    Fast Ethernet Controller        ana0
  Slot 4, Port 2    Fast Ethernet Controller        ana1
  Slot 4, Port 1    Fast Ethernet Controller        ana2
  Slot 4, Port 0    Fast Ethernet Controller        ana3
  Slot 5, Port 0    Gigabit Ethernet Controller     ace1
The output listing returned by the server_sysconfig -pci command contains all PCI-based adapters, including Fibre Channel and SCSI controller cards. To configure an IP interface for one of the physical ports on a Fast Ethernet card, select the physical port you want to connect to and then use the server_ifconfig command to create the interface. Table 4-3 consolidates sample information needed to create an IP interface on the Fast Ethernet card shown in Table 4-2.
Table 4-3   Sample Parameters for an IP Interface

  Information Needed                      Parameter
  Data Mover name                         server_2
  Specific board type                     Fast Ethernet
  Device name for Slot 4, Port 0          ana3
  Name you wish to give the interface     Marketing-1
  Sample IP address                       192.168.42.88
  Sample netmask                          255.255.255.0
  Sample broadcast address                192.168.42.255
The following example uses the information in Table 4-3 to create an IP interface on a Fast Ethernet card using the server_ifconfig command. The sample Ethernet card is physically located in the Data Mover named server_2.
$ server_ifconfig movername -c -D device_name -n if_name -p IP ip_addr ip_mask ipbroadcast

$ server_ifconfig server_2 -c -D ana3 -n Marketing-1 -p IP 192.168.42.88 255.255.255.0 192.168.42.255
server_2 : done
In the example above, port 0 of the Fast Ethernet card in slot 4 of the Data Mover named server_2 (from Table 4-2) is configured for IP. It has a device name of ana3 and an interface name of Marketing-1.

Troubleshooting Tip

If the following message appears:
server_2: No such device or address
The addresses you have entered are invalid. Make sure that the broadcast address is in the same subnet as the IP address that you entered.
The device name and interface name of every NIC contained in and recognized by your system is listed by the server_ifconfig -a command. If you have not yet configured an interface, it will not appear in the list of network interfaces. To obtain a list of configured interfaces on a Data Mover, type:
$ server_ifconfig movername -a
Sample Output
server_2 :
Marketing-1 protocol=IP device=ana3
        inet=192.168.42.88 netmask=255.255.255.0 broadcast=192.168.42.255
        UP, ethernet, mtu=1500, vlan=0, macaddr=0:0:d1:1e:55:72
loop protocol=IP device=loop
        inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
        UP, loopback, mtu=32704, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
el31 protocol=IP device=el31
        inet=192.1.2.2 netmask=255.255.255.0 broadcast=192.1.2.255
        UP, ethernet, mtu=1500, vlan=0, macaddr=0:1:2:c0:7e:ef netname=localhost
el30 protocol=IP device=el30
        inet=192.1.1.2 netmask=255.255.255.0 broadcast=192.1.1.255
        UP, ethernet, mtu=1500, vlan=0, macaddr=0:1:2:c0:7e:af netname=localhost
Troubleshooting Tip
If you experience problems in network connectivity, your network cables may not have been connected properly. Contact EMC Customer Service.
Configuring DNS and NIS

Note: See the Celerra File Server Windows Environment Configuration Guide for DNS combinations with Windows domains.
NIS Authentication
NIS provides authentication for client access to file systems by automatically mapping the IP address and the host name. Once the server is authenticated, access to the file system is permitted. To enable the NIS server for a Data Mover, you must know:
• The NIS domain name
• The IP address of each NIS server
To configure a Data Mover as a DNS or NIS client, you must set the DNS or NIS server configuration for the Data Mover.
You can enter a total of 13 IP addresses; 3 for the DNS and 10 for the NIS server.
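The commands for setting the DNS and NIS client configuration take the domain name followed by the server address(es), mirroring the delete form shown below; as a sketch, with a hypothetical domain and addresses (confirm the exact syntax in the Celerra File Server Command Reference Manual):

$ server_dns server_2 eng.example.com 192.168.42.10
server_2 : done

$ server_nis server_2 eng.example.com 192.168.42.11
server_2 : done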
Command

To delete an NIS entry, type:
$ server_nis movername -delete domainname ip_addr
Configuring Time Services

Command

To start time synchronization between a Data Mover and an external source, type:

$ server_date movername timesvc start ntp host host host

If you indicate Delay under Clock Set, but do not indicate a time, the delay default is 1 hour (60 minutes). If you do not indicate a Delay, the default is to execute immediately and again at the polling time indicated.
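For example, to synchronize server_3 against a single NTP server (the address 192.168.42.50 is hypothetical):

$ server_date server_3 timesvc start ntp 192.168.42.50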
Sample Output
server_3 : done
To complete the following network interface tasks, refer to the Celerra File Server Network Guide:
• Modify an IP interface after it has been created.
• Create multiple interfaces for a device.
• Delete an interface after it has been created.
• Create Gigabit Ethernet interfaces.
• Create a Fail Safe Network device.
• Create Ethernet Channel devices.
To complete the following volume and file system tasks, refer to Chapter 5 and Chapter 6 in this guide and to the Celerra File Server Windows Environment Configuration Guide:
• Create volume configurations required to support file systems.
• Create the file systems.
• Create a network access point (mount point) for each Data Mover.
• Mount the file system, specifying the options appropriate for your application (NFS or CIFS).
5
Creating Volumes, File Systems, and Mount Points
This chapter describes how to create volumes, file systems, and mount points.
• Creating Volume Configurations.....................................................5-2
• Slice Volumes ......................................................................................5-3
• Stripe Volumes....................................................................................5-5
• Meta Volumes .....................................................................................5-9
• Business Continuance Volumes .....................................................5-13
• Creating a File System .....................................................................5-15
• Creating a Mount Point...................................................................5-16
Volume Types

The Celerra File Server supports the volume types listed in Table 5-1.

Table 5-1   Volume Types

  Disk volume: Represents the basic disk configuration on the storage system. Refer to Configuring a Disk Volume on page 5-2.
  Slice volume: Typically provides a smaller disk section, similar to a UNIX disk partition. Refer to Slice Volumes on page 5-3.
  Stripe volume: Provides an arrangement of volumes, organized into a set of interlaced stripes that improve volume performance. Refer to Stripe Volumes on page 5-5.
  Meta volume: Provides a required concatenation of slice, stripe, or disk volumes. Meta volumes are required for dynamic file system storage. Refer to Meta Volumes on page 5-9.
  Business continuance volume (BCV): Enables the use of TimeFinder/FS file system copies. Only supported on a Celerra File Server attached to a Symmetrix. Refer to Business Continuance Volumes on page 5-13.

Important: File systems can only be created and stored on meta volumes.
Before beginning volume management, refer to Chapter 2, Planning for a Celerra File Server, for information about assessing and determining your storage requirements, and which configuration would best accommodate your needs.
Volumes are initially configured as disk volumes. Each usable storage system volume appears as a usable disk volume. After the initial installation, you configure disk volumes only when you add disks to the storage system.
Slice Volumes
A slice volume is a logical, non-overlapping section cut from another volume component. A slice volume is similar to, but more versatile than, a UNIX disk partition. Unlike UNIX disk partitions, you can create an unlimited number of slice volumes.
Slice volumes are automatically created during Celerra File Server installation and are immediately available for configuration. Slice volumes can be configured to any size but are typically used to create smaller, more manageable units of storage. The definition of a more manageable logical volume size depends on your system configuration and the type of data you are storing. Figure 5-1 shows a 2 GB volume on which a 0.5 GB slice is defined.
Figure 5-1   Slice Volumes

Although slice volumes define your storage capacity, you cannot use them to store file information. You must configure slice volumes as part of a meta volume in order to store file system data on them. Refer to Meta Volumes on page 5-9 for additional information.
Slice volumes are cut out of other volume configurations to make smaller, more manageable volumes. When you create a slice volume, you can indicate an offset: the distance (in megabytes) from the end of one slice to the start of the next. Unless a value is specified for the offset (the point on the container volume where the slice volume begins), the system places the slice using a first-fit algorithm (the default), in the next available volume space.

You must first identify the volume from which the slice volume will be created. The root slice volumes created during installation appear when you list your volume configurations; however, you do not have access privileges to them and therefore cannot execute any commands against them. To verify that the volume name selected for the slice volume is not already used, type:
$ nas_slice -list
Troubleshooting Tip
If a volume with sufficient space is not available, an error message appears when you attempt to create the slice volume. To verify that the volume selected for slicing has adequate unused space, type:
$ nas_volume -size volume
Stripe Volumes
A stripe volume is a logical arrangement of participating disk, slice, or meta volumes that are organized, as equally as possible, into a set of interlaced stripes. Stripe volumes:
• Improve system performance by balancing the load across the participating volumes.
• Achieve greater performance and higher aggregate throughput, because all participating volumes can be active concurrently.
Figure 5-2 shows an example of a stripe volume. The stripe is created across three participating volumes of equal size.
Figure 5-2   Stripe Volumes
Stripe volumes improve performance because, unlike disk, slice, and meta volumes, addressing within a stripe volume is conducted in an interlaced fashion across volumes, rather than sequentially. Data is interlaced within the stripe volume starting with stripe unit 0 on the first participating volume, continuing to stripe unit 1 on the next participating volume, and so on. As necessary, data wraps back to the first participating volume. In a stripe volume, a read request is spread across all component volumes concurrently. This scheme optimizes system efficiency, creates less waiting time, and produces higher productivity than disk, slice, or meta volumes that are read sequentially. Figure 5-3 shows addressing within a stripe volume.
Figure 5-3   Addressing within a stripe volume
Carefully consider the size of the stripe volume you want. After the stripe volume is created, its size remains fixed. To modify the size of the stripe volume, you must copy the data to another area, remove the existing stripe, and reconfigure the volumes you want. Configure stripes to use the maximum amount of disk space. Figure 5-4 shows maximum utilization. In this case, the size of the participating volumes within the stripe are uniform and are evenly divisible by the size of the stripe. Each participating volume contains the same number of stripes.
Figure 5-4   Maximum disk space utilization
Space is wasted if the volumes are evenly divisible by the stripe size but are unequal in capacity. Figure 5-5 shows this configuration. The extra unit on the larger volume is not included in the configuration and is unavailable for data storage.
Figure 5-5   Unused space on volumes of unequal capacity (10 MB unused)
Space is also wasted if the size of the volume is not evenly divisible by the stripe size. Figure 5-6 shows an example of a stripe volume with residual storage space.
Figure 5-6   Residual storage space (230 blocks: 192 striped, 38 unused)
Creating a stripe volume allows you to achieve a higher aggregate throughput from a volume set since stripe units contained on volumes in the volume set can be active concurrently. Special consideration should be paid to the sizes of the volumes and stripes used to create a stripe volume so that unused storage is minimized.
Command

To create a stripe volume, type:
$ nas_volume -name name -create -Stripe stripe_size volume_name, volume_name
The recommended and default stripe size is 32768 bytes (32K). Stripe depths must be in multiples of 8192 bytes. If you do not select a name for the stripe volume, a default name is assigned.
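For example, an invocation consistent with the sample output below, striping across disk volumes d5, d6, and d7 (reconstructed from that output, so treat it as illustrative):

$ nas_volume -name str1 -create -Stripe 32768 d5,d6,d7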
Sample Output

id          = 316
name        = str1
acl         = 0
in_use      = false
type        = slice
stripe_size = 32768
volume_set  = d5,d6,d7
disks       = d5,d6,d7
Meta Volumes
A meta volume is an end-to-end concatenation of one or more disk, slice, stripe, or meta volumes. A meta volume is required to create a file system, because meta volumes provide the expandable storage capacity that may be needed to dynamically expand file systems. A meta volume also provides a way to form a logical volume that is larger than a single disk.
Meta volumes can be created from a disk, stripe, slice, or meta volume. The most common configuration is a meta volume that is created using one or more available stripe volumes. You then create a file system on the meta volume. You can expand a meta volume by adding additional disk, stripe, slice, or meta volumes to it. Once the new volume is combined with the meta volume, it logically becomes part of the meta volume, takes on the attributes of the meta volume, and is able to host your file system data. Figure 5-7 depicts a meta volume configuration, using three disk volumes.
Figure 5-7   A meta volume configured from three disk volumes
Addressing Within a Meta Volume
All information stored within a meta volume is arranged in addressable logical blocks and is organized in sequential, end-to-end fashion. Figure 5-8 shows meta volume addressing.
Figure 5-8   Addressing within a meta volume
The meta volume shown in Figure 5-9 is created from a stripe volume and a slice volume.
Figure 5-9   A meta volume created from a stripe volume and a slice volume
Addressing within the stripe volume starts at logical block 0 and continues sequentially through logical block 8 in an interlaced fashion across the participating volumes. The slice volume is essentially one 2 GB block addressed sequentially. Figure 5-10 shows the sequential flow of addressing in the resulting meta volume.
Figure 5-10   Sequential addressing across the resulting meta volume
The Celerra File Server provides many ways to expand storage capacity. For example, you can create a stripe volume, convert it to a meta volume, and use it to store file system data. Later, the file system capacity can be expanded by adding additional volumes to the meta volume.
The total capacity of a meta volume equals the sum of all volumes that compose the meta volume.
To create and store a file system, you must first create a meta volume. The size of the meta volume must be at least 1 MB to accommodate a file system.
Command

To combine volumes into a meta volume, type:

$ nas_volume -name name -create -Meta volume_name,volume_name

If you do not enter a meta volume name, a default is assigned.
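For example, to combine a stripe volume and a slice volume into a single meta volume (the names mtv1, str1, and sl1 are hypothetical):

$ nas_volume -name mtv1 -create -Meta str1,sl1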
Your system contains a set of slice volumes that can be arranged into various volume configurations. To obtain a list of all of the current volume configurations within your system, type:
$ nas_volume -list
Sample Output
id   inuse   type   name      cltype
2    y       4      root_d2   0
3    y       4      d3        0
4    y       4      d4        0
5    y       4      d5        0
6    y       4      d6        0
Business Continuance Volumes

BCVs must have the capacity to accommodate your largest potential file system. Refer to the appropriate storage system documentation for details on how to create BCVs. Figure 5-11 shows the relationship between standard volumes and BCVs.
Figure 5-11   Relationship between standard volumes and BCVs
The TimeFinder/FS feature of the Celerra File Server uses BCVs to enable you to create file system copies and dynamically mirror file systems, as follows:
• The file system copy function enables you to create an exact copy of a file system that you can use as input to a restore operation, for application development, or for testing.
• The mirror function enables you to create a file system copy in which all changes to the original file system are reflected in the mirrored system.
After a BCV is created, you can use the Celerra File Server Manager or the nas_fs command to create a file system copy.
CAUTION Do not attempt to use Symmetrix TimeFinder tools and utilities with file system copies created by Celerra TimeFinder/FS.
Creating a File System

You can create a file system only on non-root meta volumes that are not in use. A meta volume must be at least 1 MB to accommodate a file system. You can specify the meta volume either by name or by size.
For information about verifying the size of a meta volume, refer to Checking Capacity on page 8-6.
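The file system itself is created with the nas_fs command; a minimal sketch, assuming a file system named ufs1 built on a meta volume named mtv1 (both names hypothetical; see the Celerra File Server Command Reference Manual for the full syntax):

$ nas_fs -name ufs1 -create mtv1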
Now that you have created a file system, you must create a mount point before you can mount the file system for server access.
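Creating a Mount Point

A mount point is created with the server_mountpoint command; a sketch, assuming Data Mover server_3 and the hypothetical path /ufs1:

$ server_mountpoint server_3 -create /ufs1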
Sample Output

server_3: done
6
Mounting and Exporting File Systems for the NFS User
This chapter describes how to mount and export a file system for NFS users.
• User Types...........................................................................................6-2
• Understanding NFS ...........................................................................6-3
• Typical NFS Configuration ...............................................................6-4
• Mounting a File System.....................................................................6-6
• Exporting a Path.................................................................................6-8
• Providing PC User Access .............................................................. 6-11
• Unexporting Paths/Shares .............................................................6-14
User Types
The Celerra File Server supports NFS and CIFS users. You must understand which type of user you have so you can configure the file system protocol and access for that user. The Celerra File Server lets you configure a file system for use by:
• NFS users only
• CIFS users only
• Both CIFS and NFS users
The Celerra File Server also lets you map an NFS and CIFS user to the same username and group ID. This mapping provides the user with seamless access to shared file system data.
For information about configuring the file system protocol for the CIFS environment only or for both an NFS and CIFS environment, refer to the Celerra File Server Windows Environment Configuration Guide.
Understanding NFS
The Celerra File Server is a multiprotocol machine that supports the NFS environment. NFS is a client/server, distributed file service that implements file sharing in network environments. The NFS protocol enables the Celerra File Server to assume the functions of a Network File System Server. NFS environments typically include:
• Native UNIX clients
• Windows systems configured with third-party applications that provide NFS client services
The Network Information System (NIS) is used on some configurations to manage common files, such as password, group, and host files. The Domain Name Server (DNS) may also be available to resolve hostnames to IP addresses or IP addresses to hostnames for hosts outside the local network.
NFS Environment
The NFS protocol provides distributed file services in client/server environments. The NFS protocol is typically configured for environments that include many native UNIX systems. Systems can also include any non-UNIX machines configured to run NFS software. When you configure a file system for NFS users and associate it with a Data Mover, the Data Mover operates as an NFS server. In an NFS server environment, the file system is mounted on the Data Mover and exported and mounted on clients. Exported file systems are available across the network and can be mounted by remote users.
Authentication validates NFS user passwords. Authentication is accomplished on the remote client during login, using the local authentication login method. Users who are authenticated are permitted network access to the Data Mover. Access to stored files and directories is determined by the Data Mover, which matches the User and Group ID identification attached to the file or directory with the information supplied by the accessing user.
Typical NFS Configuration

Configuring a file system for NFS users involves:
1. Creating a file system
2. Creating a mount point
3. Mounting the file system on a specified Data Mover
4. Exporting the file system to NFS users
5. Mounting the file system on the users' machines

Tasks 1 and 2 are discussed in Chapter 5, Creating Volumes, File Systems, and Mount Points. Tasks 3 through 5 are discussed in this chapter.
Figure 6-1 shows a Data Mover configured for NFS users. The Data Mover:
• Connects to the network using an IP address
• Compares the user request information with the access parameters associated with the file system, directory, or file
• Permits or denies file access, based on the result of the match
Figure 6-1   A Data Mover configured for NFS users
Mounting a File System

File Locking

File locking provides a mechanism for ensuring file integrity when more than one user may access the same file. File locks manage attempts to read, write, or lock a file that is held by another user. In an NFS system, locks can be either:
• Read (shared) locks
• Exclusive (write) locks

NFS locking rules are cooperative, so a client is allowed to access a file locked by another client if it does not use the lock procedure. More than one process can hold a read lock on a particular file, but if one process holds an exclusive lock, no other process can hold any lock on the file until the exclusive lock is removed. A read lock can be changed to an exclusive lock, and vice versa. In NFS, a process can perform advisory locks on a file segment. An advisory lock does not affect read and write access to the file, but it informs other users that the file is already in use.
When performing a mount, you can institute the following options to define the mount:

Read-Write (default)

When a file system is mounted read-write (the default) on a Data Mover, only that Data Mover is allowed access to the file system. No other Data Mover is allowed read or read-write access to that file system.

Read-Only

When a file system is mounted read-only on a Data Mover, clients cannot write to the file system regardless of the export permissions. A file system can be mounted read-only on several Data Movers concurrently, as long as no Data Mover has mounted the file system as read-write.
Permanent
File systems are mounted permanently by default. The mount is entered into the mount table and the mount and the options defined remain in effect, regardless of a system reboot. If you perform a temporary unmount, in the case of a system reboot, the mount table is activated, and the file system is automatically mounted again. When mounting a file system, you must know which Data Mover contains your mount point. To list available Data Movers, type:
$ nas_server -list
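The mount itself uses the server_mount command; a sketch, assuming file system ufs1 and mount point /ufs1 on server_3 (hypothetical names; see the Celerra File Server Command Reference Manual for the full option list):

$ server_mount server_3 ufs1 /ufs1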
Sample Output

server_3: done
Troubleshooting Tip #1

The mount point may not exist on the specified Data Mover.
Troubleshooting Tip #2
You may have tried to execute the same command more than once. Check your list of mounted file systems to verify that the path has already been successfully mounted.
Exporting a Path
After creating a mount point and mounting a file system, you must export the path to allow NFS users to access the system. Paths are exported from Data Movers using the server_export command. Each time the server_export command is issued, an entry is added to the existing entries in an export table. Entries in the table are permanent and are automatically re-exported if the system reboots. You can overwrite existing options in an export entry by including the -ignore option in the command string. This forces the system to overwrite the options in the export table: whatever options you include on the command line replace the options in the export table associated with this path. To display a list of all exports for one or all Data Movers, use the -list option.
For more information on the server_export command and supported options, refer to the server_export on-line man page or to the Celerra File Server Command Reference Manual.
Table 6-1 describes the available NFS options for exporting a path.

Table 6-1   NFS Export Options

ro
Exports the pathname for NFS clients as read-only. If not specified, the pathname is exported read-write.

rw=client[:client]...
Exports the pathname for NFS clients as read-mostly. Read-mostly means exported read-only to most machines, but read-write to those specified. If not specified, the pathname is exported read-write to all. A client may be a hostname, an IP address, a subnet, or a netgroup. See the Caution below regarding the use of netgroups. A subnet is an IP address/netmask. For example, 168.159.50.0/255.255.255.0.

anon=uid
If a request comes from an unknown user, use the UID as the effective user ID. Root users (uid=0) are always considered unknown by the NFS server, unless they are included in the root option. The default value for this option is the UID of the user nobody. If the user nobody does not exist, then the value 65534 is used.

root=client[:client]...
Provides root access only to the root users from a specified hostname, netgroup, subnet, or IP address. The listing must be typed without spaces, and be colon-separated. The default is for no hosts to be granted root access. See the Caution below regarding the use of netgroups.

access=client[:client]...
Provides mount access to each client listed. A client may be a hostname, an IP address, a subnet, or a netgroup. The listing must be typed without spaces, and be colon-separated. Each client in the list is first checked for in the /.etc/hosts database, then in the /.etc/netgroup database, and finally in the NIS or DNS server (if enabled). The default value, no access list, allows any machine to mount the given directory. You create the hosts and netgroup files on the Control Station using your preferred method (with an editor, or by copying from another node, and so on), then copy them to the Data Mover using the server_file command. A subnet is an IP address/netmask. For example, 168.159.50.0/255.255.255.0.
The Celerra File Server supports netgroups. If the system does not find the client in /.etc/hosts, or in /.etc/netgroup, then it checks the NIS or DNS server (if enabled). If the client name does not exist in either case, then the system displays an error message such as undefined netgroup. See the Caution that follows regarding the use of netgroups.
CAUTION: If a host belongs to more than one netgroup, the behavior of the access rights granted for the host is unpredictable; therefore, a host should not belong to multiple netgroups. Netgroups are defined in the system (/.etc/netgroup file or through NIS). There should be no overlap of hosts. See the server_export man page for an example.
Important: Since the default for root=client[:client] is for no users to be given root access, if you do not enter a value, a root user is unable to create files or directories in this exported file system. To allow a root user write access to the exported file system, export the file system with root privileges to either the Control Station or a trusted host.
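For example, an export that grants root access to a trusted host (the path /ufs1 and the address 192.168.42.100 are hypothetical):

$ server_export server_3 -option root=192.168.42.100 /ufs1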
Sample Output

server_3: done
If the system does not find the client in the /.etc/hosts or /.etc/netgroup files, it then checks the NIS server, if enabled. If the system cannot find the client name, an error message appears.

Export All

Every time the system boots, every entry in the export table is re-exported. To re-export every entry within the table while the system is running, type:
$ server_export movername -all
The number of NFS entries in the export table that can be "displayed" is 256. If the entries exceed 256 and you enter server_export -list, an error message is displayed instead of a display of the export table, although all entries are supported.
Providing PC User Access

To allow PC clients NFS access to the Celerra File Server and its files, a PC client must be successfully authenticated. See Figure 6-2.
Figure 6-2   PC Client Access: the PC client sends a username and password to the authentication daemon (pcnfsd or rpc.pcnfsd) on the NFS server, which returns a UID/GID
Since different user authentication methods exist in PC and UNIX environments, an authentication daemon (typically rpc.pcnfsd or pcnfsd) bridges these differences. The daemon runs on the Data Mover and performs the following services:
1. The daemon receives and validates the user name and password provided by the PC client.
2. The daemon assigns the PC client a user ID and group ID (UID/GID) for each user/password combination.
3. PC clients then use the assigned UID/GID to access the Celerra File Server. See Figure 6-2.

Typically, if you are using NFS from a PC, an authentication daemon is already in use and can be used with the Celerra File Server without change. If your PC is not already using NFS, then you need to purchase a PC client NFS software package, such as Hummingbird Communications Ltd's PC-NFS or NFS Maestro software.
This section describes how to set up a PC client software package for network access to the Celerra File Server. The examples used are PC-NFS and NFS Maestro. To enable this feature, complete the following steps:
1. Set up a user account. See Task 1: Create a Linux User on page 9-9.
2. Open the /nas/server/server_x/netd file with a text editor, add pcnfs on a separate line, and save the file.
Where x is the number of the Data Mover.
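As a sketch of step 2, the same line can be appended from the Control Station shell instead of a text editor (assuming the Data Mover is server_2):

$ echo pcnfs >> /nas/server/server_2/netd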
3. Reboot the Data Mover.
4. Export a file system for the user(s) to access. Refer to Exporting a Path on page 6-8.
5. On the PC, launch the PC-NFS or NFS Maestro software.
Refer to the vendor's respective user manual for details on what is required at this point for setup and/or login. For example, PC-NFS is capable of detecting any system running pcnfsd in the subnet, while NFS Maestro has an option where you specify the system's name.
6. Enter the username and password as required by the software package. The username and password are sent to the Data Mover running pcnfsd, which returns the User ID/Group ID numbers for the PC client. From this point on, whenever the PC client issues a mount request, the user is authenticated and the rest of the activity is pure NFS traffic typical between a client and server.
Interoperability issues have been identified in environments where both CIFS and Hummingbird PC-NFS clients access Microsoft Word or Corel WordPerfect files. Normally, if a CIFS client opens a Microsoft Word file and a Hummingbird (or any PC-NFS) client tries to delete the file, the delete request is refused because of the deny-delete lock imposed when the CIFS client opened the file. However, the file is also range-locked, with an offset that is not congruent with the start or end of the file. Thus, if a portion of the file lies outside the range specified by the range lock, that portion can be written to; write requests that fall within the range lock are denied. Since there is no method to determine which portions of the file are range-locked, users may experience unpredictable results.
Share Authentication
Hummingbird users have the option of overriding Share authentication for Celerra (or any other) drives when these drives are mounted on the client. In these cases, CIFS and Hummingbird clients have concurrent access to the file.

Directory Locking

If a CIFS client has a directory open in Windows Explorer, and a Hummingbird client subsequently opens and attempts to close a WordPerfect file in the same directory, the Hummingbird client attempts to lock the file. Because the file is opened by Windows Explorer, a lock already exists, and the Hummingbird lock request is denied. The Hummingbird client continuously issues lock requests to the server until the CIFS client closes the directory and releases its lock.
Unexporting Paths/Shares
When performing an unexport of a path or share, you can define the unexport as either permanent or temporary. By default, all NFS unexports are temporary, meaning that the next time the system is rebooted, the entry is automatically re-exported. If the unexport is permanent, the entry is deleted from the export table.
By default, all CIFS unexports are permanent.
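For example, to permanently unexport a path (the path /ufs1 is hypothetical; omit -perm for a temporary unexport):

$ server_export server_3 -unexport -perm /ufs1
server_3 : done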
Unexport Limitations
When you are using NFS, you may unexport a directory or file system only after it is unmounted from a Data Mover. If you unexport a directory or file system from a Data Mover before it has been unmounted, these error messages appear the next time you try to access the file system:
mount_lookup: No match in the export list /mnt
Stale NFS file handle
Unexport Advisory
The server_export option -all -unexport -perm allows you to permanently unexport all file systems in the export table. Use this option with caution: if you execute it, you must subsequently rebuild the export table by re-exporting each path on each Data Mover in order to restore user connectivity to all mounted file systems. The command server_export -all -unexport unexports every entry within the table, but on a temporary basis.
7
Configuring Standbys
This chapter discusses the failover process and configuring standby Data Movers.
• Data Mover Availability....................................................................7-2
• Failover Detection ..............................................................................7-4
• Configuring Standby Data Movers .................................................7-8
• Activating a Standby .......................................................................7-10
• Control Station Failover ..................................................................7-15
Data Mover Availability

Figure 7-1   Standby Relationship
Failover Detection
To detect a Data Mover failover, the Control Station monitors the periodic heartbeat signals that all Data Movers send, using the redundant internal networks that connect the Control Station to each Data Mover. If the Control Station detects a failure, it first attempts to reset and/or cycle power to the Data Mover. If the problem persists, and if CallHome is configured, the Control Station calls EMC with a notification of the event and diagnostic information.
By default, a Data Mover does not call home if it loses connectivity to the network, although it can be configured to do so.
Table 7-1 summarizes the conditions that do and do not trigger Data Mover failover.

Table 7-1   Data Mover Failover

Data Mover failover occurs if any of these conditions exists:
• Failure (operation below the configured threshold) of both internal Ethernets. This is the same as the lack of a heartbeat (Data Mover timeout).
• Power failure within the Data Mover.
• Software panic.
• Exception on the Data Mover.
• Data Mover hang.
• Memory error on the Data Mover.

Data Mover failover does not occur under these conditions:
• Removing a Data Mover from its slot.
• Disconnecting SCSI cables from a Data Mover.
• Manually rebooting a Data Mover.
A Data Mover reboots itself when it detects a software problem (that is, a software panic or an exception). Typically, the reboot takes less than 100 seconds, and applications and NFS clients do not see any interruption, except for a possible server not responding message during the reboot.
When any of the above conditions occurs, you can transfer functionality from the primary Data Mover to the standby Data Mover without disrupting file system availability. The standby Data Mover substitutes for the faulted Data Mover by assuming the faulted Data Mover's:
• Network identity: the IP and MAC addresses of all its NICs
• Storage identity: the file systems that the faulted Data Mover controlled
• Service identity: the exported file systems that the faulted Data Mover controlled
The standby Data Mover assumes file system services to users within a few seconds of the failure, transparently, and without requiring users to unmount and remount the file system.
Failover Example
Figure 7-2 shows how failover works. In this example, server_7 is the Primary and server_2 is the Standby. The sequence is as follows:
1. The faulted Primary is renamed server_7.faulted.server_2.
2. The standby Data Mover acquires the name server_7 from the Primary Data Mover.
3. When the Primary Data Mover is restored, server_7 is renamed, and again represents the Primary Data Mover, while server_2 represents the Standby.
Figure 7-2   Failover Example. The figure traces three commands and the resulting states:

Create:
  server_standby server_7 -create mover=server_2 -policy manual

Failover (should a failure occur, activate the standby):
  server_standby server_7 -activate mover
  The faulted primary becomes: type=standby, state=out_of_service, name=server_7.faulted.server_2, standbyfor=server_7

Restore (once the failure is corrected):
  server_standby server_7 -restore mover
  The restored primary becomes: type=nas, state=online,active, name=server_7, standby=server_2, policy=manual
CAUTION: If a Data Mover fails, Celerra File Server clients retain normal NFS functions, but any ongoing FTP, archive, or NDMP sessions are lost and not restarted. Connections between CIFS clients and the Data Mover are lost, but the redirector on the client reconnects with the Data Mover after the failover. However, all data cached by the clients prior to failover is lost, and data loss can occur. Applications on the client using the shares may not recover.
Configuring Standby Data Movers

You must first identify and designate the standby Data Mover. Once this is done, link the standby with a primary Data Mover and define the policy of the failover. The failover policy defines the functionality transfer between the primary and the standby. A standby Data Mover can take over functionality automatically or with manual intervention.
Failover Policies
A failover policy is a predetermined action that the Control Station invokes when it detects a Data Mover failover condition. The failover policy type you specify determines the action that occurs in the event of a Data Mover failover. The Celerra File Server offers the policies listed in Table 7-2.
Table 7-2   Failover Standby Policy Types

  Policy   Action
  Auto     The standby Data Mover immediately takes over the function of its primary.
  Retry    The Celerra File Server first tries to recover the primary Data Mover. If the recovery fails, the Celerra File Server automatically activates the standby.
  Manual   The Celerra File Server issues a shutdown for the primary Data Mover. The system takes no other action. The standby must be activated manually.
Before linking a standby Data Mover to a primary, you must:
• Verify that the standby Data Mover is operational.
• Ensure that no file systems are mounted, by checking the list of mounted file systems.
• Ensure that the network connectivity of the standby is equivalent to that of all the intended primaries.
• Ensure that the standby contains a superset of the network interface cards installed in each primary Data Mover.
• Ensure that the standby has no configured file systems.
Once you ensure that the above criteria are met for your standby, you can link this standby to all of the primary Data Movers in the same cabinet. It is also possible to have multiple standby Data Movers in a Celerra File Server, with each standby Data Mover acting as a standby for a group of primary Data Movers. When you set a Data Mover to standby, it is not configured with an IP address. In the case of a standby activation, the standby assumes the primary's IP address. The standby, when activated, functions for only one faulted primary Data Mover at a time.

Restrictions

When configuring standby Data Movers, you cannot:
• Create a standby Data Mover for a standby.
• Have more than one standby Data Mover per primary Data Mover.
Create Standby
Command
This section describes how to create a standby Data Mover. To create a standby relationship with policy set to manual, type:
$ server_standby movername -create mover=source_movername -policy manual
If the standby Data Mover is a standby for more than one primary Data Mover, you must repeat this procedure for each primary.
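For example, using the names from Figure 7-2, with server_7 as the primary and server_2 as its standby:

$ server_standby server_7 -create mover=server_2 -policy manual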
If the policy type is set to Auto or Retry, no further action is required if the primary Data Mover fails. However, if the policy is set to Manual, refer to Activating a Standby on page 7-10 for the necessary procedure.
Activating a Standby
You manually activate a standby Data Mover when the policy type is set to manual. A manual policy type means that the standby does not activate unless you initiate the action. You must have previously designated a Data Mover that is not being used to be the standby prior to activating the standby Data Mover. Once you have designated a Data Mover as a standby and associated it with one or more functioning Data Movers, you activate the standby to take over for the primary Data Mover, should a failure occur. Before performing a manual activation, you must thoroughly investigate the problem first by:
• Checking error logs
• Ruling out other (external) factors, such as network, router, client, or storage system problems
Command

To activate a standby Data Mover, type:

$ server_standby movername -activate mover

"mover" is typed in the command line exactly as it appears here. The value was defined when performing the create command.
This process can take from 40 seconds up to 2 minutes to complete. A message appears indicating completion.
Sample Output
server_4 : replace in progress ...done
commit in progress (not interruptable)...done
server_4 : renamed as server_4.faulted.server_5
server_5 : renamed as server_4
The standby Data Mover is activated and functions as the primary Data Mover. When a failover occurs, the primary Data Mover goes into a faulted state and assumes the movername primary.faulted.standby. The standby Data Mover acquires the movername primary from the primary Data Mover.
The Celerra File Server clients retain normal NFS functionality, but any ongoing FTP, archive, or NDMP sessions are lost and are not restarted.
Troubleshooting Tip #1
The system waits a period of two minutes after creating your standby. If you try to activate your standby before this waiting period is over, you may be unable to connect to the host. Troubleshooting Tip #2 If the following message appears:
server_4 : replace in progress ...failed
Error: replace_net: elv0 : No such device or address
The standby you are activating does not contain the necessary network interface card. The standby Data Mover must contain the same external network interface cards as the primary Data Mover.
Type nas_server -info movername to view the status and type of a Data Mover. The output shows whether the Data Mover is of type standby or primary, as well as its policy type.

The failure of the primary Data Mover is reported to EMC by the CallHome feature of the Control Station. Once the failure has been corrected, restore the primary Data Mover as described below. When the problem has been corrected or the Data Mover has been replaced, the Data Mover must first be rebooted before you can restore it to its original state. Use the server_standby command to reboot the faulted Data Mover.
IMPORTANT: Always use the server_standby command to reboot a Data Mover. If you manually reboot the faulted Data Mover, it broadcasts the same MAC address as the standby Data Mover that took over when the failure occurred. This can cause system conflicts and loss of network connections.
When you execute a restore, you still use the original name of the primary Data Mover (from our example, this would be server_4), not the faulted name it is assigned while out of service. The following procedure describes how to restore a primary Data Mover.

Command

To restore the primary Data Mover to resume functionality, type:
$ server_standby movername -restore mover
movername is the name of the primary Data Mover. Do not use the faulted name; use the original name assigned to the primary.

"mover" is typed on the command line exactly as it appears here. The value was defined when performing the create command.
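For example, using the names from this chapter (server_4 as the original primary), you would type:

$ server_standby server_4 -restore mover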
Sample Output
server_4 : server_4.faulted.server_5 : reboot in progress 0.0.3
replace in progress ...done
commit in progress (not interruptable)...done
server_5 : renamed as server_5
The primary Data Mover is now restored to a functioning status, the standby is restored to standby status, and the names revert to their original settings.

Other Options

To delete the relationship created between the primary and the standby Data Mover, type:
$ server_standby movername -delete mover

The movername directly following the command is the primary movername.
To change a Data Mover from standby back to a nas Data Mover, type:
$ server_setup movername -type nas

The movername directly following this command is the standby movername, since you are changing the type configuration of the standby back to nas. When you change a standby back to a regular Data Mover, the original IP address is no longer associated with the Data Mover; therefore, you must assign a new IP address.
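For example, to return the standby from this chapter's example (server_5) to a regular nas Data Mover, you would type:

$ server_setup server_5 -type nas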
If an EMC Customer Service Engineer upgrades a Data Mover to a new hardware model, or replaces a Data Mover with a unit of the same model, you should verify the standby relationship between the new Data Mover and any standby Data Mover(s) in the Celerra using the following command:
$ server_standby movername -verify
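For example, for the primary Data Mover used in this chapter's examples, you might type:

$ server_standby server_4 -verify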
CIFS Access After Failover
In most cases, during a system reboot, CIFS clients are able to open a new session and re-initialize their contexts. However, since CIFS is not stateless, this can fail to occur in some cases, as with any Windows server. For example, a lock conflict can occur if clients reconnect out of order. Therefore, any application accessing file systems on an activated standby Data Mover may get an error message.

Periodic Tasks

Periodically (about once a month), test standby Data Movers by doing a failover, then failing them back; standby Data Movers are not otherwise exercised. Refer to the server_standby man page or the server_standby entry in the Command Reference Manual.
The Control Station software, which is used to configure and manage the Celerra File Server, operates independently of the file access operations and services provided by Data Movers. If a Control Station fails:
• Individual Data Movers continue to respond to user requests
• User access to data is uninterrupted
Control Station failure means only that you are temporarily unable to install new software or to modify your Celerra File Server configuration. You can resume these operations as soon as the Control Station becomes available.
The Celerra File Server supports configurations with dual, redundant Control Stations. This configuration lets you configure or modify your Celerra File Server even in the event of a Control Station failure.

CAUTION
Older Celerra cabinets may not support the use of dual Control Stations. If you have an older cabinet, consult your EMC representative to determine whether you need a hardware upgrade.

If the primary Control Station goes offline for any reason:
• The secondary Control Station, if properly configured, automatically takes over all Control Station functions.
• The secondary Control Station executes any of the Celerra File Server commands that you enter.
• The Celerra File Server uses the CallHome utility to notify EMC Customer Service of the event.
Under normal circumstances, after the primary Control Station has failed over, you continue to use the secondary Control Station as the primary. When the Control Stations are next rebooted, either directly or as a result of a power down and restart cycle, the Control Station in slot_0 is restored as the primary.
This activity should be done at the Celerra cabinet console. To change the state of the standby Control Station to that of the primary, from the /nasmcd/sbin directory type:
# cs_standby -takeover
Taking over as Primary Control Station    done

This command can be performed only on the standby Control Station. If you attempt to use this command from the primary Control Station, you will get an error message.
To change the state of the primary Control Station to the standby Control Station, from the /nasmcd/sbin directory type:
# cs_standby -failover
The system will reboot, do you wish to continue [yes or no]: y
Failing over from Primary Control Station    done
8
Managing File Systems
Creating an Automount Map ...........................................................8-2
Displaying Mounted File Systems...................................................8-4
Unmounting a File System ...............................................................8-5
Checking Capacity .............................................................................8-6
Extending a File System ....................................................................8-7
Renaming a File System ....................................................................8-9
Deleting a File System ..................................................................... 8-11
Creating 32-bit GIDs ........................................................................8-12
Creating an Automount Map

After you have created and saved your automountmap file, you must place it in the appropriate directory for your automount daemon to access the file. To edit an automountmap file, use a text editor.

Command

To create an automountmap file and print it to the screen, type:
$ nas_automountmap -create
mpt1 -rw,intr,nosuid 100.192.168.40,193.1.6.10:/mpt1
Creating an automountmap file and printing it to the screen does not save the automountmap file.
Command

To create an automountmap file and save it to an output file, type:

$ nas_automountmap -create -out outfile

There is no immediate output after executing this command. To view what is in your automount file, type:
$ more outfile
bin -rw,intr,nosuid lpce155:/bin
Once you have saved your automount file, use a text editor to make modifications to the file.

Command

If you have created and exported additional file systems and would like to create a new automountmap file, type:
$ nas_automountmap -create -out outfile
$ more outfile
a 100.1.1.1:/a
a_100.1.1.2 100.1.1.2:/a
Command
To merge your new automount file with a previous automount file, type:
$ nas_automountmap -create -in infile -out outfile
If you merge two automount files and the same line appears in both, a conflicting list is generated. You can view conflicting lists within your automountmap file on the screen or print the file to an output file.

Command

To view a conflicting list, type:
$ nas_automountmap -list_conflict infile
Displaying Mounted File Systems

Command

To display a list of the file systems mounted on each Data Mover, type:

$ server_mount ALL

Sample Output
server_3 :
fs2 on /fs2 uxfs,perm,rw
fs1 on /fs1 uxfs,perm,rw
root_fs_3 on / uxfs,perm,rw
server_4 :
file1 on /mpt1 uxfs,perm,rw
root_fs_4 on / uxfs,perm,rw
Unmounting a File System

Command

To unmount a file system from a Data Mover (the unmount is temporary by default), type:

$ server_umount movername pathname

Sample Output

server_3: done
The default unmount mode is temporary. If you keep this selection, the file system is mounted again when the system reboots. To permanently remove an entry from the mount table, the unmount mode selection must be permanent.
Unmount All
To permanently unmount all of the file systems in the mount table, type:
$ server_umount movername -perm -all

The default unmount mode is temporary.
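For example, to permanently unmount all file systems on server_3 (a name taken from the earlier sample output), you would type:

$ server_umount server_3 -perm -all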
Checking Capacity
To view file system capacity or Data Mover free space, perform one of the following procedures.
If file system performance slows, your file system may be reaching capacity. If this is the case, identify the file system by performing Displaying Mounted File Systems on page 8-4; you can then extend it as described in Extending a File System on page 8-7. Once you have identified a file system, to check capacity, type:
$ nas_fs -size fs_name
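For example, to check the capacity of the file system fs2 listed in the sample output above, you would type:

$ nas_fs -size fs2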
You can view the amount of disk space occupied by all mounted file systems on selected Data Mover(s). To report all free and used inodes, type:
$ server_df movername -inode
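For example, to report the free and used inodes on server_3, you would type:

$ server_df server_3 -inode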
Extending a File System

Preparation

To prepare for extending a file system:

1. Fail over the primary Data Mover to its standby.
2. Reboot the primary Data Mover.
3. Fail back to the primary Data Mover and perform the extension.
See Chapter 7, Configuring Standbys, for more information about configuring standby Data Movers.

Procedure

The procedure below assumes that the primary Data Mover is server_4 and the standby is server_5. To extend a file system, use the following procedure.

1. Verify that server_5 has been designated as a standby Data Mover and check its status.
$ nas_server -i -a
$ nas_server -i

2. Fail over server_4 to its standby.

$ server_standby server_4 -activate mover
3. Verify that the failover occurred successfully.

$ nas_server -l
$ /nas/sbin/getreason
4. Once you have verified that the failover was successful, reboot server_4 and verify its status.

$ server_cpu server_4.faulted.server_5 -reboot -m now
$ /nas/sbin/getreason

5. Fail back to server_4 by restoring it as the primary.

$ server_standby server_4 -restore mover
6. Verify the size of the file system before extending it. Record this information. You will use it as a comparison figure after extending the file system.
$ nas_fs -size fs_name
$ server_df server_4

Note: The result from each of the above commands should be the same value.

7. Extend the file system.

$ nas_fs -xtend fs_name volume_name

where fs_name is the name of the file system and volume_name is the volume onto which the file system is being extended.

8. Verify the size of the file system after extending it. Compare this value to the value you recorded in Step 6 above; make sure that the volume amount has increased.
If you delete an extended file system, the extended volume remains in use until the original meta volume is deleted.
Renaming a File System

Command

To rename a file system, type:

$ nas_fs -rename old_name new_name

Sample Output
id        = 17
name      = file1
acl       = 0
in_use    = true
type      = uxfs
volume    = meta1
rw_servers= server_4
ro_servers=
symm_devs = 0A4,0A5,0A6,05E,05D
disks     = d74,d75,d76,d4,d3
disk=d74 symm_dev=11-0A4 addr=c0t5l8-06-1 server=server_4
disk=d75 symm_dev=11-0A5 addr=c0t5l9-06-1 server=server_4
disk=d76 symm_dev=11-0A6 addr=c0t5l10-06-1 server=server_4
disk=d4 symm_dev=11-05E addr=c0t1l2-06-1 server=server_4
disk=d3 symm_dev=11-05D addr=c0t1l1-06-1 server=server_4
Means:
id -- The ID of the file system (assigned automatically).
name -- The name assigned to the file system.
acl -- The access control value for the file system.
in_use -- Whether the file system is registered into the mount table of a Data Mover.
type -- The type of file system.
volume -- The volume on which the file system resides.
rw_servers -- The servers with read-write access to the file system.
ro_servers -- The servers with read-only access to the file system.
symm_devs -- The storage system devices associated with the file system.
disks -- The disks on which the meta volume resides.
Deleting a File System

Command

To delete a file system, type:

$ nas_fs -delete fs_name

Sample Output
id        = 17
name      = file1
acl       = 0
in_use    = true
type      = uxfs
volume    = meta1
rw_servers= server_4
ro_servers=
symm_devs = 0A4,0A5,0A6,05E,05D
disks     = d74,d75,d76,d4,d3
disk=d74 symm_dev=11-0A4 addr=c0t5l8-06-1 server=server_4
disk=d75 symm_dev=11-0A5 addr=c0t5l9-06-1 server=server_4
disk=d76 symm_dev=11-0A6 addr=c0t5l10-06-1 server=server_4
disk=d4 symm_dev=11-05E addr=c0t1l2-06-1 server=server_4
disk=d3 symm_dev=11-05D addr=c0t1l1-06-1 server=server_4
See Renaming a File System on page 8-9, for a description of the command outputs.
Creating 32-bit GIDs

Parameters for Managing GIDs

Where:
0 -- Disables the 32-bit GID feature for the Data Mover. Any file system created on the Data Mover with this setting supports only 16-bit GIDs.
1 -- Enables the 32-bit GID feature for the Data Mover. Any file system created on the Data Mover with this setting supports 32-bit GIDs with a maximum value of 2 billion.
Celerra allows pre-existing and new file systems with 16-bit GIDs and new NFS file systems with 32-bit GIDs to be mounted on the same Data Mover.
• Maintains a maximum of 64K GIDs per file system. The 64K GID limit applies to the first 64K GIDs and should be considered when planning GID usage for your file system. The value of a 32-bit GID is capped at 2 billion on Celerra for future support of CIFS.

Note: The 64K limit is a limit on quantity. For any file system, a GID can have any value between 0 and 2 billion, but you can have only 64K different GIDs per file system.

• The 32-bit GID can be used only with NFS file systems. For CIFS file systems, you must set param ufs gid32=0. Setting this parameter to 0 allows you to create file systems with 16-bit GIDs. It does not, however, disable 32-bit GIDs on those file systems created with the parameter set to 1.
• You cannot convert any existing file systems to use a 32-bit GID. The 32-bit GID can be used only with new NFS file systems.
• You cannot use the Celerra Linux Control Station as an NFS client if 32-bit GIDs are used. The Linux Control Station supports only a 64K GID value. If Linux accesses a file system with gid32=1, any GID value beyond 64K is truncated.
• You must disable quotas on the Data Mover before you create the file system that uses the 32-bit GID.
• You cannot set the 32-bit GID with the following server_archive formats:
  emctar -- up to 31-bit
  ustar -- up to 21-bit
  cpio -- up to 15-bit
  bcpio -- up to 16-bit
  sv4cpio -- up to 21-bit
  sv4crc -- up to 21-bit
  tar -- up to 18-bit
• Some backup applications have restrictions. Ensure the application handles 31-bit UID/GID. (For example, Veritas NetBackup NDMP handles 32-bit UID/GID, but Veritas NetBackup network backup via NFS supports only 24-bit UID/GID.)
• Sun Solaris supports 2 billion as a maximum GID value.
Setting the 32-bit System Parameter
To set the system parameter for 32-bit GID:

1. Log on to the Control Station.
2. Open /nas/server/server_x/param with a text editor, where x is the number of the Data Mover where the file system(s) using the 32-bit GID reside.
3. Type or change the parameter to read: param ufs gid32=1
4. Save the file.
5. Reboot the Data Mover.
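As an illustration only (the Data Mover number and editor are arbitrary, and server_cpu is the reboot command used elsewhere in this guide), the session for Data Mover 2 might look like:

$ vi /nas/server/server_2/param     (add or edit the line: param ufs gid32=1)
$ server_cpu server_2 -reboot -m now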
To ensure that you create an NFS file system that has the 32-bit GID attribute:

1. Log on to the Control Station.
2. Open /nas/server/server_x/param with a text editor, where x is the number of the Data Mover that you are about to use to create the file system with the 32-bit attribute set.
3. Ensure that the param file has the following entry:
param ufs gid32=1
4. Use the nas_fs command with the mover=mover_name option set to the Data Mover (in this case, Data Mover 2) with 32-bit GID enabled to create the file system ufs1. For example, type:

$ nas_fs -name ufs1 -create ufs1 -option mover=server_2

Designating the Data Mover in this way allows you to direct the creation of the file system to a Data Mover with the 32-bit GID attribute enabled.
9
Managing Your System
This chapter contains system and network administrative tasks that you may be required to perform, such as:
• Managing Data Movers.....................................................................9-2
• Managing Volumes ............................................................................9-5
• Controlling Access to System Objects .............................................9-9
• Managing System Parameters........................................................9-15
Managing Data Movers

This section covers the following tasks:

• Adding internal events
• Discovering and saving SCSI devices
• Rebooting Data Movers
• Halting a Data Mover

Adding Internal Events
An internal event is a change in the operating status of a Data Mover or Control Station. All events are logged in the event log file. The system event log file, nas_eventlog.cfg, is located in the /nas/sys directory. An internal event contains:
• Name of the Celerra facility that issued the event
• High water mark; that is, the maximum severity of the event (ranging from 1, least severe, to 7, most severe)
• Event ID number
• Short description of the event that occurred
• System-defined action to take when the event occurs
In order to add an internal event you must first create an event configuration file, then load this file to supplement (but not replace) the system events log file, nas_eventlog.cfg.
CAUTION
Do not change nas_eventlog.cfg. When you create an event configuration file, make sure it does not have the same name as nas_eventlog.cfg. Also, save the configuration file in the /nas/site directory. This ensures that any site-specific additions are maintained when the system is upgraded.

An event configuration file has one line with the keyword facilitypolicy to denote the facility ID and severity level, followed by one or more lines with the keyword disposition denoting the range of event IDs and the specific action to take when an event is issued with an event ID in that range. For example:
facilitypolicy facility id, severity id
disposition range=(From_eventid-To_eventid), action to take

Procedure

To add an internal event:

1. Log onto the Control Station as nasadmin.
2. Create an event configuration file using a standard text editor. Save the file as filename.cfg in the /nas/site directory.
3. From the /nas/bin directory, load the configuration file.
$ nas_event -L /nas/site/filename.cfg
EventLog : will load /nas/site/filename.cfg... done
4. From the /nas/site directory, verify that the file has been loaded.
$ more nas_eventlog.cfg
/nas/sys/nas_eventlog.cfg
/nas/site/filename.cfg
Discovering and Saving SCSI Devices

A listing of all devices is maintained in a database contained within the Control Station. You can periodically probe the storage system to discover all of the present devices and save them back to the device table.

CAUTION
Discovering and/or saving SCSI devices is a time-consuming action and is therefore best performed when demand on the system is lowest. Discovering and saving SCSI devices may cause a temporary disruption of Celerra File Server service.

The FC4700-2 storage system supports only Fibre Channel connectivity.
Command

To discover and save all SCSI devices for a Data Mover, type:

$ server_devconfig movername -create -scsi -all

Sample Output
server_3 : SCSI devices :
chain= 0, scsi-0 symm_id= 0 symm_type= 0
tid/lun= 0/0 type= disk sz= 2076 val= 1 info= 526411005221
tid/lun= 1/0 type= disk sz= 8718 val= 2 info= 52641105C221
tid/lun= 1/1 type= disk sz= 8718 val= 3 info= 52641105D221
tid/lun= 1/2 type= disk sz= 8718 val= 4 info= 52641105E221
tid/lun= 1/3 type= disk sz= 8718 val= 5 info= 52641105F221
tid/lun= 1/4 type= disk sz= 8718 val= 6 info= 526411060221
tid/lun= 1/5 type= disk sz= 8718 val= 7 info= 526411061221
tid/lun= 1/6 type= disk sz= 8718 val= 8 info= 526411062221
tid/lun= 1/7 type= disk sz= 8718 val= 9 info= 526411063221
tid/lun= 1/8 type= disk sz= 8718 val= 10 info= 526411064221
tid/lun= 1/9 type= disk sz= 8718 val= 11 info= 526411065221
Rebooting Data Movers

Command

To reboot a Data Mover, type:

$ server_cpu movername -reboot -m now

You can verify when the system comes back up with the server_uptime command.
Managing Volumes
Once you have configured the volumes and have begun using them to store file systems, you can periodically check their capacity and the remaining amount of unused space and extend them, if needed. You can also rename and delete volumes. Procedures for managing volumes are:
• Checking volume capacity
• Extending meta volumes
• Renaming volumes
• Deleting volumes
Checking Volume Capacity

Command

To obtain volume size, type:
$ nas_volume -size volume_name
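For example, to obtain the size of the meta volume meta1 shown in earlier sample output, you would type:

$ nas_volume -size meta1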
With a meta volume, if the full capacity of the volume is used (100%), you can extend the volume by adding another volume to the configuration.
Extending Meta Volumes

You can increase the total size of a meta volume by adding another volume to the configuration. The volume added to the meta volume can be a stripe volume, a slice volume, a disk volume, or another meta volume, as long as it is not in use at the time of concatenation. Once the meta volume has been extended, the new size of the meta volume is equal to the sum of the volumes contained within the new configuration.
Important: You cannot extend a meta volume that is being used by a file system.
Command

To extend a meta volume, type:
$ nas_volume -xtend volume_name volume_name
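For example, to extend the meta volume meta1 with a hypothetical unused volume named vol2, you would type:

$ nas_volume -xtend meta1 vol2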
The meta volume is extended to include the volume or volumes specified. The size of the meta volume increases by the sum of the volume(s) added.

Troubleshooting Tip

If the following message appears:
Error: volume : item is currently in use by filesys: file
A file system is mounted on the meta volume you are trying to extend. Because the added volume is now combined into the meta volume configuration, even if you remove the file system that resides on the meta volume, the added volume remains in use until the meta volume is deleted. If you want to extend a file system while it is in use, you must use the nas_fs -xtend command. For details on this procedure, see Extending a File System on page 8-7.
Renaming Volumes
Volumes are given default names when configured unless you specify a name when you create the volume. After volume configuration is complete, you can rename the volumes.

Command

To rename a meta volume or stripe volume, type:
$ nas_volume -rename old_name new_name
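For example, to rename the meta volume meta1 to a hypothetical name acct_vol, you would type:

$ nas_volume -rename meta1 acct_vol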
Cloning Volumes
You can clone a stripe, slice, or meta volume using the nas_volume -Clone command. This enables you to make an exact copy of a volume.

Command

To clone a volume, use the following command:
$ nas_volume -Clone volume_name -option disktype svol:dvol
where:

Parameter: volume_name
Value: Name of the volume being cloned.

Parameter: disktype
Value: Volume type: BCV for business continuance volume (TimeFinder/FS operations; not available for FC4700-2); STD for standard volume; R1STD and R1BCV for STD or BCV volumes that have been mirrored.

Parameter: svol:dvol
Value: Source and destination volumes, respectively.
Deleting Volumes
Once volumes are no longer used by a file system, you can delete them or change your volume configuration. If the status of a volume is In Use, there may still be a file system using the volume. If a volume was used to extend a meta volume, the added volume is combined into the meta volume configuration; even if you remove the file system residing on the meta volume, the added volume remains in use until the meta volume is deleted.
You must remove the file system before deleting the volume. Volumes must not be In Use when you are deleting them.
Command

To delete a meta volume, type:

$ nas_volume -delete volume_name
Command

To delete a slice volume, type:
$ nas_slice -delete slice_name
Troubleshooting Tip
A file system is using the meta volume you are attempting to delete.
Controlling Access to System Objects

Task 1: Create a Linux User

Before you can create an ACL entry for a specific user, you must first create a user profile. If NIS (Network Information Service) is available, the Celerra File Server attempts to resolve user names and passwords by searching the NIS database.

Without NIS

If you do not have NIS, perform the following steps to assign a user account name and password to one or all Data Movers in the Celerra cabinet:

1. Log in to the Celerra File Server Control Station as root and type nasadmin for the password.

2. From /nas/sbin, type the following to set up a user account name (John in this example):

# server_user server_2 -add -passwd John

This command launches a program script. You must supply information when prompted.
3. The User ID field is the first prompt, and input is mandatory. Enter any integer you wish and press Enter. For example:
User ID: 11
4. The next prompt is Group ID. This field is also mandatory and can be any integer. For example:
Group ID: 111
5. Input for the next three prompts is optional. Enter appropriate information or just press Enter after each prompt.

Comment: This is John's account.
Home directory: /home/John
Shell: /bash
6. The last two prompts set a password for the new user and are also mandatory. Enter the string you wish the new user to use for access to the Data Mover.

Changing password for user John
New passwd: xxxx
Retype new passwd: xxxx
server_2 : done

The Celerra File Server accesses a Data Mover by its internal name (server_n), which is server_2 in this example.
7. Repeat steps 1 through 6 for each Data Mover.

Command

To assign the same user account and password to ALL Data Movers in the Celerra cabinet, type:

# server_user ALL -add -passwd Auser
The user name Auser is an example. Follow steps 3 through 6 after executing this command example.
Task 2: Create an Access Control Level Table

Using the nas_acl command, you can create an access control level table composed of entries that define the privileges to be allowed for specified users and groups.
Important: The user must have been previously created in Linux; if not, refer to Task 1: Create a Linux User on page 9-9.
The name is the name associated with the entry. After specifying either -user or -group, the numerical_id is the applicable UID or GID. The acl_level represents the level assigned to the entry. Numbers 2 through 4 are available ACL level inputs, with 2 being the most privileged and 4 the least. Levels 2, 3, and 4, which are established by default, are:

2 -- admin -- the most privileged.
3 -- operator -- includes privileges for the observer level.
4 -- observer -- includes privileges for levels that may exist up to level 4.

Levels 5 through 9 may be created using nas_acl, then used as input.

Once you have created an ACL table, you can assign ACLs for file systems, Data Movers, and volumes using the -acl option of each object's associated command. Task 3: Establishing Access Control Lists for Objects on page 9-11 contains details on creating ACLs on Data Movers, file systems, or volumes.
Task 3: Establishing Access Control Lists for Objects

By setting an ACL for an object, you define the privileges for each user trying to access the object. When a user or group tries to access a particular object, the access control level table as established by nas_acl is verified against the access control list established by the relevant command for the object. Commands used to establish ACLs are listed in the following table.
Table 9-1 Creating ACLs

Use this command        Example
nas_fs -acl             $ nas_fs -acl 432 fs_1
nas_server -acl         $ nas_server -acl 432 server_3
nas_volume -acl         $ nas_volume -acl 432 vol65
Access control values 2, 3, and 4 are already established by default and are as follows:

2 -- admin -- is the top of the hierarchy and the most privileged.
3 -- operator -- includes privileges for the observer level and any other levels that may exist up to level 9.
4 -- observer -- includes privileges for levels that may exist up to level 9.

Levels 5 through 9 may be created using nas_acl, then used as input.

The Read-Write-Delete Columns

When a value is assigned to an object, its digits are entered into one of four positions representing the degrees of operations (owner-read-write-delete). A single digit from the entered acl_value is applied to each position, starting at the delete column. For example:

Owner    Read    Write    Delete
         4       3        2
The digit in the delete column determines the access level required to issue delete commands to the specified objects. The digit in the write column determines the access level required to issue write commands to the specified objects. The digit in the read column determines the access level required to issue read commands to the specified objects. The number(s) that are left over, if any, are applied to the owner column.
Any number(s) that appear in the owner column relate directly to the index number that appears when an entry is created in the ACL table. A number entered in the owner column indicates that the user created by that entry in the ACL table is the owner of the object. For example:

Owner    Read    Write    Delete
2        0       0        0
Rules to Remember
• The root user always has universal access; nasadmin, which is usually indicated as the owner (created by default as index entry 1), is treated like any other user.
• When no owner is specified, a "0" in the read-write-delete column indicates universal access.
• If an owner is specified, a "0" in the read-write-delete column indicates no access for anyone but the owner.
• If an owner is specified, a "1" in the read-write-delete column indicates no access for anyone but the owner.
For the purpose of these examples, assume the following ACL table:
$ nas_acl -list
index   type   level      num_id   name
1       user   admin      101      nasadmin
2       user   operator   102      xy
3       user   admin      103      xz
4       user   observer   104      xt
5       user   operator   105      yy
6       user   admin      106      yz
For ACL value:

Owner    Read    Write    Delete
         4       3        2

• Delete is permitted for the admin only.
• Read-write is permitted for the operator and admin.
• Read is permitted for the observer, operator, and admin.
Note: The owner value is blank. If a 4-digit value is not specified, only the other types of operations (read-write-delete) are determined for users.
• No access is permitted for anyone except the owner.
• Owner privileges are allowed for the nasadmin user.
• The owner is allowed to read-write-delete.
Alternatively, if no owner had been specified, a "0" in the read-write-delete columns would have indicated universal access.
For ACL value:

Owner    Read    Write    Delete
3        3       0        0

• Read is permitted for users with operator privileges (includes admin).
• The owner of the object (index 3 from the ACL table) is allowed to read-write-delete.

For ACL value:

Owner    Read    Write    Delete
2        6       4        2

• Delete is permitted for admin.
• Write is permitted for observer (includes operator and admin).
• Read is permitted for the user with an ACL level of 6 as defined by nas_acl.
• The owner of the object (index 2) is allowed to read-write-delete.

For ACL value:

Owner    Read    Write    Delete
         3       2        2

• Delete is permitted for admin.
• Write is permitted for admin.
• Read is permitted for operator and admin.
System Parameters
The /nas/site/slot_param file contains parameters that pertain to the entire system, including all Data Movers. You may want to change parameter values in order to modify the behavior of the Celerra File Server to make it more compatible with your environment. Each parameter has a default value; you have to add a parameter entry only if you want to override the default. If you modify slot_param, you must reboot all of the Data Movers for the changes to take effect. The parameters are read from the file in sequence; if there is more than one entry for the same parameter, the last entry prevails.
The parameter files are ASCII text files; modify them using a text editor. Parameters are listed in sequence in the file and have the following format:

param module parameter=value

Important: After modifying the /nas/site/slot_param file, you must reboot all of the Data Movers.
Table 9-2 lists the system parameters that are supported.

Table 9-2 System Parameters

Module: ana
Parameter: rxburst
Example or Values: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 (256=default)
Description: Sets the maximum number of received packets to be processed and sent to upper streams modules in each invocation of the driver's real-time routine. If this number is reached and more received packets require processing, the real-time routine schedules another invocation of itself to process the packets and relinquishes the CPU. Setting this value higher increases the number of packets processed per interrupt. Setting this value lower increases the number of interrupts required to process a given number of packets.

Module: cifs
Parameter: useUnixGid
Example or Values: 0 (default), 1
Description: Sets the GID mapping for files created on a Windows NT client. 0 assigns the GID of the Primary Domain group to which the user belongs. 1 assigns the Windows NT user's GID (as found in the GID field of the /etc/passwd file or NIS database entry).

Module: cifs
Parameter: ntWildcardMode
Example or Values: 0, 1 (default)
Description: Sets the question mark (?) wildcard matching format for files on the Data Mover. 0 sets Windows 95/98 matching (strict matching). 1 sets Windows NT matching (loose matching).

Module: cifs
Parameter: simulateNTFS
Example or Values: 0, 1 (default)
Description: Sets the file system type identifier that is returned to a CIFS client. 0 sets the identifier to UxFS (the native Celerra File Server file system). 1 sets the identifier to NTFS (Windows NT File System).

Module: cifs
Parameter: maxVCThreads
Example or Values: # of threads
Description: Sets the maximum number of threads for multiple Data Movers; this is used with virus checking.

Module: cifs
Parameter: tcpkeepalive
Description: Sets the TCP timeout for CIFS. Use the format 0xFFxxyyzz, where xx = first timeout in minutes, yy = number of probes after the xx minutes, and zz = time (in seconds) between probes after the xx minutes.

Module: file
Parameter: asyncthreshold
Example or Values: 1, 32 (default)
Description: Sets the maximum number of blocks that are cached by NFSv3 asynchronous writes. The default value for this parameter (32) provides optimal performance for system throughput. Setting the value to 1 improves the accuracy of file system quota enforcement.

Module: ip
Parameter: reflect
Example or Values: 0, 1 (default)
Description: Enables packet reflect for the system. param ip reflect=0 disables packet reflect; param ip reflect=1 (the default) enables packet reflect.

Module: nfs
Parameter: rstchown
Example or Values: 0 or 1 (default)
Description: Sets restricted file ownership. When set to 1 (default), only the superuser can change the owner of a file, and the current owner can only change the group ID to a group to which the owner belongs. When set to 0, chown and chgrp follow the less restrictive POSIX semantics, enabling the owner of a file to change the file ownership or group ID to any other owner or group.

Module: nfs
Parameter: v3xfersize
Description: Changes the default transfer size for NFSv3 reads and writes.

Module: quota
Parameter: maxuid
Example or Values: number indicating the desired maximum UID (0=default)
Description: Sets the maximum user ID to which quotas apply. An entry of 0 indicates no limit. This prevents problems caused by accidentally imposing quotas on very large UIDs. We recommend that you set this parameter to the highest UID the site expects to support; if you need to go higher than this, the parameter can be changed later to a higher value.

Module: quota
Parameter: policy
Example or Values: blocks, filesize
Description: Specifies the quota checking policy to be used to keep track of disk usage. (a) If policy=blocks, quota is based on the number of file system blocks (8K) allocated. If policy=filesize, quota is based on file usage, in 1K increments.

Module: shadow
Parameter: asciifilter
Example or Values: 0 (default), 1
Description: To avoid problems in CFS file name storage, if I18N is not turned on, non-ASCII characters must not be used in the file name. If you are using Kerberos authentication by using compname to define the server name on the Data Mover, you must have either this parameter or I18N internationalization enabled. When the parameter is set to on, and at least one compname has been created, it is not possible to reset the parameter to 0 until all the compnames are removed. This parameter can also be set for each Data Mover.

a. Before you change this parameter, you must turn quotas off. After you change this parameter, you must reboot the Data Mover and turn quotas back on. Refer to the Celerra File Server Technical Note: Using Quotas, for specific procedures.
Example
Assume you modify the parameter file to establish a maximum UID that is affected by user quotas, using the quota maxuid parameter in the /nas/site/slot_param file, as in the following entry:
param quota maxuid=100000
If you then try to enforce a quota using the following command and parameters:
$ nas_quotas -edit -fs fs_1 5000 100000 200000
The quota limit is set for UIDs 5000 and 100000 but is ignored for UID 200000. In this example, nas_quotas -edit opens the quota editor for uid > maxuid, but displays the limits as 0 and ignores the values that are entered. An error message similar to the following appears in the server_log:
919708458: CFS: 3: invalid uid (200000), greater than maxUid (100000)
Server Parameters
These parameters are specific to the Data Mover for which they are set, and they are included in the /nas/server/server_x/param file (where x is the server number). Server parameters operate like system parameters, except that they affect only the Data Mover with the edited param file.
Important: After you modify /nas/server/server_x/param, you must reboot the Data Mover for which you changed the param file.
Table 9-3 Server Parameters

Module: cifs
Parameter: maxVCThreads
Example or Values: # of threads
Description: Sets the maximum number of threads for a single Data Mover; this is used with virus checking.

Module: cifs
Parameter: srvmgr.globalShares
Example or Values: 0 (default) or 1
Description: 0 = global shares disabled (default); shares created by Windows clients are specific to the NetBIOS name. 1 = global shares enabled.

Module: NDMP
Parameters: bufsz and ntape
Example or Values: ntape = #_of_tape_drives_attached_to_the_Data_Mover (default=0)
Description: bufsz and ntape are used together for setting NDMP backup. For example:
param NDMP ntape=2
param NDMP bufsz=128

Module: PAX
Parameter: nbuf
Description: nbuf is added to bufsz and ntape when PAX backup is involved. For example:
param NDMP ntape=2
param NDMP bufsz=128
param PAX nbuf=8

Module: shadow
Parameter: asciifilter
Example or Values: 0 (default), 1
Description: To avoid problems in CFS file name storage, if I18N is not turned on, non-ASCII characters must not be used in the file name. If you are using Kerberos authentication by using compname to define the server name on the Data Mover, you must have either this parameter or I18N internationalization enabled. When the parameter is set to on, and at least one compname has been created, it is not possible to reset the parameter to 0 until all the compnames are removed. This parameter can also be set for the entire system.

Module: ufs
Parameter: gid32
Example or Values: 0 (default) or 1
Description: 0 = disables the 32-bit GID feature for the Data Mover; any file system created on the Data Mover with this setting supports only 16-bit GIDs. 1 = enables the 32-bit GID feature for the Data Mover; any file system created on the Data Mover with this setting supports 32-bit GIDs with a maximum value of 2 billion. Important: Before enabling this parameter, refer to Creating 32-bit GIDs on page 8-12 for its operating restrictions.
10
Control Station Utilities
This chapter describes the backup and restoration procedures necessary to ensure a successful recovery of the nas database. It also describes how to enable daemons and reboot or halt the Control Station.
• Database Backup ..............................................................................10-2
• Enabling Daemons ...........................................................................10-3
• Rebooting the Control Station(s)....................................................10-5
• Halting the Control Station(s) ........................................................10-7
Database Backup
Residing within the Control Station is the nas database that is created during the installation of Celerra File Server. The nas database stores specific information required for each Data Mover. The Celerra File Server automatically performs a backup of the entire database every hour and saves it to a file named nasdb_backup.1.tar. This file is located in the /home/nasadmin directory. To back up this file, you can use FTP to copy the file to another destination on your network.
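For example, from an administration host on your network you might retrieve the backup file with a standard FTP session (the host name control_station is illustrative):

$ ftp control_station
ftp> cd /home/nasadmin
ftp> binary
ftp> get nasdb_backup.1.tar
ftp> bye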
CAUTION EMC strongly recommends that this database file be regularly copied from the Control Station and saved to a remote location, especially when changes to the configuration have been implemented. If a restoration from the backup file of the entire database is required, it will be performed by qualified EMC service personnel.
Enabling Daemons
There are daemons that must be running on the Control Station at all times. If for some reason they become disabled, this can cause certain facilities to fail. By default, after installation, the daemons should be activated and running; however, should you find that they are not, perform the following procedures.
Configuring NTP
Since NTP is not the default timing service for Linux on the Control Station, you must first create an NTP configuration file and start the NTP daemon before the timing service can operate. You only have to perform this procedure if you want to synchronize your Data Movers to the Control Station. After you complete this procedure, start time synchronization using the server_date command. See Configuring Time Services on page 4-9. To configure NTP as the time service protocol on the Control Station, follow these steps:
1. Change to root and enter the root password.
2. To create the config file using the vi editor, type:
   # vi /etc/ntp.conf
3. Enter the following line into the ntp.conf file:
   server IP_address
   where IP_address is the address of the NTP time broadcast server. The customer provides this address.
   Example: server 168.159.9.10
4. Save the file, then exit.
5. To start the NTP daemon, type:
   # /usr/rc.d/init.d/xntpd start
6. To verify that the daemon is running, type:
   # ps -e|grep in.xntpd
Troubleshooting

If the NTP daemon was not running in Step 6, follow these steps:

1. To check the local time of the Control Station, type:
   $ date
2. Compare this value with the time value that is being provided by the NTP server. If they are not relatively close (within several minutes), change the local time on the Control Station so that the two are more closely synchronized. Type:
   $ date hhmm
3. Return to step 6 in the previous section.
To view whether the nas daemons are enabled at the Control Station, type:

$ ps -e|grep nas

The status of the nas daemons appears on your screen. If the daemons are not running, you must reboot your Control Station.
For the Celerra File Server Manager to be able to manage your Celerra File Server system, the httpd daemons must be running on the Control Station at all times. To view whether the httpd daemons are enabled at the Control Station, type:

$ ps -e|grep httpd

The status of the httpd daemons appears on your screen. If the daemons are not running, you must reboot your Control Station. Refer to Rebooting the Control Station(s) on page 10-5 to perform a reboot of your Control Station.
Rebooting the Control Station(s)

Locally

To locally reboot a single Control Station or both Control Stations, perform one of the following procedures.

Rebooting a Single Control Station

If your system has one Control Station, follow these steps to reboot it:
1. Change to root and enter the root password.
2. Type:
   reboot
   Result: The Control Station reboots.
If your system has two Control Stations, follow these steps to reboot:
1. To determine which Control Station is functioning as the primary and which as the secondary, type:
   /nas/sbin/getreason
   Result: The primary Control Station returns a reason code of 10; the secondary Control Station returns a reason code of 11.
2. Place the Control Switch (located on the back of the front door) to whichever Control Station is functioning as the primary.
3. Change to root and enter the root password.
4. Type:
   reboot
   Result: The primary Control Station reboots and fails over to the secondary Control Station.
5. Reset the Control Switch to communicate with the new primary Control Station.
6. Change to root and enter the root password.
7. Type:
   reboot
   Result: The new primary Control Station reboots and fails back to the original Control Station.
Remotely
From the command-line interface, using Telnet, perform the following steps:
1. Change to root and enter the root password.
2. Type:
   reboot
   Result: The Control Station performs an orderly reboot.
To continue working remotely after the reboot, establish another Telnet connection.
Halting the Control Station(s)

Locally

To halt a single Control Station or both Control Stations, perform one of the following procedures.

Halting a Single Control Station

If your system has one Control Station, follow these steps to perform a halt:
1. Change to root and enter the root password.
2. Type:
   /sbin/init 0
   Result: The Control Station performs an orderly shutdown.
If your system has two Control Stations, follow these steps to perform a halt:

CAUTION
You must halt your secondary Control Station before halting your primary.
1. To determine which Control Station is functioning as the primary and which as the secondary, type:
   /nas/sbin/getreason
   Result: The primary Control Station returns a reason code of 10; the secondary Control Station returns a reason code of 11.
2. Place the Control Switch (located on the back of the front door) to whichever Control Station is functioning as the secondary.
3. Change to root and enter the root password.
4. Type:
   /sbin/init 0
   Result: The secondary Control Station halts and fails over to the primary Control Station.
5. Reset the Control Switch to communicate with the primary Control Station.
6. Change to root and enter the root password.
7. Type:
   /sbin/init 0
   Result: The Control Station performs an orderly shutdown.
Remotely
From the command-line interface, using Telnet, perform the following steps:
1. Change to root and enter the root password.
2. Type:
   /sbin/init 0
   Result: The Control Station performs an orderly shutdown.
To reboot the Control Station, you must use the Celerra cabinet console.
11
Troubleshooting
This chapter contains procedures, information, error messages, and tips for troubleshooting your system.
• Troubleshooting................................................................................ 11-2
• Checking Log Files......................................................................... 11-11
• Monitoring System Activity ......................................................... 11-12
Troubleshooting
While using your system, various messages may appear indicating successful command execution or, in some cases, a failure. Error messages appear when there is a fault in the command syntax or the system, while system messages are routinely reported to the log file. Both types of messages reflect the performance of your system and can be used to monitor system efficiency and to troubleshoot problems.

In each error message table, the first column contains the error message you may see when you attempt to execute your command; the second column presents the probable cause and the solution.

In some cases, no message appears relating to a problem. Instead, situations present themselves that may indicate a problem. These occurrences are represented in this chapter by scenario tables. In each scenario table, the first column contains the symptom you may see when you attempt to execute your command, the second column the probable cause, and the third the solution.
After installation has been completed and you have begun to configure your Celerra File Server, there are a few error messages that may appear indicating a specific condition. See Table 11-1 for a list of error messages, their probable causes, and solutions.
Table 11-1 Error Messages

Error Message: A network error occurred: unable to connect to server (TCP Error: Broken Pipe)
Probable Cause/Solution: When attempting to put the Control Station path into Netscape, the path may have been entered incorrectly, or the server connections may be down. The server may be down, unreachable, or the daemons may not be running. First, verify that the Control Station is operational, then either enter the proper Control Station IP address or refer to Enabling Daemons on page 10-3. Try connecting again later.

Error Message: NAS_DB not defined
Probable Cause/Solution: After installing the software package, you may not have logged out. Before executing any commands, you must first log out and then log back in.

Error Message: This location (URL) is not recognized... Check the location and try again.
Probable Cause/Solution: When attempting to put the Control Station path into Netscape, one of the following may have occurred: the path may have been entered incorrectly, the server connections may be down, or the daemons are not running. First, verify that the Control Station is operational, then either enter the proper Control Station IP address or refer to Enabling Daemons on page 10-3.
Volume Troubleshooting
During volume management, certain error messages may appear indicating an error in your command execution. Table 11-2 contains examples of messages that you may encounter and how to remedy them.
Table 11-2 Volume Error Messages

Probable causes and solutions for common volume error messages:

• You may have attempted to extend a disk, stripe, or slice volume. Meta volumes are the only volume type that can be extended. Select a meta volume for extension, then retry.
• You may be trying to create a file system on a volume other than a meta volume. File systems can be created and stored only on meta volumes. Create a meta volume for the file system, then retry.
• You may be trying to extend a meta volume that is in use by a file system. Select a meta volume that is not in use, then retry.
• Your volume has run out of space and is unable to accommodate additional files or file systems. You can extend a meta volume by adding additional volumes to your base meta volume after removing file systems.
• You may be attempting to execute a command against a root file system or volume to which you do not have access. Select a non-root volume or file system.
This section contains two tables to assist in troubleshooting file system problems. Table 11-3 consists of error messages that may occur while you are performing specific file system functions, while Table 11-4 contains scenarios.
There can be more than one symptom for the same problem, and in some cases more than one probable cause.
Table 11-3 File System Error Messages

Error Message: filesystem is mounted, can not delete
Probable Cause/Solution: You may be trying to delete a mounted file system. Verify that this is the correct file system to be deleted, permanently unmount the file system, then retry.

Error Message: filesystem is not mounted
Probable Cause/Solution: You may be attempting to execute a command on a file system that must be mounted. Mount the file system, then retry.

Error Message: filesystem unavailable for read_write mount
Probable Cause/Solution: The file system is already mounted by another Data Mover. Verify the list of mounted file systems.

Error Message: item is currently in use by movername
Probable Cause/Solution: The file system is still mounted. Unmount the file system, then retry. You may also be trying to execute a file system check against a mounted file system; unmount the file system, then retry.

Error Message: Mount Point Name [name] is not valid. Please Re-enter
Probable Cause/Solution: The mount point wasn't entered correctly or doesn't exist. When entering the mount point name, the slash (/) that precedes the mount point name may have been omitted. Type a slash before entering the mount point name, then retry. If the error message reappears, check your list of mount points. The value being typed may not exist or may not exist as typed (typo); check that you are entering the correct value and ensure that the uppercase and lowercase letters match. Note: All mount points begin with a forward slash (/).
Table 11-3 File System Error Messages (continued)

Error Message: Path busy: filesystem fsname is currently mounted on mountpoint
Probable Cause/Solution: A file system is already using the mount point you are attempting to mount. Create a new mount point, or unmount the file system, then retry.

Error Message: requires root command
Probable Cause/Solution: You may be attempting to execute a command against a root file system or volume to which you do not have access. Select a non-root volume or file system, then retry.

Error Message: undefined netgroup
Probable Cause/Solution: The client you are trying to export for is not recognized. Enter the client's name into the system, then retry.
Table 11-4 File System Scenarios

Symptom: A file system fails to mount.
Probable Cause: There are many probable causes for this scenario. Many provide an error message, though occasionally there is none. In this case, the mount table entry already exists.
Solution: Perform a mount all to activate all entries in the mount table. Obtain a list of mounted file systems, then observe the entries. If the file system in question is already mounted (temporarily or permanently), perform the necessary steps to unmount it, then retry.

Symptom: An unmounted file system reappears in the mount table after a system reboot.
Probable Cause: The file system may have been temporarily unmounted prior to the reboot.
Solution: Perform a permanent unmount to remove the entry from the mount table.
When a file system is full, if you try to copy or create a big file, you will receive a message indicating that the file system is full. If you create a small file, you will not receive an error message; however, the file size is zero and no data is held.
Table 11-5 lists Data Mover error messages, while Table 11-6 describes potential problems that may occur with Data Movers.
Table 11-5 Data Mover Error Messages

Several standby-configuration errors share these probable causes and solutions: when creating a standby Data Mover, the network interface cards between the standby and the primary may not be identical (reconfigure and install a network interface configuration to a Data Mover, or select another Data Mover); the interface name may not have been entered correctly (check the list of network interfaces, then retry); or the policy type defined when creating the standby relationship is invalid or entered incorrectly (valid policy types are auto, manual, and retry).

Error Message: is a standby server
Probable Cause/Solution: You are trying to execute a command to a Data Mover that is of type standby and is therefore unavailable.

Error Message: is in a faulted state
Probable Cause/Solution: You are trying to execute a command to a Data Mover that is in a faulted state and is therefore unavailable. You must first restore the original primary Data Mover before executing a command. You may also be trying to activate a Data Mover that is not of type=standby; create a standby Data Mover, then retry.

Error Message: non-root filesystems are mounted
Probable Cause/Solution: You may be trying to change a Data Mover with mounted file systems to type=standby. Select another Data Mover for standby or unmount all file systems.

Error Message: server not responding
Probable Cause/Solution: A Data Mover is either rebooting in the background or a failure has occurred. First attempt to reboot your Data Mover, then activate your standby Data Mover, then investigate further or call EMC Customer Service.
Table 11-5 Data Mover Error Messages (continued)

Error Message: Server_n replace in progress... failed Error: replace_net:interface:failed to complete command
Probable Cause/Solution: You may need to reboot your server. Reboot your Data Mover, then retry.

Error Message: Slot #, Over temperature warning
Probable Cause/Solution: The internal temperature of the Data Mover has reached 63 degrees C. Ensure proper air flow for SNSD and adequate ambient operating temperature, then call EMC Customer Service.

Error Message: Slot #, Over temperature failure
Probable Cause/Solution: The internal temperature of the Data Mover has reached 71 degrees C, therefore causing it to fail. Call EMC Customer Service.

Error Message: software license to enable group not installed
Probable Cause/Solution: You may have attempted to use an unavailable feature.

Error Message: standby is not configured
Probable Cause/Solution: You may have attempted to activate a Data Mover that is not of type=standby; set the Data Mover to type=standby, then retry. If you have a standby linked to more than one primary, the standby may have already been activated for the other primary Data Mover; you can designate another standby to take over functionality for your failed primary. When a failover occurs, the original primary Data Mover becomes the standby for the acting primary (the Data Mover originally set to type=standby). You may also have attempted to perform a restore of a Data Mover that has not undergone a failover.

Other probable causes and solutions: a command may already be in progress (wait until execution is complete, then retry), or the Data Mover you are trying to reach may have lost connectivity (verify the state of the Data Mover by checking uptime, then reboot).
Table 11-6  Data Mover Scenarios

Scenario: When attempting to view SCSI devices, the system hangs.
Probable Cause: The Data Mover may have lost its connection, either physically or from the network, or may be out of memory or free space; or the storage system is off-line.
Solution: Reboot the Data Mover, then check free space and memory. If these appear acceptable, verify that the cables are secure, then perform a ping or view system uptime. Verify that the storage system is online, then retry.

Scenario: When installing a replacement Data Mover into a slot, the movername is not the same as the original movername.
Probable Cause: The new movername does not assume the old movername. Instead, it takes the next available movername. For example, if you have 5 Data Movers, the next Data Mover name would be server_6.
Solution: You can either rename the Data Mover or leave it with the newly assigned name.
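Where a solution calls for performing a ping or viewing system uptime, the connectivity check can be run from the Control Station, which runs Linux. A minimal sketch, assuming the Data Mover is reachable over the internal network at 192.168.1.2 (a placeholder address):

    # Verify basic network connectivity to the Data Mover:
    ping -c 3 192.168.1.2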
Error messages indicating a problem with a Data Mover may also appear in the system log; therefore, while troubleshooting, you should periodically check the log for these messages. They appear only in the system log and are not displayed on your screen during an active session.
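The server_log command, described under Log Files later in this chapter, displays the system log from the command line. For example, assuming a Data Mover named server_2:

    # Display the system log for the Data Mover server_2:
    server_log server_2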
See Table 11-7 for a listing of troubleshooting error messages and general scenarios.
Table 11-7  System Log Error Messages and General Scenarios

Error Message: KERNEL: 3: addrspac: cannot find 1 pages, rechecking, caller:15cf02
Probable Cause/Solution: The system is hung. Call EMC Customer Service.

Error Message: UFS: 3: could not allocate cylinder group
Probable Cause/Solution: The system is hung. Call EMC Customer Service.

Error Message: LIB: 4: malloc() for 330 bytes failed in more_memory
Probable Cause/Solution: The system is hung. Call EMC Customer Service.

Scenario: When attempting to perform a Save As of a log file, Netscape crashes.
Probable Cause/Solution: You are in continuous update mode instead of snapshot mode. You cannot perform a Save As of a log file while there are continuous updates. Select Snapshot before performing a Save As.
Log Files

Current
Output: Displays the current log updates.
Command Line Equivalent: server_log movername

Complete
Output: Displays a complete history of logs for a Data Mover.

System
Output: The System Log displays a cumulative list of system activities and messages from the most recent reboot.

Command
Output: Displays a log of all commands executed for the Celerra File Server.
Location: NAS_DB/log/com_log
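For example, to view the current log for a Data Mover and the cumulative command log (server_2 is an example movername, and NAS_DB is assumed here to be set as an environment variable naming the nas database directory):

    # Current log updates for one Data Mover:
    server_log server_2

    # All commands executed for the Celerra File Server:
    cat $NAS_DB/log/com_log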
Monitoring System Performance

Type of Display: Protocol
Output: Packet statistics and connection statuses are displayed.
Command Line Equivalent: server_netstat movername -s -p protocol

Type of Display: Routing Table
Output: Routing table statistics are displayed.
Command Line Equivalent: server_netstat movername -r

Type of Display: Interface
Output: Statistics regarding specific interfaces are displayed.
Command Line Equivalent: server_netstat movername -i

Type of Display: NFS V2 and V3
Output: NFS statistics are displayed.
Command Line Equivalent: server_nfsstat movername -n

Type of Display: RPC
Output: RPC statistics are displayed.
Command Line Equivalent: server_nfsstat movername -r

Type of Display: Data Mover
Output: All Data Mover statistics are displayed.
Command Line Equivalent: server_nfsstat movername

Type of Display: System
Output: Threads information, memory status, and the state of the CPU are displayed, along with TCP and/or UDP connections and server message block (SMB) statistics.
Command Line Equivalent: server_sysstat movername
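For example, the following commands collect a quick performance snapshot for a Data Mover named server_2 (an example movername; tcp is an assumed value for the protocol argument):

    # Packet statistics for a specific protocol:
    server_netstat server_2 -s -p tcp

    # Routing table and interface statistics:
    server_netstat server_2 -r
    server_netstat server_2 -i

    # NFS, RPC, and overall Data Mover statistics:
    server_nfsstat server_2 -n
    server_nfsstat server_2 -r
    server_nfsstat server_2

    # Threads, memory, and CPU state:
    server_sysstat server_2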
A
Technical Specifications
This appendix provides technical specifications for the Celerra File Server and covers the following topics:
- Physical Data .....................................................................A-2
- Environmental Data..........................................................A-2
- Power Requirements ........................................................A-3
- Hardware/Software Specifications ................................A-4
Physical Data
Depth: 36.75 in. (93.35 cm)
Width: 24 in. (60.96 cm)
Height: 73.875 in. (187.64 cm)
Access (Raised) Floor Tile Requirements: EMC assumes 24 in. (60.96 cm) floor tiles and requires 11 in. (28 cm) raised floor clearance for cabling. Weight support for 1175 lb. (533 kg) is required.
Service Area: 48 in. (1.22 m) service clearance is required at the front and rear of the Celerra File Server cabinet.
Floor Space for Cabinet: 7.5 sq. ft. (0.70 sq. m)
These specifications apply to the Celerra File Server cabinet only. Multi-enclosure configurations will vary.
Environmental Data
Operating Temperature: 59 to 90 degrees F (15 to 32 degrees C)
Operating Altitude (maximum): Sea level to 8,000 ft. (2,500 m)
Operating Humidity: Between 10% and 80%, noncondensing
Power Requirements
For the Celerra File Server Cabinet

The customer must supply a single (or dual, if required) Russellstoll 3933 VAC connector. A single (or dual, if required) Russellstoll 3750 208 VAC power plug is shipped with the Celerra File Server. The local site circuit must be rated at 30 Amps.

For the Modem Line

The modem line requires:
- An analog phone line with an RJ11 jack.
  CAUTION: This phone line must be separate from the storage system phone line.
- A 110 VAC power receptacle for a modem connection (for configurations in the United States).
Hardware/Software Specifications
Data Movers
- FTP
- NFSv2 and NFSv3 concurrently over TCP/IP and UDP/IP
- CIFS over TCP/IP
- Fast Ethernet (10Base-T/100Base-TX) and Gigabit Ethernet (FDDI and ATM-OC3 are supported in earlier versions of the Celerra File Server)
- UxFS File System
- UNIX archive utilities (tar, cpio)
- SNMP MIB II manageability
- Redundant Ultra Fast Wide Differential SCSI interfaces
- Autonomous Data Mover architecture
- Data Mover failover

Control Station
- Ethernet (FDDI is supported in earlier versions of the Celerra File Server)
- SNMP MIB II manageability
- Dual redundant Control Stations (optional second Control Station)
- Telnet manageability
- Remote management with an HTTP server management interface

Celerra Cabinet
- Battery backup
- N+1 load-sharing power supplies
- Hot-swappable subassemblies
- Redundant internal Ethernet for environmental status monitoring and control
- Auto-Call remote maintenance parameter monitoring
- Power Consumption: 1.34 kVA
- Heat Dissipation: 4,563 BTU/hr
Values represent maximum figures for Celerra File Server Cabinets only. Requirements for multi-enclosure configurations will vary.
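As a rough consistency check on the cabinet figures, 4,563 BTU/hr divided by 3.412 BTU/hr per watt is approximately 1,337 W, which agrees with the listed 1.34 kVA at a power factor close to 1.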
The Celerra File Server complies with the following agency standards:
- UL-950
- IEC 950/EN 60950
- CISPR 22 Class A/EN 55022
- CSA C22.2 No. 950
- FCC Subpart B
- IEC 801-2/EN 55024-2
B
Customer Support
This appendix reviews the EMC process for detecting and resolving software problems, and provides essential questions that you should answer before contacting the EMC Customer Support Center. This appendix covers the following topics:
- Overview of Detecting and Resolving Problems ......................... B-2
- Troubleshooting the Problem .......................................... B-3
- Before Calling the Customer Support Center ............................... B-3
- Documenting the Problem............................................................... B-4
- Reporting a New Problem ............................................................... B-4
- Sending Problem Documentation................................................... B-5
Contact the EMC Customer Support Center:
U.S.: (800) SVC-4EMC
Canada: (800) 543-4SVC
Worldwide: (508) 497-7901
Figure B-1  Overview of detecting and resolving problems
Please do not request a specific support representative unless one has already been assigned to your particular system problem.
When documenting the problem, include:
- Results from tests that you have run
- Other related system output
- Other information that may help solve the problem
You can send problem documentation by:
- E-mail
- FTP
- U.S. mail to the following address:

  EMC Customer Support Center
  45 South Street
  Hopkinton, MA 01748-9103

If the problem was assigned a number or a specific support representative, please include that information in the address as well.
C
GNU General Public License
This section contains the GNU General Public License (GPL). The GPL is the license for the Linux operating system. All EMC software, including the Celerra File Server software, is licensed by the EMC Software License included in the software kit.
- GNU General Public License........................................................... C-2
- Preamble............................................................................................. C-2
- Terms and Conditions for Copying, Distribution, and Modification....................................................................................... C-3
- NO WARRANTY .............................................................................. C-8
Preamble
The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.

Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.

The precise terms and conditions for copying, distribution and modification follow.
Terms and Conditions for Copying, Distribution, and Modification

0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.

1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.

You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.

2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:

a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.

b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.

c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.
In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.

3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:

a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.

If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.

6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.

It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Glossary
This glossary contains terms related to disk storage subsystems. Many of these terms are used in this manual.
A
Access-control List
Contains information about the users and groups that are allowed access to an object.

Active
Indicates the mode (user-defined state) of a physical I/O path. I/O is queued to an active path whenever available.

Adapter
Card that provides the physical interface between the Celerra File Server cabinet, the network, and disk devices (NIC or SCSI adapter).

Address Resolution Protocol (ARP)
Allows systems to query the network to identify a machine with a specific internet address.

Aggregation
See Ethernet Trunking.

Applications Program Interface (API)
Set of calling conventions that define how to invoke a service through a software package.

Arbitrated Loop
A Fibre Channel topology structured as a loop and requiring a port to successfully negotiate to establish a circuit between itself and another port on the loop.

Asynchronous Transfer Mode (ATM)
Known as a fast packet technology. ATM packetizes voice, video, and data, forming 53-byte frames which are put onto a high-speed data channel.
ATM Switch
ATM switching system for public telecommunications networks. Provides ATM multiplexing, Virtual Path switching, and Virtual Channel switching.

Attribute
Describes the condition of a power device, host bus adapter, or device path.

Authentication
Process for verifying the identity of a user who is trying to access a file or directory.

Automated Local Backup
Procedure in which a remote backup server running an NDMP-compliant backup tool manages the local backup of the Celerra File Server. The network transmits only control information between the remote backup server and the Data Mover. The backup copy is created on a tape library unit locally attached to the Data Mover. Backup data does not travel across the network.

Automated Network Backup
Procedure in which a remote backup server, running an NDMP-compliant backup tool, copies data from a Data Mover over the network to its local tape library as if the files were locally attached to the backup server. Backup data travels across the network. To perform automated network backup, the Data Mover must be NFS-enabled.

Availability
The accessibility of a computer system or network resource.
B
Backup and Restore
Technique that aims to ensure file system integrity and security by using a procedure that creates a copy (usually on tape) of a file system or incremental changes to a file system, from which that file system can later be restored.

Backup Domain Controller (BDC)
Domain controller that has the same database as the PDC and may replace the PDC if needed. The BDC provides some fault tolerance and load balancing for the NT domain.

Bandwidth
The maximum amount of data that can be transmitted through a data channel per unit of time. Usually expressed in megabytes per second.

See Ethernet Trunking.

Bits per second.

Bytes per second (B/S).

Bridge
A device that provides a connection between two or more LANs.

Browse Master
Provides the service that gathers and organizes the list of computers and domains displayed in Network Neighborhood. The Browse Master maintains an up-to-date list of network resources and provides this information to other computers on the network.

Browser
Program, usually graphically based, used to access information over the Internet or an intranet. Celerra File Server Manager uses either Netscape Navigator or Internet Explorer as its browser.

Buffer
Storage area used for handling data in transit. Buffers compensate for differences in processing speed between networks.

See Ethernet Trunking.

Business Continuance Volume (BCV)
Business Continuance Volumes are copies of active production volumes that can be used to run simultaneous tasks in parallel with one another. This gives customers the ability to do concurrent operations, such as data warehouse loads and refreshes or point-in-time backups, without affecting production systems.
C
Cache
Random access electronic storage used to retain frequently used data for faster access by the channel.

Cache Slot
Unit of cache equivalent to one track.

Celerra File Server Cabinet
Physical component of the Celerra File Server which houses the Data Movers and the Control Station. The Celerra File Server cabinet provides an interface between the storage system and the external network.

Celerra File Server Manager
Web-based GUI used to manage and administer the Celerra File Server.

Celerra Monitor
Java client/server application that lets you observe the performance of a Symmetrix system that is attached to a Celerra File Server and that of any Data Movers mounted in the Celerra File Server cabinet.
Channel
Path that allows for the rapid transfer of data between a device and storage.

Channel Director
Component in the Symmetrix system that interfaces between the host channels and data storage. It transfers data between the channel and cache.

Client
Front-end device that requests services from a server, often across a network.

Common Internet File System (CIFS)
File system that uses the Server Message Block (SMB) protocol to provide secure file access and transfer to a multitude of hosts such as LANs, intranets, and the Internet. CIFS separates naming conventions tied into SMB and allows use of any chosen standard (e.g., Domain Name Service or DNS). CIFS complements existing file access protocols such as HTTP, FTP, and NFS.

Connectivity
The ability of hardware devices or software to communicate with other hardware or software.

Control Station
A hardware and software component of the Celerra File Server that provides the controlling subsystem to the Data Movers, as well as the software interface to all server components. The Control Station is used to install, configure, and monitor Celerra File Server components. Resides in slot_0 of the Celerra File Server cabinet.
D
Data Access Real Time (DART)
Operating system software that runs on the Data Mover. It is a real-time, multi-threaded operating system optimized for file access, while providing service for standard protocols.

Data Availability
Access to any and all user data by the application.

Data Mover
Celerra File Server cabinet component running software that retrieves files from a storage device and exports the files to a network client.

DES
Commonly used algorithm for encrypting and decrypting data.

Device Name
Name given to the physical device of a network adapter.
Delayed Fast Write
No existence of room in cache for the data presented by the write operation.

Destination
Hostname or IP address of the machine to which you are routing.

Diagnostics
System-level tests or firmware designed to inspect, detect, and correct failing components. These tests are comprehensive and self-invoking.

Disk
Most commonly known as a magnetic disk device.

Director
Component in the Symmetrix system that allows it to transfer data between the host channels and disk devices. See also Channel Director and Disk Director.

Disaster Recovery
Preventative measures using redundant hardware, software, data centers, and other facilities to ensure that a business can continue operations during a natural or man-made disaster and, if not, to restore business operations as quickly as possible when the calamity has passed.

Allows specific users and groups to perform a specified action.

Disk Array Enclosure (DAE)
A storage device on the FC4700-2 that includes an enclosure, up to 10 disk modules, one or two Fibre Channel Link Control cards (LCC), and one or two power supplies.

Disk Director
Component in the Symmetrix system that interfaces between cache and the disk devices.

Disk Processor Enclosure (DPE)
A storage device on the FC4700-2 that includes an enclosure, up to 10 disk modules, two Storage Processors (SP), two Fibre Channel Link Control cards (LCC), and two power supplies. A DPE can support up to 11 DAEs (each with up to 10 disk modules) in addition to its own 10 disk modules, for a total of 120 disk modules.

Domain
Represents a group of machines by a given name that is defined by the internet community.

Domain Name Service (DNS)
Option allowing name resolution to be conducted within a system.
Dual-Initiator
Symmetrix system feature that automatically creates a backup data path to the disk devices serviced directly by a disk director, if that disk director or the disk management hardware for those devices fails.

Dynamic Sparing
Symmetrix system feature that automatically transfers data from a failing disk device to an available spare disk device without affecting data availability. This feature supports all non-mirrored devices in the Symmetrix subsystem.
E
EMCNAS
EMC Network Access Storage. Installation package that installs the Celerra File Server only on the primary Control Station.

EMCNASSBY
Installation package that loads various software components on the optional (standby) Control Station. Install this package only after loading emcsys and emcnas.

EMCSYS
EMC System. Installation package that loads certain Celerra File Server utilities on both the primary and the optional (standby) Control Station. Install this package before emcnas and nassby.

Enterprise Storage
A combination of intelligent storage systems, software, and services. Together, these products and services enable an enterprise to store, retrieve, manage, protect, and share information from all major computing environments, including UNIX, Windows NT, Windows 2000, and mainframe platforms.

EtherChannel
See Ethernet Trunking.

Ethernet
LAN technology that transfers packets ranging from 48 to 1500 bytes. Ethernet uses a media access method that listens to the wire before transmitting in order to minimize packet collisions. Supported cable media and interface connectors include: 10Base-2 thin wire coaxial cable with BNC interface (up to 200 m); 10Base-5 thick wire coaxial cable (up to 500 m); 10Base-T hub (star) topology with twisted-pair drop cables and RJ45 interface; and 10Base-F hub topology with optical fiber drop cables.

Ethernet Trunking
Data transmission methodology that combines or aggregates up to eight full-duplex, point-to-point communication links into a bundle. Each bundle appears as a single link with only one IP address for the entire bundle.

Extended Industry Standard Architecture (EISA)
Bus that is 32 bits wide with a transfer rate of 40 MB/s. It is backward compatible to support ISA devices. Also known as Enhanced Integrated System Architecture.
F
Fabric
A Fibre Channel topology structured with one or more switching devices that interconnect Fibre Channel N_Ports and route Fibre Channel frames.

Failover
Data is immediately and nondisruptively routed to an alternate data path or device in the event of a failure of an adapter, cable, channel controller, or other device.

Fast Ethernet
Can be referred to as 100Base-T. A 100 Mb/s version of 10Base-T Ethernet that uses the same media access as Ethernet. Fast Ethernet provides a nondisruptive, smooth evolution from current 10Base-T Ethernet to high-speed 100 Mb/s.

Fast Write
In the Symmetrix system, a write operation at cache speed that does not require immediate transfer of data to disk. The data is written directly to cache and is available for later destaging.

Fibre Channel
Fibre Channel is nominally a one-gigabit-per-second data transfer interface technology, although the specification allows data transfer rates from 133 megabits per second up to 4.25 gigabits per second. Data can be transmitted and received at one gigabit per second simultaneously. Common transport protocols, such as Internet Protocol (IP) and Small Computer System Interface (SCSI), run over Fibre Channel. Consequently, high-speed I/O and networking can stem from a single connectivity technology.

Fibre Channel Arbitrated Loop (FC-AL)
A standard for a shared access loop, in which a number of Fibre Channel devices are connected (as opposed to point-to-point transmissions). Celerra does not support arbitrated loop.

Fiber Distributed Data Interface (FDDI)
High-speed (100 Mb/s) networking standard for LANs or Metropolitan Area Networks (MANs). The underlying medium is fiber optics and the topology is a dual-attached, counter-rotating Token Ring. FDDI connections can be identified by the orange fiber cable.

Field Replaceable Unit (FRU)
Component that is replaced or added by service personnel as a single entity.

File System
A file system is composed of the files and directories on each individual disk partition.

File Transfer Protocol (FTP)
High-level protocol for transferring files from one machine to another. Implemented as an application-level program (based on the OSI Model), FTP uses Telnet and TCP protocols.

Full Backup
Copies all the scanned client files, independent of the time of their last backup or their location, to the backup server.
G
Gatekeeper
Communication channel between the Celerra File Server and the storage system disk array.

Gateway
Device through which computers may connect. Host or IP address of the gateway machine through which you are routing.

Gigabit Ethernet
Transmission standard that provides a data rate of 1 billion b/s (one gigabit). Gigabit Ethernet is defined in the IEEE 802.3ab standard.

Gigabyte (GB)
10^9 bytes, defined as 2 to the 30th power (1,073,741,824) bytes. One gigabyte is equal to 1,024 megabytes. Abbreviated as G or GB.

Graphical User Interface (GUI)
Interface used to administer commands and execute functions.
H
Host
Addressable end node capable of transmitting and receiving data.

Hub
A Fibre Channel device used to connect several devices (such as computer servers and storage systems) into a Fibre Channel Arbitrated Loop (FC-AL).
I
Incremental Backup
A backup method that copies only those client files that have changed since the previous backup of any level.

Inode
Type of data structure that describes a file system; notably, the maximum number of files that a system can contain.

Integrated Cached Disk Array (ICDA)
Specialized disk subsystem that uses large cache sizes to decrease the amount of time the CPU must wait for an I/O request to be processed.

Internet Control Message Protocol (ICMP)
Communications protocol that reports errors in datagram processing between networked nodes. Part of the Internet (IP) suite of protocols.

Internet Protocol (IP)
Suite of network protocols that offer connectionless-mode network service.

I/O Device
Addressable input/output unit, such as a disk device.
K
Kernel
Configuration that is read when the computer is powered up. It is responsible for interacting with the hardware of the computer. The kernel manages memory, controls user access, maintains file systems, handles interrupts and errors, performs input and output services, and allocates the resources of the computer.

Kilobyte (K)
1024 bytes.
L
Link
A connection between two ports.

Load Balancing
Distributes the I/O workload across all paths. Static load balancing assigns different devices to different physical paths, so that all paths are used for one or more devices. Dynamic load balancing distributes the workload over all the paths that the devices share, and makes all paths equally burdened from moment to moment.

Local Area Network (LAN)
Minimum of two network nodes communicating through a physical medium over a distance of less than 3 kilometers.

Local Backup
Celerra File Server backup procedure that creates a backup copy of a file system or incremental changes to a file system on a tape library unit that is locally attached to a Data Mover. See also Manual Local Backup and Automated Local Backup.

Logical Device
One or more physical devices or partitions managed by the storage controller as a single logical entity. Logical devices aggregated and managed at a higher level by a volume manager are referenced as logical volumes rather than logical devices.

Logical Unit Number (LUN)
Last part of a SCSI address. LUNs are numbered 0 through 7.
M
Management Information Base (MIB)
Database controlled by SNMP. The MIB holds information about all resources managed by a network management system.

Manual Local Backup
Non-automated procedure in which an operator enters commands manually or using a script. During the backup operation, single Data Movers or multiple Data Movers in parallel are backed up (fully or incrementally) to one or more locally attached tape drives. Each Data Mover requires its own tape drive, attached to one of the Data Mover's SCSI ports. Data goes directly to the Celerra File Server-attached tape drive and does not traverse the network. Also called simple local backup.

Media
Any of a variety of physical devices, such as the disk surface on which data is stored, the physical cable connecting nodes to form a network, etc. (Medium is the singular form.)

Media Access Control (MAC)
The media-specific access control protocol within IEEE 802 specifications.

Megabyte (MB)
10^6 bytes.

Metadata
Data containing structural information (such as access methods) about itself.

Meta Drive
Group of disk partitions accessed as a single partition. This is made possible by concatenating or striping the physical devices.

Meta Volume
Concatenation of volumes, which can consist of disk, slice, or stripe volumes.

Metropolitan Area Network (MAN)
Network operating over an area of at least 50 kilometers at approximately 100 Mb/s.

Mirror
Logical volume with all data recorded twice, once on each of two different physical devices.

Mirroring
Method by which the Symmetrix system maintains two identical copies of a designated volume on separate disks. Each volume is automatically updated during a write operation. If one disk device fails, the Symmetrix system automatically uses the other disk device.

Mount
In combination with NFS, mount attaches to a subdirectory of a remote system over a dummy directory on the local machine. This protocol allows clients to mount or unmount file systems for access through NFS. Mount is accessible over UDP or TCP.

Mount v2/v3
Protocol that allows clients to mount or unmount file systems for access through NFS. Mount is accessible over UDP or TCP.
N
NDMP
See Network Data Management Protocol.

NetBIOS
Network Basic Input/Output System. A network programming interface and protocol developed for IBM personal computers.

Network
Combination of devices, cabling, and software that make up a communication infrastructure.

Network Backup
See Remote Backup.

Network Data Management Protocol (NDMP)
A network protocol designed for the backup and retrieval of data. It is an open standard protocol for enterprise-wide backup of heterogeneous network-attached storage. Version 1 supports locally attached tape backup; version 2 supports network-attached tape backup.

Network communication service that uses NDMP and contains its own backup and retrieval utilities.

Network File System (NFS)
A distributed file system that provides transparent access to remote disks. NFS allows all systems on the network to share a single copy of the directory (the alternative involves duplicating common directories on every system). Web NFS enables this same functionality to occur over the Internet.

Network Information Service (NIS)
Service whose primary purpose in SNFS is to convert hostnames to IP addresses or IP addresses to hostnames.

Network Interface Card (NIC)
Insertable circuit board that provides network communication capabilities to and from a computer system.

Network Lock Manager (NLM)
Allows file and byte-range locking by clients.

nsap
Network source address point.

Network Status Monitor (NSM)
Protocol that allows clients and servers to monitor each other's status and be aware of reboots.
P
Physical Volume
Addressable disk on the SCSI bus.

Port
On a computer, it is a physical connecting point to which a device is attached.

Portmapper v2
Accessible through UDP and TCP. Allows clients access to the services registered on the Data Mover.

Primary Domain Controller (PDC)
Master domain controller that processes all the users and groups that are connected to a domain.

Process of moving data from a track on the disk device to a cache slot.

Protocol
Standard defined between the client and the user that determines how information is transferred and interpreted over a network.
R
RAID (Redundant Array of Independent Disks)
Data is stored on multiple magnetic or optical disk drives to increase output performance and storage capacities and to provide varying degrees of redundancy and fault tolerance. Instead of storing valuable data on a single hard disk that could fail at any time, RAID makes sure a backup copy of all information always exists by spreading data among multiple hard disks.

Read Hit
Data requested by the read operation is in cache.

Read Miss
Data requested by the read operation is not in cache.

Redundant
Backup arrays, drives, disks, or power supplies that duplicate functions performed elsewhere.

Remote Backup
Celerra File Server backup procedure that is initiated from a remote workstation. This procedure creates a backup copy of a file system or incremental changes to a file system on a tape library unit that may be attached to a server other than the originating Data Mover. Depending on the type of remote backup, the data may or may not traverse the network. See also Automated Local Backup and Restore and Automated Network Backup and Restore.

Router
Device that transfers information between networks, and determines the most efficient route for it to follow.

Allows a server to query for the best route to reach an internet address.
S
Scrubbing
Process of reading, checking the error correction bits, and writing corrected data back to the source.

SCSI
Small Computer System Interface. The standard set of protocols for host computers communicating with attached peripherals. SCSI allows connection to as many as six peripherals, including printers, scanners, hard drives, zip drives, and CD-ROM drives.

SCSI Adapter
Card in the Symmetrix subsystem that provides the physical interface between the disk director and the disk devices.

SCSI Bus
A parallel bus that carries data and control signals from SCSI devices to a SCSI controller.

Authenticates users to use resources on the network.

Security Descriptor
Descriptor, associated with a file, that includes the owner and the group SID for the ACL.

Security Identifier (SID)
Unique identifier that defines a user or group on Windows NT. Each user or group has its own SID.

Server
Back-end device that handles requests made by hosts connected through a network.

Server Message Block (SMB)
Protocol used by CIFS that has been enhanced for use on the Internet to request file, print, and communication services from a Data Mover over the network. CIFS uses SMB to provide secure file access and transfer to many types of hosts such as LANs, intranets, and the Internet. The SMB protocol is an open, cross-platform protocol for distributed file sharing, and it is supported by Windows 95, Windows 98, and Windows NT.

Short Miss
Data requested is not in cache, but is in the process of being fetched.

Simple Local Backup
See Manual Local Backup.

Simple Network Management Protocol (SNMP)
An application protocol developed in the mid-1980s to manage network communications in the Internet Protocol suite. SNMP controls the MIB database. It is most commonly employed using TCP/IP protocols.

Slice Volume
Logical piece or specified area of a volume used to create smaller, more manageable units of storage.

Standby
User-defined state that indicates the mode of a physical I/O path. A standby path is held in reserve against failure. No I/O is sent over a standby path while the power device can access an available active path.

Storage Device
Physical device that can attach to a SCSI device, which in turn connects to the SCSI bus.

Storage Processor (SP)
A printed-circuit board with processor memory modules and control logic that manages the FC4700-2 storage system I/O between the server Fibre Channel adapter and the disk modules.

String
Series of connected disk devices sharing the same disk director. Also a contiguous series of alphanumeric characters.

Stripe Volume
Arrangement of volumes that appear as a single volume. Allows for stripe units, which cut across the volume and are addressed in an interlaced manner. Stripe volumes make load balancing possible.

Switch
A network device that selects a path or circuit for sending data between destinations. Also, a Fibre Channel device used to connect devices (e.g., computer servers and storage systems) into a Fibre Channel fabric.
T
Tape Library Unit
Physical device that contains and manages multiple magnetic tape units accessible as a unit.

Target
Middle part of a SCSI address. Target numbers are assigned from 0 through 7.

Telnet
As the Internet standard protocol for remote terminal connection, Telnet allows a user at one site to interact with a remote device or system that expects terminal-mode traffic.

Terabyte (TB)
2 to the 40th power (1,099,511,627,776) bytes, or approximately 1 trillion bytes. Measured as TB.

Thread
Sequential flow of control. A thread consists of address space, a stack, local variables, and global variables.

Throughput
In computers, it is a measurement of the amount of work that can be processed within a set time period. In networking, it is a measurement of the amount of data that can be successfully transferred within a set time period.

Transmission Control Protocol (TCP)
Transport protocol that provides connection-oriented transport services in the Internet suite of protocols. TCP/IP is used in network communications routing and data transfer, and it is the accepted standard for UNIX-based operating systems and the Internet.
U
UNIX File System (UFS)
Standard UNIX file system.

UxFS
High-performance, Celerra File Server default file system, based on traditional Berkeley UFS, enhanced with 64-bit support, metadata logging for high availability, and several performance enhancements.
V
Volume
A virtual disk into which a file system, database management system, or other application places data. A volume can be a single disk partition or multiple partitions on one or more physical drives.

Volume Management
Capability of optimizing disk storage with features that provide the greatest accessibility, capacity, and reliability for the client.
W
Wide Area Network (WAN)
Private or public network that covers a wide geographical area.

Windows Internet Name Service (WINS)
A name resolution system that determines the IP address associated with a particular network computer. WINS provides the mapping between the machine name and the Internet address, allowing Microsoft networking to function over TCP/IP networks.

Workgroup
Grouping of computers sharing a common security scheme.

Write Hit
Existence of room in cache for the data presented by the write operation.

Write Miss
Lack of room in cache for the data presented by the write operation.
Z
Zones
Several devices are grouped by function or by location. All devices connected to a connectivity product may include configuration of one or more zones. Devices in the same zone can see each other; devices in different zones cannot.
Index
A
Access control level: creating accounts 9-10; defining privileges 9-10; nas_acl 9-10; local 3-5; remote 3-5; restricting 6-6; rights enabling 6-14
Access Control List (ACL) 9-9
ACL 9-9: access control values 9-11
ACL Table: creating 9-10; example 9-13
Adapter, Fibre Channel 1-11, 1-19, 1-22
Addressing within a meta volume 5-10
Advisory lock 6-6
ANSI Fiber Channel Class 3 service 1-21
Architecture: hardware 1-8; independent Data Mover/Control Station 1-5
Assigning an IP address 4-2
Authentication: NFS user 6-3; PC clients NFS 6-11; using DNS 4-7
Automount map: creating 8-2
B
Benefits, Celerra File Server 1-5
Business continuance volumes (BCVs), configuration 5-13
C
Call Home support 1-20
CallHome 1-26
Call-In support 1-20
Capacity: checking file system 8-6; Data Mover 8-6; volume 9-5
Capacity (fan-out) topology 1-23
CD-ROM drive 1-20
Celerra cabinet: emergency shutdown 3-8; planned power down 3-6; powering up 3-2
Celerra File Server: data availability 1-5; display panel 1-11; environment 1-27; expanding storage capacity 5-11; features and benefits 1-5; Fibre Channel switches 1-25; functionality 1-11; hardware 1-15; hardware architecture 1-11; high availability 1-11; managing 1-12; network and storage requirements 2-1; number of Data Movers 1-6; operations 1-11; overview 1-2; performance and capacity 1-4; protocols 1-28; self-diagnostic/self-reporting capability 1-6; software components 1-26
Celerra File Server cabinet 1-8: internal view 1-15; single cabinet model 1-8; two-cabinet models 1-8
Celerra File Server interface 1-11
Celerra File Server Manager: GUI 1-29; requirements 1-30
Celerra Monitor 1-29: minimum configuration 1-31; software 1-12
CIFS 1-7, 1-27, 1-28: compatibility with NFS 1-7; configuring clients 1-36, 6-2; deleting service 9-2
Class 3, switched fabric 1-25
COMM board 1-20
Command Line Interface: character parameters 3-4; local access 3-4; logging in 3-5; remote access 3-4
Common Internet File System: See CIFS
Compatibility, NFS/CIFS 1-7
Component repair, nondisruptive 1-6
Configuring: standby Data Movers 7-2; time services 4-9
Connectivity, Fibre Channel 1-25
Console Multiplexer 1-20
Consolidation (fan-in) topology 1-23
Control Station: dual 1-5; dual configuration 7-15; enabling daemons 10-3; front view 1-16; halting locally 10-7; nas database 10-1; primary 10-5, 10-7; primary and secondary 1-15; rebooting locally 10-5; remotely, halting 10-8, rebooting 10-6; secondary 10-5, 10-7; slots (component positions) 1-17; software 1-26
Creating: automount map 8-2; file system 5-15; stripe volume 5-7; volume configurations 5-2
Customer support B-3
Customer Support Center B-5
D
Daemon: Control Station 10-3; httpd 10-4; nas 10-4; ntp 10-3
Data Access in Real Time (DART) software 1-26
Data availability 2-4
Data Mover: characteristics 1-17; checking free space 8-6; checking log files 11-11; checking system capacity 8-6; configuring for standby operations 7-2; creating a mount point 5-16; current log file 11-11; determining the number required 2-4; error messages 11-7; failover 2-4; free space 8-6; front view 1-18; halting 9-4; mapping volumes to 2-5; operations 1-11; reboot 9-4; restoring from standby 7-12; slots (component positions) 1-19; software 1-26; standby 2-4; troubleshooting 11-7; typical configuration 2-6
Database: Control Station 10-1
Deleting: CIFS service 9-2; disk volume 9-7; file system 8-14; meta volume 9-7; slice volume 9-7; stripe volume 9-7
Dial-in support 1-26
Discovering SCSI devices 9-2
Disk volume: deleting 9-7; renaming 9-6
Displaying mounted file systems 8-4
DNS: authentication 4-7; server 4-7
Dual Control Station 1-5, 1-26, 7-15
E
E_Port, Fibre Channel 1-22
Emergency shutdown 3-8
Enabling access rights 6-14
EPO Box 1-19, 3-2
Error messages: Data Mover 11-7; file system 11-5; post-install 11-2; server_export 6-9; volume 11-4
Ethernet: fast 1-6; gigabit 1-6
Ethernet channel 1-19
Export options, NFS 6-8
Extending: file system 8-7; meta volume 9-5
F
F_Port, Fibre Channel 1-22
Failover: example 7-5; policy types (table) 7-8
Fan-in topology 1-23
Fast Ethernet 1-6, 4-2
Features, Celerra File Server 1-5
Fibre Channel 1-21: adapter 1-11, 1-15, 1-19, 1-22; connectivity 1-25; on Celerra File Server 1-22; port types 1-22; standards 1-22; supported switches 1-25; switched environment 1-22; switched fabric 1-22, 1-24; topology 1-22; tutorial (url) 1-25; zones 1-24
Fibre Channel Director (FA) N_port 1-23
File locking, NFS 6-6
File system: capacity 2-3; checking capacity 8-6; creating 5-15; deleting 8-14; displaying mounted 8-4; error messages 11-5; extending 8-7; inodes 8-6; mirroring 5-13; mounting 6-6; permanent mount 6-7; providing user access rights 6-11; read-only option 6-6; read-write option 6-6; renaming 8-9; troubleshooting 11-5; unexporting 6-14; unmount all 8-5
File Transfer Protocol: See FTP
Flat panel display 1-11
Free space, checking Data Mover 8-6
FTP: over TCP 1-12; use with Celerra File Server 1-28
G

G_Port, Fibre Channel 1-22
GIDs, 32-bit: creating 8-14; overview 8-12; restrictions 8-13; setting 8-14
Gigabit Ethernet 1-6

H

Halting: Control Station, locally 10-7, remotely 10-8; Data Mover 9-4
Hardware architecture 1-8: multi-cabinet enclosure 1-10; single enclosure 1-8
High availability and Data Movers 1-11
httpd daemons 10-4

I

I/O access through Data Movers 2-5
Independent Data Mover/Control Station architecture 1-5
Initially 4-2
Inodes 8-6
Installation: configuring Data Movers 2-4; configuring standby Data Movers 2-4; mapping volumes to Data Movers 2-5; storage requirements 2-3
IP address, internal Ethernet NIC 1-17, 1-19

L

Linux operating system 1-26
Local access: Command Line Interface 3-4; logging in 3-5
Lock, advisory (NFS) 6-6
locking, file 6-6
Logging in 3-5
Logical paths 1-22

M

Managing: Celerra File Server 1-12; system 9-2
Mapping volumes to file systems 2-5
Meta volume: configuring 5-9; deleting 9-7; extending 9-5; renaming 9-6
Mirrored file system 5-13
Modem connections 1-20
Monitoring network statistics 11-12
Monitoring server operations with SNMP MIB-II 1-12
Mount point, creating 5-16
Mounting a file system path 6-6

N

N_Port, Fibre Channel 1-22
nas Daemon 10-4
nas database: backup 10-1; Control Station 10-1
nas_acl 9-10
Network administration: monitoring statistics 11-12; interfaces, assigning an IP address 4-2
Network and storage requirements 2-1
Network file sharing protocols 1-27
Network File System: See NFS
Network topologies, capacity 2-4
NFS 1-7, 1-28: access for PC clients 6-11; and CIFS compatibility 1-4, 1-7; configuration overview 6-3; configuring clients 1-36, 6-2; environment 6-3; export options 6-8; protocol 6-3; standards supported 1-12; typical configuration 6-4; user authentication 6-3
NIC: characteristics 4-2; types 4-2
Nondisruptive component repair, Celerra 1-6
NTP daemon 10-3

P

Parameters: file format 9-15; server 9-19; system 9-15
PC clients, providing NFS access to 6-11
Planned power down 3-6
Power supplies 1-19
Primary Control Station 1-15, 10-5, 10-7
Privileges: access control levels 9-10; nas_acl 9-10
Protocol, NFS 6-3

R

RAID storage usage 2-3
Reason codes, Control Station 10-5, 10-7
Reboot/recovery, Celerra File Server 1-5
Rebooting: Control Station, locally 10-5, remotely 10-6; Data Movers 9-4
Recommended fan-in ratio 1-24
Redundant components 1-6
Redundant connections, Fibre Channel 1-25
Remote access, Command Line Interface 3-4
Renaming: disk volume 9-6; file system 8-9; meta volume 9-6; slice volume 9-6; stripe volume 9-6
Restoring from standby 7-12
Rules and restrictions, standby Data Movers 7-2

S

SCSI connection to Data Movers 1-11, 1-12
Secondary Control Station 1-15, 10-5, 10-7
Self-diagnostic/self-reporting capability 1-6
Server connections to the network 1-6
Server parameters 9-19
Single adapter, capacity topology 1-23
Slice volume 5-3: deleting 9-7; renaming 9-6
SNMP MIB-II, monitoring server operations 1-12
Software components 1-26
Standby, restoring 7-12
Standby Data Mover 2-4: failover policies 7-8; feature 1-5; rules and restrictions 7-2
Storage needs, calculating 2-3
Storage protection: FC4700-2 2-3; Symmetrix 2-3
Stripe volume 5-5: creating 5-7; deleting 9-7; renaming 9-6
Switched fabric: configuring with Celerra 1-25; Fibre Channel 1-22, 1-24
System management 9-2
System parameters 9-15

T

Technical support B-3
Time services, configuring 4-9
Topology: Fibre channel 1-22; physical and logical 1-22
Troubleshooting: CIFS 11-4; Data Mover 11-7; file system 11-5; volume 11-4
U
Unexport limitations 6-14
Unexporting file systems 6-14
Unmounting all file systems 8-5
User access rights, file system 6-11
UxFS 1-27, 1-28
V
Volume: capacity, checking 9-5; configurations, creating 5-2; error messages 11-4; management, creating a stripe volume 5-7; troubleshooting 11-4
Volumes: BCV configuration 5-13; expanding 5-11; meta volume addressing 5-10; meta volume configuration 5-9; slice volume configuration 5-3, 5-6; stripe volume performance 5-5; types (table) 5-2
Z
Zones, Fibre Channel 1-24