Dell EMC Data Domain® Operating System
Version 6.2
Administration Guide
302-005-407
REV. 04
March 2020
Copyright © 2010-2020 Dell Inc. or its subsidiaries. All rights reserved.
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property
of their respective owners. Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.DellEMC.com
As part of an effort to improve its product lines, Data Domain periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not be
supported by all versions of the software or hardware currently in use. The product release notes
provide the most up-to-date information on product features, software updates, software
compatibility guides, and information about Data Domain products, licensing, and service.
Contact your technical support professional if a product does not function properly or does not
function as described in this document.
Note: This document was accurate at publication time. Go to Online Support (https://support.emc.com) to ensure that you are using the latest version of this document.
Purpose
This guide explains how to manage Data Domain® systems with an emphasis on procedures
using the Data Domain System Manager (DD System Manager), a browser-based graphical user
interface (GUI). If an important administrative task is not supported in DD System Manager, the
Command Line Interface (CLI) commands are described.
Note:
l DD System Manager was formerly known as the Enterprise Manager.
l In some cases, a CLI command may offer more options than those offered by the
corresponding DD System Manager feature. See the Data Domain Operating System
Command Reference Guide for a complete description of a command and its options.
Audience
This guide is for system administrators who are familiar with standard backup software packages
and general backup administration.
Related documentation
The following Data Domain system documents provide additional information:
l Installation and setup guide for your system, for example, Data Domain DD9300 System
Installation Guide
l Data Domain Hardware Features and Specifications Guide
l Data Domain Operating System USB Installation Guide
l Data Domain Operating System DVD Installation Guide
l Data Domain Operating System Release Notes
l Data Domain Operating System Initial Configuration Guide
l Data Domain Security Configuration Guide
l Data Domain Operating System High Availability White Paper
l Data Domain Operating System Command Reference Guide
l Data Domain Operating System MIB Quick Reference
l Data Domain Operating System Offline Diagnostics Suite User's Guide
l Field replacement guides for your system components, for example, Field Replacement Guide,
Data Domain DD4200, DD4500, and DD7200 Systems, IO Module and Management Module
Replacement or Upgrade
Note: A note identifies information that is incidental, but not essential, to the topic. Notes can
provide an explanation, a comment, reinforcement of a point in the text, or just a related point.
Typographical conventions
Data Domain uses the following type style conventions in this document:
Table 1 Typography
Monospace italic Highlights a variable name that must be replaced with a variable value
Monospace bold Indicates text for user input
Product information
For documentation, release notes, software updates, or information about Data Domain
products, go to Online Support at https://support.emc.com.
Technical support
Go to Online Support and click Service Center. You will see several options for contacting
Technical Support. Note that to open a service request, you must have a valid support
agreement. Contact your sales representative for details about obtaining a valid support
agreement or with questions about your account.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of
the user publications. Send your opinions of this document to: [email protected].
This chapter includes the following topics:
l Revision history
l Data Domain system overview
l Data Domain system features
l Storage environment integration
Revision history
The revision history lists the major changes to this document to support DD OS Release 6.2.
04 (6.2.0), March 2020: This revision includes the following corrections and clarifications:
l Removed an unsupported US location for configuring a cloud unit for Google.
l Added the CLI steps to register the system with an ESRS gateway.
l Added additional information about snapshot retention after breaking an MTree replication context.
l Added additional information about licensing requirements for storage migration.
03 (6.2.0), April 2019: This revision includes information about the Automatic Retention Lock feature.
02 (6.2.0), February 2019: This revision includes information about the Single Sign-On (SSO) feature, and corrections to the stream counts for DD2200 systems with 8 GB of memory.
01 (6.2.0), December 2018: This revision includes information about these new features:
l Configuring mail server credentials as part of the DD SM Configuration Wizard.
l DD3300 8 TB to 16 TB capacity expansion.
l Secure LDAP authentication.
l Active Directory connection diagnosis tool.
l Saving coredump files to a USB drive.
l SMB Change Notify.
l Trusted Domain offline access.
l DD Cloud Tier support for Alibaba and Google Cloud Platform cloud providers.
Data Domain system overview
Systems consist of appliances that vary in storage capacity and data throughput. Systems are
typically configured with expansion enclosures that add storage space.
Data integrity
The DD OS Data Invulnerability Architecture™ protects against data loss from hardware and
software failures.
l When writing to disk, the DD OS creates and stores checksums and self-describing metadata
for all data received. After writing the data to disk, the DD OS then recomputes and verifies
the checksums and metadata.
l An append-only write policy guards against overwriting valid data.
l After a backup completes, a validation process examines what was written to disk and verifies
that all file segments are logically correct within the file system and that the data is identical
before and after writing to disk.
l In the background, the online verify operation continuously checks that data on the disks is
correct and unchanged since the earlier validation process.
l Storage in most Data Domain systems is set up in a double parity RAID 6 configuration (two
parity drives). Additionally, most configurations include a hot spare in each enclosure, except
the DD1xx series systems, which use eight disks. Each parity stripe uses block checksums to
ensure that data is correct. Checksums are constantly used during the online verify operation
and while data is read from the Data Domain system. With double parity, the system can fix
simultaneous errors on as many as two disks.
l To keep data synchronized during a hardware or power failure, the Data Domain system uses
NVRAM (non-volatile RAM) to track outstanding I/O operations. An NVRAM card with fully
charged batteries (the typical state) can retain data for a period of hours, which is determined
by the hardware in use.
l When reading data back on a restore operation, the DD OS uses multiple layers of consistency
checks to verify that restored data is correct.
l When writing to SSD cache, the DD OS:
n Creates an SL checksum for every record stored in the cache to detect corruption to cache
data. This checksum is validated for every cache read.
n Treats corruption of cache data as a cache miss so that it does not result in data loss. Cache clients therefore must not store the only copy of the latest data in the cache; the data must also be protected by some other mechanism, such as NVRAM or HDD.
n Removes the need for inline verification of cache writes, as cache clients can detect and
handle misdirected or lost writes. This also saves I/O bandwidth.
n Removes the need for file system scrubbing of the SSDs, as the data in the cache changes frequently and is already scrubbed by the SAS Background Media Scan (BMS).
Data deduplication
DD OS data deduplication identifies redundant data during each backup and stores unique data just
once.
The storage of unique data is invisible to backup software and independent of data format. Data
can be structured, such as databases, or unstructured, such as text files. Data can derive from file
systems or from raw volumes.
Typical deduplication ratios are 20-to-1, on average, over many weeks. This ratio assumes there
are weekly full backups and daily incremental backups. A backup that includes many duplicate or
similar files (files copied several times with minor changes) benefits the most from deduplication.
Depending on backup volume, size, retention period, and rate of change, the amount of
deduplication can vary. The best deduplication happens with backup volume sizes of at least
10 MiB (MiB is the base 2 equivalent of MB).
To take full advantage of multiple Data Domain systems, a site with more than one Data Domain system must consistently back up the same client system or set of data to the same Data Domain system. For example, if a full backup of all sales data goes to Data Domain system A, maximum deduplication is achieved when the incremental backups and future full backups for sales data also go to Data Domain system A.
Restore operations
File restore operations create little or no contention with backup or other restore operations.
When backing up to disks on a Data Domain system, incremental backups are always reliable and
can be easily accessed. With tape backups, a restore operation may rely on multiple tapes holding
incremental backups. Also, the more incremental backups a site stores on multiple tapes, the more
time-consuming and risky the restore process. One bad tape can kill the restore.
Using a Data Domain system, you can perform full backups more frequently without the penalty of
storing redundant data. Unlike tape drive backups, multiple processes can access a Data Domain
system simultaneously. A Data Domain system allows your site to offer safe, user-driven, single-file
restore operations.
High Availability
The High Availability (HA) feature lets you configure two Data Domain systems as an Active-
Standby pair, providing redundancy in the event of a system failure. HA keeps the active and
standby systems in sync, so that if the active node fails due to hardware or software issues, the standby node can take over services and continue where the failing node left off.
The HA feature:
l Supports failover of backup, restore, replication and management services in a two-node
system. Automatic failover requires no user intervention.
l Provides a fully redundant design with no single point of failure within the system when
configured as recommended.
l Provides an Active-Standby system with no loss of performance on failover.
l Provides failover within 10 minutes for most operations. CIFS, DD VTL, and NDMP must be
restarted manually.
Note: Recovery of DD Boost applications may take longer than 10 minutes, because Boost
application recovery cannot begin until the DD server failover is complete. In addition,
Boost application recovery cannot start until the application invokes the Boost library.
Similarly, NFS may require additional time to recover.
l Supports ease of management and configuration through DD OS CLIs.
l Provides alerts for malfunctioning hardware.
l Preserves single-node performance and scalability within an HA configuration in both normal
and degraded mode.
l Supports the same feature set as stand-alone DD systems.
Note: DD Extended Retention and vDisk are not supported.
l Supports systems with all SAS drives. This includes legacy systems upgraded to systems with
all SAS drives.
Note: The Hardware Overview and Installation Guides for the Data Domain systems that support HA describe how to install a new HA system. The Data Domain Single Node to HA Upgrade describes how to upgrade an existing system to an HA pair.
l Does not impact the ability to scale the product.
l Supports nondisruptive software updates.
HA is supported on the following Data Domain systems:
l DD6800
l DD9300
l DD9500
l DD9800
HA architecture
HA functionality is available for both IP and FC connections. Both nodes must have access to the
same IP networks, FC SANs, and hosts in order to achieve high availability for the environment.
Over IP networks, HA uses a floating IP address to provide data access to the Data Domain HA pair
regardless of which physical node is the active node.
Over FC SANs, HA uses NPIV to move the FC WWNs between nodes, allowing the FC initiators to
re-establish connections after a failover.
Figure 1 on page 24 shows the HA architecture.
Figure 1 HA architecture
DD System Manager enables you to make configuration changes after initial configuration, display system and component status, and generate reports and charts.
Note: Some systems support access using a keyboard and monitor attached directly to the
system.
Licensed features
Feature licenses allow you to purchase only those features you intend to use. Some examples of
features that require licenses are DD Extended Retention, DD Boost, and storage capacity
increases.
Consult with your sales representative for information on purchasing licensed features.
l Data Domain ArchiveStore (license: ARCHIVESTORE): Licenses Data Domain systems for archive use, such as file and email archiving, file tiering, and content and database archiving.
l Data Domain Boost (license: DDBOOST): Enables the use of a Data Domain system with the following applications: Avamar, NetWorker, Oracle RMAN, Quest vRanger, Symantec Veritas NetBackup (NBU), and Backup Exec. The managed file replication (MFR) feature of DD Boost also requires the DD Replicator license.
l Data Domain Cloud Tier (license: CLOUDTIER-CAPACITY): Enables a Data Domain system to move data from the active tier to low-cost, high-capacity object storage in the public, private, or hybrid cloud for long-term retention.
l Data Domain I/OS, for IBM i operating environments (license: I/OS): An I/OS license is required when DD VTL is used to back up systems in the IBM i operating environment. Apply this license before adding virtual tape drives to libraries.
l Data Domain Shelf Capacity-Active Tier (license: CAPACITY-ACTIVE): Enables a Data Domain system to expand the active tier storage capacity to an additional enclosure or a disk pack within an enclosure.
l Data Domain Shelf Capacity-Archive Tier (license: CAPACITY-ARCHIVE): Enables a Data Domain system to expand the archive tier storage capacity to an additional enclosure or a disk pack within an enclosure.
l Data Domain Storage Migration (license: STORAGE-MIGRATION-FOR-DATADOMAIN-SYSTEMS): Enables migration of data from one enclosure to another to support replacement of older, lower-capacity enclosures.
l Data Domain Virtual Tape Library (DD VTL) (license: VTL): Enables the use of a Data Domain system as a virtual tape library over a Fibre Channel network. This license also enables the NDMP Tape Server feature, which previously required a separate license.
Storage environment integration
All backup applications can access a Data Domain system as either an NFS or a CIFS file system on the Data Domain disk device.
The following figure shows a Data Domain system integrated into an existing basic backup
configuration.
Figure 2 Data Domain system integrated into a storage environment
1. Primary storage
2. Ethernet
3. Backup server
4. SCSI/Fibre Channel
5. Gigabit Ethernet or Fibre Channel
6. Tape system
7. Data Domain system
8. Management
9. NFS/CIFS/DD VTL/DD Boost
10. Data Verification
11. File system
12. Global deduplication and compression
13. RAID
As shown in Figure 2 on page 27, data flows to a Data Domain system through an Ethernet or Fibre Channel connection. The data verification processes begin immediately and continue while the data resides on the Data Domain system. In the file system, the DD OS Global Compression™ algorithms dedupe and compress the data for storage. Data is then sent to the disk RAID subsystem. When a restore operation is required, data is retrieved from Data Domain storage, decompressed, verified for consistency, and transferred to the backup servers using Ethernet (for NFS, CIFS, and DD Boost) or Fibre Channel (for DD VTL and DD Boost).
The DD OS accommodates relatively large streams of sequential data from backup software and is
optimized for high throughput, continuous data verification, and high compression. It also
accommodates the large numbers of smaller files in nearline storage (DD ArchiveStore).
Data Domain system performance is best when storing data from applications that are not
specifically backup software under the following circumstances.
l Data is sent to the Data Domain system as sequential writes (no overwrites).
l Data is neither compressed nor encrypted before being sent to the Data Domain system.
To connect to DD System Manager, enter one of the following in the browser address field:
l A hostname (http://dd01)
l An IP address (http://10.5.50.5)
Note: DD System Manager uses HTTP port 80 and HTTPS port 443. If your Data
Domain system is behind a firewall, you may need to enable port 80 if using HTTP, or
port 443 if using HTTPS to reach the system. The port numbers can be easily changed if
security requirements dictate.
Note: If the Data Domain System Manager is unable to launch from any web browser, the displayed error message is "The GUI Service is temporarily unavailable. Please refresh your browser. If the problem persists, please contact Data Domain support for assistance." In this case, SSH can be used to log in to the Data Domain system and run all commands.
If you have not upgraded the DD OS but still encounter this GUI error, use the following
procedure:
a. Close the web browser session on the Data Domain system with the reported error.
b. Run these commands in sequence:
l adminaccess disable http
l adminaccess disable https
l adminaccess enable http
l adminaccess enable https
c. Wait 5 minutes to allow the http and https services to start completely.
d. Open a web browser, and connect to Data Domain System Manager.
If you see this GUI issue after a DD OS upgrade, use the following procedure:
a. Close the web browser session on the Data Domain system with the reported error.
b. Run these commands in sequence:
l adminaccess disable http
l adminaccess disable https
l adminaccess certificate generate self-signed-cert
l adminaccess enable http
l adminaccess enable https
c. Wait 5 minutes to allow the http and https services to start completely.
d. Open a web browser, and connect to Data Domain System Manager.
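For reference, the same post-upgrade recovery sequence can be entered in one SSH session at the CLI. This is a minimal sketch using only the commands listed above; allow approximately 5 minutes after the final command for the http and https services to start before reconnecting with a browser.
# adminaccess disable http
# adminaccess disable https
# adminaccess certificate generate self-signed-cert
# adminaccess enable http
# adminaccess enable https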
Page elements
The primary page elements are the banner, the navigation panel, the information panels, and the footer.
Figure 3 DD System Manager page components
1. Banner
2. Navigation panel
3. Information panels
4. Footer
Banner
The DD System Manager banner displays the program name and buttons for Refresh, Log Out,
and Help.
Navigation panel
The Navigation panel displays the highest level menu selections that you can use to identify the
system component or task that you want to manage.
The Navigation panel displays the top two levels of the navigation system. Click any top level title
to display the second level titles. Tabs and menus in the Information panel provide additional
navigation controls.
Information panel
The Information panel displays information and controls related to the selected item in the
Navigation panel. The information panel is where you find system status information and configure
a system.
Depending on the feature or task selected in the Navigation panel, the Information panel may
display a tab bar, topic areas, table view controls, and the More Tasks menu.
Tab bar
Tabs provide access to different aspects of the topic selected in the Navigation panel.
Topic areas
Topic areas divide the Information panel into sections that represent different aspects of the topic
selected in the Navigation panel or parent tab.
For high-availability (HA) systems, the HA Readiness tab on the System Manager dashboard
indicates whether the HA system is ready to fail over from the active node to the standby node.
You can click on HA Readiness to navigate to the High Availability section under HEALTH.
Working with table view options
Many of the views with tables of items contain controls for filtering, navigating, and sorting the
information in the table.
How to use common table controls:
l Click the diamond icon in a column heading to reverse the sort order of items in the column.
l Click the < and > arrows at the bottom right of the view to move forward or backward through
the pages. To skip to the beginning of a sequence of pages, click |<. To skip to the end, click
>|.
l Use the scroll bar to view all items in a table.
l Enter text in the Filter By box to search for or prioritize the listing of those items.
l Click Update to refresh the list.
l Click Reset to return to the default listing.
More Tasks menu
Some pages provide a More Tasks menu at the top right of the view that contains commands
related to the current view.
Footer
The DD System Manager footer displays important information about the management session.
The footer lists the following information.
l System hostname
l DD OS version
l Selected system model number
l User name and role of the currently logged-in user
Help buttons
Help buttons display a ? and appear in the banner, in the title of many areas of the Information
panel, and in many dialogs. Click the help button to display a help window related to the current
feature you are using.
The help window provides a contents button and navigation button above the help. Click the
contents button to display the guide contents and a search button that you can use to search the
help. Use the directional arrow buttons to page through the help topics in sequential order.
License page
The License page displays all installed licenses. Click Yes to add, modify, or delete a license, or
click No to skip license installation.
License Configuration
The Licenses Configuration section allows you to add, modify, or delete licenses from a license file.
Data Domain Operating System 6.0 and later supports ELMS licensing, which allows you to include
multiple features in a single license file upload.
When using the Configuration Wizard on a system without any licenses configured on it, select the
license type from the drop-down, and click the ... button. Browse to the directory where the
license file resides, and select it for upload to the system.
Item Description
Add Licenses Select this option to add licenses from a license file.
Replace Licenses If licenses are already configured, the Add Licenses selection changes to Replace Licenses. Select this option to replace the licenses already added.
Delete Licenses Select this option to delete licenses already configured on the
system.
Network
The Network section allows you to configure the network settings. Click Yes to configure the
network settings, or click No to skip network configuration.
Item Description
Obtain Settings using DHCP Select this option to specify that the system collect network settings from a Dynamic Host Configuration Protocol (DHCP) server. When you configure the network interfaces, at least one of the interfaces must be configured to use DHCP.
Manually Configure Select this option to use the network settings defined in the
Settings area of this page.
Domain Name Specifies the network domain to which this system belongs.
Default IPv4 Gateway Specifies the IPv4 address of the gateway to which the
system will forward network requests when there is no route
entry for the destination system.
Default IPv6 Gateway Specifies the IPv6 address of the gateway to which the
system will forward network requests when there is no route
entry for the destination system.
Item Description
Netmask Specifies the network mask for this system. To configure the
network mask, you must set DHCP to No.
Link Displays whether the Ethernet link is active (Yes) or not (No).
Item Description
Obtain DNS using DHCP. Select this option to specify that the system collect DNS IP addresses from a Dynamic Host Configuration Protocol (DHCP) server. When you configure the network interfaces, at least one of the interfaces must be configured to use DHCP.
Manually configure DNS list. Select this option when you want to manually enter DNS
server IP addresses.
Add (+) button Click this button to display a dialog in which you can add a
DNS IP address to the DNS IP Address list. You must select
Manually configure DNS list before you can add or delete
DNS IP addresses.
Delete (X) button Click this button to delete a DNS IP address from the DNS IP
Address list. You must select the IP address to delete before
this button is enabled. You must also select Manually
configure DNS list before you can add or delete DNS IP
addresses.
IP Address Checkboxes Select a checkbox for a DNS IP address that you want to
delete. Select the DNS IP Address checkbox when you want
to delete all IP addresses. You must select Manually
configure DNS list before you can add or delete DNS IP
addresses.
File System
The File System section allows you to configure Active and Cloud Tier storage. Each has a
separate wizard page. You can also create the File System within this section. The configuration
pages cannot be accessed if the file system is already created.
Anytime you display the File System section when the File System has not been created, the
system displays an error message. Continue with the procedure to create the file system.
Item Description
ID (Device in DD VE) The disk identifier, which can be any of the following.
l The enclosure and disk number (in the form Enclosure
Slot, or Enclosure Pack for DS60 shelves)
Disks The disks that comprise the disk pack or LUN. This does not
apply to DD VE instances.
Model The type of disk shelf. This does not apply to DD VE instances.
Disk Count The number of disks in the disk pack or LUN. This does not
apply to DD VE instances.
Disk Size (Size in DD VE) The data storage capacity of the disk when used in a Data
Domain system.a
License Needed The licensed capacity required to add the storage to the tier.
Failed Disks Failed disks in the disk pack or LUN. This does not apply to DD
VE instances.
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
Item Description
ID (Device in DD VE) The disk identifier, which can be any of the following.
l The enclosure and disk number (in the form Enclosure
Slot, or Enclosure Pack for DS60 shelves). This does not
apply to DD VE instances.
l A device number for a logical device such as those used by
DD VTL and vDisk
l A LUN
Disks The disks that comprise the disk pack or LUN. This does not
apply to DD VE instances.
Model The type of disk shelf. This does not apply to DD VE instances.
Disk Count The number of disks in the disk pack or LUN. This does not
apply to DD VE instances.
Disk Size (Size in DD VE) The data storage capacity of the disk when used in a Data
Domain system.a
Failed Disks Failed disks in the disk pack or LUN. This does not apply to DD
VE instances.
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
Item Description
Disk Size (Size in DD VE) The data storage capacity of the disk when used in a Data
Domain system.a
License Needed The licensed capacity required to add the storage to the tier.
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
Item Description
Disk Size (Size in DD VE) The data storage capacity of the disk when used in a Data
Domain system.a
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
Item Description
ID (Device in DD VE) The disk identifier, which can be any of the following.
l The enclosure and disk number (in the form Enclosure
Slot, or Enclosure Pack for DS60 shelves)
l A device number for a logical device such as those used by
DD VTL and vDisk
l A LUN
Disks The disks that comprise the disk pack or LUN. This does not
apply to DD VE instances.
Model The type of disk shelf. This does not apply to DD VE instances.
Disk Count The number of disks in the disk pack or LUN. This does not
apply to DD VE instances.
Disk Size (Size in DD VE) The data storage capacity of the disk when used in a Data
Domain system.a
License Needed The licensed capacity required to add the storage to the tier.
Failed Disks Failed disks in the disk pack or LUN. This does not apply to DD
VE instances.
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
Item Description
ID (Device in DD VE) The disk identifier, which can be any of the following.
Disks The disks that comprise the disk pack or LUN. This does not
apply to DD VE instances.
Model The type of disk shelf. This does not apply to DD VE instances.
Disk Count The number of disks in the disk pack or LUN. This does not
apply to DD VE instances.
Disk Size (Size in DD VE) The data storage capacity of the disk when used in a Data
Domain system.a
Failed Disks Failed disks in the disk pack or LUN. This does not apply to DD
VE instances.
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
System Settings
The System Settings section allows you to configure system passwords and email settings. Click Yes to configure the system settings, or click No to skip system settings configuration.
Item Description
Send Alert Notification Emails to this address Check to configure DD System Manager to send alert notifications to the Admin email address as alert events occur.
Send Daily Alert Summary Emails to this address Check to configure DD System Manager to send alert summaries to the Admin email address at the end of each day.
Send Autosupport Emails to this address Check to configure DD System Manager to send the Admin user autosupport emails, which are daily reports that document system activity and status.
Item Description
Mail Server Specify the name of the mail server that manages emails to
and from the system.
User Name If credentials are enabled, specify the mail server username.
Send Alert Notification Emails to Data Domain Check to configure DD System Manager to send alert notification emails to Data Domain.
DD Boost protocol
The DD Boost settings section allows you to configure the DD Boost protocol settings. Click Yes
to configure the DD Boost Protocol settings, or click No to skip DD Boost configuration.
Item Description
Storage Unit The name of your DD Boost Storage Unit. You may optionally
change this name.
User For the default DD Boost user, either select an existing user,
or select Create a new Local User, and enter their User name,
Password, and Management Role. This role can be one of the
following:
l Admin role: Lets you configure and monitor the entire
Data Domain system.
l User role: Lets you monitor Data Domain systems and
change your own password.
l Security role: In addition to user role privileges, lets you
set up security-officer configurations and manage other
security-officer operators.
l Backup-operator role: In addition to user role privileges,
lets you create snapshots, import and export tapes to, or
move tapes within a DD VTL.
l None role: Intended only for DD Boost authentication, so
you cannot monitor or configure a Data Domain system.
None is also the parent role for the SMT tenant-admin
and tenant-user roles. None is also the preferred user
type for DD Boost storage owners. Creating a new local
user here only allows that user to have the "none" role.
Item Description
Configure DD Boost over Fibre Channel Select the checkbox if you want to configure DD Boost over Fibre Channel.
Group Name (1-128 Chars) Create an Access Group. Enter a unique name. Duplicate
access groups are not supported.
Item Description
Devices The devices to be used are listed. They are available on all
endpoints. An endpoint is the logical target on the Data
Domain system to which the initiator connects.
CIFS protocol
The CIFS Protocol settings section allows you to configure the CIFS protocol settings. Click Yes
to configure the CIFS protocol settings, or click No to skip CIFS configuration.
Data Domain systems use the term MTree to describe directories. When you configure a directory
path, DD OS creates an MTree where the data will reside.
Item Description
Active Directory/Kerberos Authentication Expand this panel to enable, disable, and configure Active Directory Kerberos authentication.
NFS protocol
The NFS Protocol settings section allows you to configure the NFS protocol settings. Click Yes to
configure the NFS protocol settings, or click No to skip NFS configuration.
Data Domain systems use the term MTree to describe directories. When you configure a directory
path, DD OS creates an MTree where the data will reside.
Item Description
DD VTL protocol
The DD VTL Protocol settings section allows you to configure the Data Domain Virtual Tape
Library settings. Click Yes to configure the DD VTL settings, or click No to skip DD VTL
configuration.
Item Description
Drive Model Select the desired model from the drop-down list:
l IBM-LTO-1
l IBM-LTO-2
l IBM-LTO-3
l IBM-LTO-4
l IBM-LTO-5 (default)
l HP-LTO-3
l HP-LTO-4
Number of CAPs (Optional) Enter the number of cartridge access ports (CAPs):
l Up to 100 CAPs per library
l Up to 1000 CAPs per system
Changer Model Name Select the desired model from the drop-down list:
l L180 (default)
l RESTORER-L180
l TS3500
l I2000
l I6000
l DDVTL
Starting Barcode Enter the desired barcode for the first tape, in the format
A990000LA.
Tape Capacity (Optional) Enter the tape capacity. If not specified, the capacity is
derived from the last character of the barcode.
Item Description
Group Name Enter a unique name of 1 to 128 characters. Duplicate access groups are not supported.
Initiators Select one or more initiators. Optionally, replace the initiator name by
entering a new one. An initiator is a backup client that connects to a
system to read and write data using the Fibre Channel (FC) protocol. A
specific initiator can support DD Boost over FC or DD VTL, but not both.
Devices The devices (drives and changer) to be used are listed. These are
available on all endpoints. An endpoint is the logical target on the Data
Domain system to which the initiator connects.
The following example shows SSH login to a system named mysystem using SSH
client software.
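A minimal sketch of such a login follows; it assumes the factory-default sysadmin administrative account (your account name may differ).
# ssh sysadmin@mysystem
After authentication, the CLI commands described in this guide can be run from the DD OS prompt.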
Note: Initial HA system set-up cannot be configured from the DD System Manager, but the
status of a configured HA system can be viewed from DD System Manager.
System upgrade operations that require data conversion cannot start until both systems are
upgraded to the same level and HA state is fully restored.
Rebooting a system
Reboot a system when a configuration change, such as changing the time zone, requires that you
reboot the system.
About this task
Procedure
1. Select Maintenance > System > Reboot System.
2. Click OK to confirm.
Note: This output sample is from a healthy system. If the system is being shut down to
replace a failed component, the HA System Status will be degraded, and one or both
nodes will show offline for the HA State.
3. Run the alerts show current command. For HA pairs, run the command on the active
node first, and then the standby node.
4. For HA systems, run the ha offline command if the system is in a highly available state
with both nodes online. Skip this step if the HA status is degraded.
5. Run the system poweroff command. For HA pairs, run the command on the active node
first, and then the standby node.
# system poweroff
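Taken together, steps 3 through 5 form a short CLI session; a sketch for a healthy HA pair follows. Run alerts show current and system poweroff on the active node first and then on the standby node; ha offline is run once, and only when both nodes are online.
# alerts show current
# ha offline
# system poweroff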
Power a system on
About this task
Restore power to the Data Domain system when the system downtime is complete.
Procedure
1. Power on any expansion shelves before powering on the Data Domain controller. Wait
approximately three minutes after all expansion shelves are turned on.
Note: A controller is the chassis and any internal storage. A Data Domain system refers
to the controller and any optional external storage.
2. Plug in the power cord for your controller, and if there is a power button on the controller,
press the power button (as shown in the Installation and Setup Guide for your Data Domain
system). For HA systems, power on the active node first, and then the standby node.
Note: Some Data Domain appliances do not have a traditional power button; they are designed to be "always on" and power up as soon as AC power is applied.
6. Run the alerts show current command. For HA pairs, run the command on the active
node first, and then the standby node.
These are tasks that you should plan to do prior to the upgrade. These tasks are not performed
automatically by any process.
1. Reboot the Data Domain system. For HA systems, follow the reboot instructions described in
Upgrade considerations for HA systems on page 57 after performing the rest of the checks in
this section.
2. Check for current alerts; this can reveal issues such as disk and other hardware failures that should be addressed prior to upgrading.
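For example, the current alert list can be reviewed with the same command used in the system shutdown procedure; output varies by system.
# alerts show current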
7. Run the ha status command to verify the HA system status displays as highly
available after the standby node reboots.
8. Run the ha failover command to initiate a failover from the active node to the standby
node.
9. Run the ha status command to verify the node 0 is the active node and node 1 is the
standby node.
# ha status
HA System Name: apollo-ha3a.emc.com
HA System Status: highly available
Node Name Node ID Role HA State
-------------------------- --------- --------- --------
apollo-ha3a-p0.emc.com 0 active online
apollo-ha3a-p1.emc.com 1 standby online
-------------------------- --------- --------- --------
Initiate the upgrade from the active node. DD OS automatically recognizes the HA system and
performs the upgrade procedure on both nodes. The HA upgrade runs in the following sequence:
1. The standby node is upgraded first, then reboots.
2. After the reboot is complete, the HA system initiates a failover and the standby node takes
over as the active node.
3. The original active node is upgraded, then reboots and remains as the standby node.
After both nodes are upgraded, the system does not perform another failover to return the nodes
to their original configuration.
After the upgrade procedure is complete, run the ha status command again to verify that the
system is in a highly available state, and both nodes are online.
Optionally run the ha failover command to return the nodes to the roles they were in before
the upgrade.
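For example, an optional failback and verification after the upgrade might look like the following sketch; the status output will resemble the earlier ha status example, with the node roles restored.
# ha failover
# ha status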
Automatic tasks performed by the upgrade script (in the .rpm file) prior to upgrade
These tests precede the actual upgrade process on the Data Domain system:
7. To verify the upgrade package integrity, click View Checksum and compare the calculated
checksum displayed in the dialog to the authoritative checksum on the Online Support site.
8. To manually initiate an upgrade precheck, select an upgrade package and click Upgrade
Precheck.
2. Select Data Management > File System, and verify that the file system is enabled and
running.
3. Select Maintenance > System.
4. From the Upgrade Packages Available on Data Domain System list, select the package to
use for the upgrade.
Note: You must select an upgrade package for a newer version of DD OS. DD OS does
not support downgrades to previous versions.
6. Verify the version of the upgrade package, and click OK to continue with the upgrade.
The System Upgrade dialog displays the upgrade status and the time remaining.
When upgrading the system, you must wait for the upgrade to complete before using DD
System Manager to manage the system. If the system restarts, the upgrade might continue
after the restart, and DD System Manager displays the upgrade status after login. If
possible, keep the System Upgrade progress dialog open until the upgrade completes or the
system powers off. When upgrading DD OS Release 5.5 or later to a newer version, and if
the system upgrade does not require a power off, a Login link appears when the upgrade is
complete.
Note: To view the status of an upgrade using the CLI, enter the system upgrade
status command. Log messages for the upgrade are stored in /ddvar/log/debug/
7. If the system powers down, you must remove AC power from the system to clear the prior
configuration. Unplug all of the power cables for 30 seconds and then plug them back in.
The system powers on and reboots.
8. If the system does not automatically power on and there is a power button on the front
panel, press the button.
After you finish
The following requirements may apply after completing an upgrade.
l For environments that use self-signed SHA-256 certificates, the certificates must be regenerated manually after the upgrade process is complete, and trust must be re-established with external systems that connect to the Data Domain system.
1. Run the adminaccess certificate generate self-signed-cert regenerate-ca
command to regenerate the self-signed CA and host certificates. Regenerating the
certificates breaks existing trust relationships with external systems.
2. Run the adminaccess trust add host hostname type mutual command to reestablish
mutual trust between the Data Domain system and the external system.
l If the system shows existing or configured FC ports with missing WWPN or WWNN
information, or reports that no FC host bus adapter (HBA) driver is installed, run the
scsitarget endpoint enable all command.
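A sketch of this post-upgrade sequence follows. The host name extsystem.example.com is hypothetical and stands for an external system that connects to the Data Domain system, and the final command is needed only if FC endpoint information is missing.
# adminaccess certificate generate self-signed-cert regenerate-ca
# adminaccess trust add host extsystem.example.com type mutual
# scsitarget endpoint enable all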
Replication notes
With collection replication, no files are visible on the destination Data Domain system if replication
was not finished before starting the upgrade. After the upgrade, wait until replication completes to
see files on the destination.
ConnectEMC notes
In this release, ConnectEMC has been changed to support the Secure Remote Services Virtual Edition (Secure Remote Services VE) gateway. This change requires reconfiguring ConnectEMC on the Data Domain system after the upgrade.
Note: ConnectEMC only works with Secure Remote Services VE (V3) and cannot send emails with older versions of Secure Remote Services or on its own. If ConnectEMC has been used with previous releases of DD OS (for example, 5.7 or 5.6), the Secure Remote Services VE server configuration will need to be re-entered, as it was removed during the upgrade process due to the technology upgrade.
Note: If an older Secure Remote Services gateway is being used, the Secure Remote Services VE gateway will need to be implemented to allow for secure communications.
During the upgrade, if ConnectEMC is detected as configured, the existing configuration will be removed. In addition, if the support notification method is configured as ConnectEMC to send event messages to the company, it will switch to email. After the upgrade, you can reconfigure ConnectEMC with the new support connectemc device register command.
After ConnectEMC is configured, enable ConnectEMC with support notification method
set connectemc.
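As a sketch, the reconfiguration consists of the two commands named above. The register command takes the Secure Remote Services VE gateway details as arguments (shown here as a placeholder); see the Data Domain Operating System Command Reference Guide for the exact syntax.
# support connectemc device register <gateway details>
# support notification method set connectemc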
In this example, there are 14 disks in use in dg2 and each disk has a capacity of 2.7 TiB; therefore N=14 and C=2.7 TiB.
Use the formula (N-R) x C to get the usable capacity, where R is the number of parity drives (2 in a RAID 6 configuration). In this example, the equation is (14-2) x 2.7 TiB:
12 x 2.7 TiB = 32.4 TiB, or 35.6 TB.
Note: The calculated value may not match exactly with the output of the storage show
all command due to the way the capacity values are rounded for display. The disk show
hardware command displays the disk capacity with additional decimal places.
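To cross-check a calculation like this one against what the system reports, run the two commands mentioned in the note; disk show hardware lists the per-disk capacity (C) and storage show all lists the capacity values for each tier.
# disk show hardware
# storage show all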
Overview tab
The Overview tab displays information for all disks in the Data Domain system organized by type.
The categories that display are dependent on the type of storage configuration in use.
The Overview tab lists the discovered storage in one or more of the following sections.
l Active Tier
Disks in the Active Tier are currently marked as usable by the file system. Disks are listed in
two tables, Disks in Use and Disks Not in Use.
l Retention Tier
If the optional Data Domain Extended Retention (formerly DD Archiver) license is installed, this
section shows the disks that are configured for DD Extended Retention storage. Disks are
listed in two tables, Disks in Use and Disks Not in Use.
l Cache Tier
SSDs in the Cache Tier are used for caching metadata. The SSDs are not usable by the file
system. Disks are listed in two tables, Disks in Use and Disks Not in Use.
l Cloud Tier
Disks in the Cloud Tier are used to store the metadata for data that resides in cloud storage.
The disks are not usable by the file system. Disks are listed in two tables, Disks in Use and
Disks Not in Use.
l Addable Storage
For systems with optional enclosures, this section shows the disks and enclosures that can be
added to the system.
l Failed/Foreign/Absent Disks (Excluding Systems Disks)
Shows the disks that are in a failed state; these cannot be added to the system Active or
Retention tiers.
l Systems Disks
Shows the disks where the DD OS resides when the Data Domain controller does not contain
data storage disks.
l Migration History
Shows the history of migrations.
Each section heading displays a summary of the storage configured for that section. The summary
shows tallies for the total number of disks, disks in use, spare disks, reconstructing spare disks,
available disks, and known disks.
Click a section plus (+) button to display detailed information, or click the minus (-) button to hide
the detailed information.
Item Description
Disk Group The name of the disk group that was created by the file
system (for example, dg1).
Disks Reconstructing The disks that are undergoing reconstruction, by disk ID (for
example, 1.11).
Total Disks The total number of usable disks (for example, 14).
Disks The disk IDs of the usable disks (for example, 2.1-2.14).
Size The size of the disk group (for example, 25.47 TiB).
Item Description
Pack The disk pack, 1-4, within the enclosure where the disk is
located. This value will only be 2-4 for DS60 expansion
shelves.
State The status of the disk, for example In Use, Available, Spare.
Item Description
Size The data storage capacity of the disk when used in a Data
Domain system.a
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
Enclosures tab
The Enclosures tab displays a table summarizing the details of the enclosures connected to the
system.
The Enclosures tab provides the following details.
Item Description
Disk Size The data storage capacity of the disk when used in a Data
Domain system.a
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes, giving a different disk capacity than the manufacturer's rating.
Disks tab
The Disks tab displays information on each of the system disks. You can filter the disks viewed to
display all disks, disks in a specific tier, or disks in a specific group.
The Disk State table displays a summary status table showing the state of all system disks.
Item Description
Spare (reconstructing) The number of disks that are in the process of data
reconstruction (spare disks replacing failed disks).
Not Installed The number of empty disk slots that the system can detect.
The Disks table displays specific information about each disk installed in the system.
Item Description
Pack The disk pack, 1-4, within the enclosure where the disk is
located. This value will only be 2-4 for DS60 expansion
shelves.
State The status of the disk, which can be one of the following.
l Absent. No disk is installed in the indicated location.
l Available. An available disk is allocated to the active or
retention tier, but it is not currently in use.
l Copy Recovery. The disk has a high error rate but is not
failed. RAID is currently copying the contents onto a spare
drive and will fail the drive once the copy reconstruction is
complete.
l Destination. The disk is in use as the destination for
storage migration.
l Error. The disk has a high error rate but is not failed. The
disk is in the queue for copy reconstruction. The state will
Disk Life Used The percentage of an SSD's rated life span consumed.
Reconstruction tab
The Reconstruction tab displays a table that provides additional information on reconstructing
disks.
The following table describes the entries in the Reconstructing table.
Item Description
Disk Identifies disks that are being reconstructed. Disk labels are of
the format enclosure.disk. Enclosure 1 is the Data Domain
system, and external shelves start numbering with enclosure 2.
For example, the label 3.4 is the fourth disk in the second
shelf.
Disk Group Shows the RAID group (dg#) for the reconstructing disk.
Tier The name of the tier where the failed disk is being
reconstructed.
When a spare disk is available, the file system automatically replaces a failed disk with a spare and
begins the reconstruction process to integrate the spare into the RAID disk group. The disk use
displays Spare and the status becomes Reconstructing. Reconstruction is performed on one
disk at a time.
The Beaconing Disk dialog box appears, and the LED light on the disk begins flashing.
Configuring storage
Storage configuration features allow you to add and remove storage expansion enclosures from
the active, retention, and cloud tiers. Storage in an expansion enclosure (which is sometimes called
an expansion shelf) is not available for use until it is added to a tier.
About this task
Note: Additional storage requires the appropriate license or licenses and sufficient memory to
support the new storage capacity. Error messages display if more licenses or memory is
needed.
DD6300 systems support the option to use ES30 enclosures with 4 TB drives (43.6 TiB) at 50%
utilization (21.8 TiB) in the active tier if the available licensed capacity is exactly 21.8 TiB. The
following guidelines apply to using partial capacity shelves.
l No other enclosure types or drive sizes are supported for use at partial capacity.
l A partial shelf can only exist in the Active tier.
l Only one partial ES30 can exist in the Active tier.
l Once a partial shelf exists in a tier, no additional ES30s can be configured in that tier until the
partial shelf is added at full capacity.
Note: This requires licensing enough additional capacity to use the remaining 21.8 TiB of
the partial shelf.
l If the available capacity exceeds 21.8 TB, a partial shelf cannot be added.
l Deleting a 21 TiB license will not automatically convert a fully-used shelf to a partial shelf. The
shelf must be removed, and added back as a partial shelf.
Procedure
1. Select Hardware > Storage > Overview.
2. Expand the dialog for one of the available storage tiers:
l Active Tier
l Extended Retention Tier
l Cache Tier
l Cloud Tier
3. Click Configure.
4. In the Configure Storage dialog, select the storage to be added from the Addable Storage
list.
5. In the Configure list, select either Active Tier or Retention Tier.
The maximum amount of storage that can be added to the active tier depends on the DD
controller used.
Note: The licensed capacity bar shows the portion of licensed capacity (used and
remaining) for the installed enclosures.
The Data Domain DD3300 Field Replacement and Upgrade Guide provides detailed instructions for
expanding system capacity.
Capacity Expand
Select the target capacity from the Select Capacity drop-down list. A capacity expansion can be prevented if there is insufficient memory or insufficient physical capacity (HDDs), if the system has already been expanded, or if the target capacity is not supported. If the capacity expansion cannot be completed, the reason is displayed here.
Fail a disk
Fail a disk and force reconstruction. Select Hardware > Storage > Disks > Fail.
Select a disk from the table and click Fail.
Unfail a disk
Make a disk previously marked Failed or Foreign usable to the system. Select Hardware >
Storage > Disks > Unfail.
Select a disk from the table and click Unfail.
Floating IP addresses exist only in a two-node HA system; during failover, the IP address "floats" to the new active node. Floating IP addresses:
l Only configured on the active node
l Used for filesystem access and most configuration
l Can only be static
l Configuration requires the type floating argument
l There are some restrictions for interfaces with IPv6 addresses. For example, the minimum
MTU is 1280. If you try to set the MTU lower than 1280 on an interface with an IPv6 address,
an error message appears and the interface is removed from service. An IPv6 address can
affect an interface even though it is on a VLAN attached to the interface and not directly on
the interface.
Procedure
1. Select Hardware > Ethernet > Interfaces.
The following table describes the information on the Interfaces tab.
Item Description
Interface The name of each interface associated with the selected system.
Enabled Whether the interface is enabled.
l Select Yes to enable the interface and connect it to the network.
l Select No to disable the interface and disconnect it from the
network.
IP Address IP address associated with the interface. The address used by the
network to identify the interface. If the interface is configured
through DHCP, an asterisk appears after this value.
Additional Info Additional settings for the interface. For example, the bonding
mode.
IPMI interfaces configured Displays Yes or No and indicates if IPMI health monitoring and power management is configured for the interface.
2. To filter the interface list by interface name, enter a value in the Interface Name field and
click Update.
Filters support wildcards, such as eth*, veth*, or eth0*
3. To filter the interface list by interface type, select a value from the Interface Type menu and
click Update.
On an HA system, there is a filter dropdown to filter by IP Address Type (Fixed, Floating, or
Interconnect).
4. To return the interfaces table to the default listing, click Reset.
5. Select an interface in the table to populate the Interface Details area.
Item Description
Auto Negotiate When this feature displays Enabled, the interface automatically
negotiates Speed and Duplex settings. When this feature displays
Disabled, then Speed and Duplex values must be set manually.
Cable Shows whether the interface is Copper or Fiber.
Note: Some interfaces must be up before the cable status is
valid.
Duplex Used in conjunction with the Speed value to set the data transfer
protocol. Options are Unknown, Full, Half.
Hardware Address The MAC address of the selected interface. For example,
00:02:b3:b0:8a:d2.
Latent Fault Detection (LFD) - HA systems only The LFD field has a View Configuration link,
displaying a pop-up that lists LFD addresses and interfaces.
Speed Used in conjunction with the Duplex value to set the rate of data
transfer. Options are Unknown, 10 Mb/s, 100 Mb/s, 1000 Mb/s, 10
Gb/s.
Note: Auto-negotiated interfaces must be set up before speed,
duplex, and supported speed are visible.
Supported Speeds Lists all of the speeds that the interface can use.
6. To view IPMI interface configuration and management options, click View IPMI Interfaces.
This link displays the Maintenance > IPMI information.
l DD2500 systems provide six on-board interfaces. The four on-board 1G Base-T NIC ports are
ethMa (top left), ethMb (top right), ethMc (bottom left), and ethMd (bottom right). The two
on-board 10G Base-T NIC ports are ethMe (top) and ethMf (bottom).
l DD4200, DD4500, and DD7200 systems provide one on-board Ethernet port, which is ethMa.
l For systems ranging between DD140 and DD990, the physical interface names for I/O modules
start at the top of the module or at the left side. The first interface is ethxa, the next is ethxb,
the next is ethxc, and so forth.
l The port numbers on the horizontal DD2500 I/O modules are labeled in sequence from the end
opposite the module handle (left side). The first port is labeled 0 and corresponds to physical
interface name ethxa, the next is 1/ethxb, the next is 2/ethxc, and so forth.
l The port numbers on the vertical DD4200, DD4500, and DD7200 I/O modules are labeled in
sequence from the end opposite the module handle (bottom). The first port is labeled 0 and
corresponds to physical interface name ethxa, the next is 1/ethxb, the next is 2/ethxc, and so
forth.
3. Click Configure.
4. In the Configure Interface dialog, determine how the interface IP address is to be set:
Note: On an HA system, the Configure Interface dialog has a field for whether or not to
designate the Floating IP (Yes/No). When you select Yes, the Manually Configure IP
Address radio button is selected automatically; Floating IP interfaces can only be manually
configured.
l Use DHCP to assign the IP address—in the IP Settings area, select Obtain IP Address
using DHCP and select either DHCPv4 for IPv4 access or DHCPv6 for IPv6 access.
Setting a physical interface to use DHCP automatically enables the interface.
Note: If you choose to obtain the network settings through DHCP, you can manually
configure the hostname at Hardware > Ethernet > Settings or with the net set
hostname command. You must manually configure the host name when using DHCP
over IPv6.
l Specify IP Settings manually—in the IP Settings area, select Manually configure IP
Address.
The IP Address and Netmask fields become active.
5. If you chose to manually enter the IP address, enter an IPv4 or IPv6 address. If you entered
an IPv4 address, enter a netmask address.
Note: You can assign just one IP address to an interface with this procedure. If you
assign another IP address, the new address replaces the old address. To attach an
additional IP address to an interface, create an IP alias.
7. Specify the MTU (Maximum Transfer Unit) size for the physical (Ethernet) interface.
Do the following:
l Click the Default button to return the setting to the default value.
l Ensure that all of your network components support the size set with this option.
9. Click Next.
The Configure Interface Settings summary page appears. The values listed reflect the new
system and interface state, which are applied after you click Finish.
7. If you selected Balanced or LACP mode, specify a bonding hash type in the Hash list.
Options are: XOR-L2, XOR-L2L3, or XOR-L3L4.
XOR-L2 transmits through a bonded interface with an XOR hash of Layer 2 (inbound and
outbound MAC addresses).
XOR-L2L3 transmits through a bonded interface with an XOR hash of Layer 2 (inbound and
outbound MAC addresses) and Layer 3 (inbound and outbound IP addresses).
XOR-L3L4 transmits through a bonded interface with an XOR hash of Layer 3 (inbound and
outbound IP addresses) and Layer 4 (inbound and outbound ports).
8. To select an interface to add to the aggregate configuration, select the checkbox that
corresponds to the interface, and then click Next.
The Create virtual interface veth_name dialog appears.
9. Enter an IP address, or enter 0 to specify no IP address.
10. Enter a netmask address or prefix.
11. Specify Speed/Duplex options.
The combination of speed and duplex settings define the rate of data transfer through the
interface. Select either:
l Autonegotiate Speed/Duplex
Select this option to allow the network interface card to autonegotiate the line speed and
duplex setting for an interface.
l Manually configure Speed/Duplex
Select this option to manually set an interface data transfer rate.
n Duplex options are half-duplex or full-duplex.
n Speed options listed are limited to the capabilities of the hardware device. Options
are 10 Mb, 100 Mb, 1000 Mb, and 10 Gb.
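The preceding virtual interface and bonding settings can also be applied from the CLI. The
following is only a sketch: the veth name, member interfaces, and IP address are illustrative, and
the exact argument order of net aggregate add can vary by release, so confirm it with the net
commands in the DD OS Command Reference Guide.
# net create virtual veth1
# net aggregate add veth1 interfaces eth1a eth1b mode lacp hash xor-L3L4
# net config veth1 192.168.10.5 netmask 255.255.255.0
# net enable veth1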
Configuring a VLAN
Create a new VLAN interface from either a physical interface or a virtual interface.
About this task
The recommended total VLAN count is 80. You can create up to 100 VLAN interfaces (100 minus
the number of alias, physical, and virtual interfaces) before the system prevents you from creating
any more.
Procedure
1. Select Hardware > Ethernet > Interfaces.
2. In the interfaces table, select the interface to which you want to add the VLAN.
The interface you select must be configured with an IP address before you can add a VLAN.
3. Click Create and select VLAN.
4. In the Create VLAN dialog box, specify a VLAN ID by entering a number in the VLAN Id box.
The range of a VLAN ID is between 1 and 4094 inclusive.
9. Click Next.
The Create VLAN summary page appears.
10. Review the configuration settings, click Finish, and click OK.
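A VLAN can typically also be created from the CLI. The following is a minimal sketch that assumes
physical interface eth0a and VLAN ID 25 (both illustrative), and that the resulting VLAN interface
is named eth0a.25; verify the syntax with the net create interface and net config commands in
the DD OS Command Reference Guide.
# net create interface eth0a vlan 25
# net config eth0a.25 192.168.25.10 netmask 255.255.255.0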
Configuring an IP alias
An IP alias assigns an additional IP address to a physical interface, a virtual interface, or a VLAN.
About this task
The recommended total number of IP aliases, VLAN, physical, and virtual interfaces that can exist
on the system is 80. Although up to 100 interfaces are supported, as the maximum number is
approached, you might notice slowness in the display.
Note: When using a Data Domain HA system, if a user is created and logs in to the standby
node without logging in to the active node first, the user will not have a default alias to use.
Therefore, in order to use aliases on the standby node, the user should log in to the active node
first.
Procedure
1. Select Hardware > Ethernet > Interfaces.
2. Click Create, and select IP Alias.
The Create IP Alias dialog appears.
7. Click Next.
The Create IP Alias summary page appears.
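An IP alias can also be created from the CLI. The following is only a sketch: the base interface
eth0a, alias ID 2, resulting alias name eth0a:2, and IP address are all illustrative, so verify the
syntax with the net create interface and net config commands in the DD OS Command Reference
Guide.
# net create interface eth0a alias 2
# net config eth0a:2 192.168.1.110 netmask 255.255.255.0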
Destroying an interface
You can use DD System Manager to destroy or delete virtual, VLAN, and IP alias interfaces.
About this task
When a virtual interface is destroyed, the system deletes the virtual interface, releases its bonded
physical interface, and deletes any VLANs or aliases attached to the virtual interface. When you
delete a VLAN interface, the OS deletes the VLAN and any IP alias interfaces that are created
under it. When you destroy an IP alias, the OS deletes only that alias interface.
Procedure
1. Select Hardware > Ethernet > Interfaces.
2. Click the box next to each interface you want to destroy (Virtual or VLAN or IP Alias).
3. Click Destroy.
4. Click OK to confirm.
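From the CLI, the corresponding operation is generally the net destroy command. The interface
names below (a VLAN interface, an IP alias, and a virtual interface) are illustrative.
# net destroy eth0a.25
# net destroy eth0a:2
# net destroy veth1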
Domain Name
The fully qualified domain name associated with the selected system.
Hosts Mapping
IP Address
IP address of the host to resolve.
Host Name
Hostnames associated with the IP address.
DNS List
DNS IP Address
Current DNS IP addresses associated with the selected system. An asterisk (*) indicates
that the IP addresses were assigned through DHCP.
d. Click OK.
The system displays progress messages as the changes are applied.
4. To obtain the host and domain names from a DHCP server, select Obtain Settings using
DHCP and click OK.
At least one interface must be configured to use DHCP.
Within each routing table, static routes can be added, but because source routing is used to get
packets to the table, the only static routes that will work are those that use the interface
associated with the source address of that table. Any other static route must be put into the main
table.
Other than the IPv4 source routing done to these other routing tables, Data Domain systems use
source-based routing for the main IPv4 and IPv6 routing tables. This means that outbound
network packets that match the subnet of multiple interfaces are routed only over the physical
interface whose IP address matches the source IP address of the packets, which is where the
packets originated.
For IPv6, set static routes when multiple interfaces contain the same IPv6 subnets, and the
connections are being made to IPv6 addresses within this subnet. Normally, static routes are not
needed with IPv4 addresses that share the same subnet, such as for backups. There are cases in
which static routes may be required to allow connections to work, such as connections from the
Data Domain system to remote systems.
Static routes can be added and deleted from individual routing tables by adding or deleting the
table from the route specification. This provides the rules to direct packets with specific source
addresses through specific route tables. If a static route is required for packets with those source
addresses, the routes must be added to the specific table where the IP address is routed.
Note: Routing for connections initiated from the Data Domain system, such as for replication,
depends on the source address used for interfaces on the same subnet. To force traffic for a
specific interface to a specific destination (even if that interface is on the same subnet as
other interfaces), configure a static routing entry between the two systems: this static routing
overrides source routing. This is not needed if the source address is IPv4 and has a default
gateway associated with it. In that case, the source routing is already handled via its own
routing table.
Item Description
Destination The destination host/network where the network traffic (data) is sent.
Genmask The netmask for the destination net. Set to 255.255.255.255 for a host
destination and 0.0.0.0 for the default route.
Metric The distance to the target (usually counted in hops). Not used by the
DD OS, but might be needed by routing daemons.
Item Description
MTU Maximum Transfer Unit (MTU) size for the physical (Ethernet)
interface.
Window Default window size for TCP connections over this route.
IRTT Initial RTT (Round Trip Time) used by the kernel to estimate the best
TCP protocol parameters without waiting on possibly slow answers.
5. Optionally, specify the gateway to use to connect to the destination network or host.
7. Click Finish.
8. After the process is completed, click OK.
The new route specification is listed in the Route Spec list.
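Static routes can also be managed with the route commands from the CLI. The following is only a
sketch: the destination network, netmask, and gateway are illustrative, and the route-spec format
shown here is assumed, so confirm it with the route add command in the DD OS Command
Reference Guide for your release.
# route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.254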
Results
The system passphrase is set and the Change Passphrase button replaces the Set Passphrase
button.
limited-admin
The limited-admin role can configure and monitor the Data Domain system with some
limitations. Users who are assigned this role cannot perform data deletion operations, edit the
registry, or enter bash or SE mode.
user
The user role enables users to monitor systems and change their own password. Users who
are assigned the user management role can view system status, but they cannot change the
system configuration.
backup-operator
A backup-operator role user can perform all tasks permitted for user role users, create
snapshots for MTrees, import, export, and move tapes between elements in a virtual tape
library, and copy tapes across pools.
A backup-operator role user can also add and delete SSH public keys for non-password-
required logins. (This function is used mostly for automated scripting.) This user can add,
delete, reset, and view CLI command aliases, synchronize modified files, and wait for
replication to complete on the destination system.
none
The none role is for DD Boost authentication and tenant-unit users only. A none role user can
log in to a Data Domain system and can change his or her password, but cannot monitor,
manage, or configure the primary system. When the primary system is partitioned into tenant
units, either the tenant-admin or the tenant-user role is used to define a user's role with
respect to a specific tenant unit. The tenant user is first assigned the none role to minimize
access to the primary system, and then either the tenant-admin or the tenant-user role is
appended to that user.
tenant-admin
A tenant-admin role can be appended to the other (non-tenant) roles when the Secure Multi-
Tenancy (SMT) feature is enabled. A tenant-admin user can configure and monitor a specific
tenant unit.
tenant-user
A tenant-user role can be appended to the other (non-tenant) roles when the SMT feature is
enabled. The tenant-user role enables a user to monitor a specific tenant unit and change the
user password. Users who are assigned the tenant-user management role can view tenant unit
status, but they cannot change the tenant unit configuration.
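A management role can also be assigned when a user is created from the CLI. The following is a
minimal sketch with an illustrative username; see the user add command in the DD OS Command
Reference Guide for the full option list.
# user add jsmith role backup-operator
If a password is not supplied on the command line, the system typically prompts for one.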
Item Description
Enabled (Yes/No) The status of the service. If the service is disabled, enable it by
selecting it in the list and clicking Configure. Fill out the General
tab of the dialog box. If the service is enabled, modify its settings
by selecting it in the list and clicking Configure. Edit the settings in
the General tab of the dialog box.
Allowed Hosts The host or hosts that can access the service.
Service Options The port or session timeout value for the service selected in the
list.
HTTP port The port number opened for the HTTP protocol (port 80, by
default).
HTTPS port The port number opened for the HTTPS protocol (port 443, by
default).
SSH/SCP port The port number opened for the SSH/SCP protocol (port 22, by
default).
Session Timeout The amount of inactive time allowed before a connection closes.
The default is Infinite, that is, the connection does not close. If
possible, set a session timeout maximum of five minutes. Use the
Advanced tab of the dialog box to set a timeout in seconds.
4. To set a session timeout, select the Advanced tab, and enter the timeout value in seconds.
Note: The session timeout default is Infinite, that is, the connection does not close.
5. Click OK.
If FTPS is enabled, a warning message appears with a prompt to click OK to proceed.
l To add a host, click Add (+). Enter the host identification and click OK.
l To modify a host ID, select the host in the Hosts list and click Edit (pencil). Change
the host ID and click OK.
l To remove a host ID, select the host in the Hosts list and click Delete (X).
4. To set a session timeout, select the Advanced tab and enter the timeout value in seconds.
Note: The session timeout default is Infinite, that is, the connection does not close.
5. Click OK. If FTP is enabled, a warning message appears and prompts you to click OK to
proceed.
4. To configure system ports and session timeout values, select the Advanced tab, and
complete the form.
l In the HTTP Port box, enter the port number. Port 80 is assigned by default.
l In the HTTPS Port box, enter the number. Port 443 is assigned by default.
l In the Session Timeout box, enter the interval in seconds that must elapse before a
connection closes. The minimum is 60 seconds and the maximum is 31536000 seconds
(one year).
5. Click OK.
4. To configure system ports and session timeout values, click the Advanced tab.
l In the SSH/SCP Port text entry box, enter the port number. Port 22 is assigned by
default.
l In the Session Timeout box, enter the interval in seconds that must elapse before
connection closes.
Note: The session timeout default is Infinite, that is, the connection does not close.
5. Click OK.
4. To set a session timeout, select the Advanced tab and enter the timeout value in seconds.
Note: The session timeout default is Infinite, that is, the connection does not close.
5. Click OK.
Item Description
Last Login From The location where the user last logged in.
Last Login Time The time the user last logged in.
Note: User accounts configured with the admin or security officer roles can view all
users. Users with other roles can view only their own user accounts.
2. Select the user you want to view from the list of users.
Information about the selected user displays in the Detailed Information area.
Item Description
Password Last Changed The date the password was last changed.
Minimum Days Between Change The minimum number of days between password changes that
you allow a user. Default is 0.
Maximum Days Between Change The maximum number of days between password changes that
you allow a user. Default is 90.
Warn Days Before Expire The number of days to warn the users before their password expires.
Default is 7.
Disable Days After Expire The number of days after a password expires to disable the user
account. Default is Never.
Note: The default values are the initial default password policy values. A system
administrator (admin role) can change them by selecting More Tasks > Change Login
Options.
Item Description
Password The user password. Set a default password, and the user can
change it later.
Management Role The role assigned to the user, which can be admin, user, security,
backup-operator, or none.
Note: Only the sysadmin user (the default user created during
the DD OS installation) can create the first security-role user.
After the first security-role user is created, only security-role
users can create other security-role users.
Force Password Change Select this checkbox to require that the user change the password
during the first login when logging in to DD System Manager or to
the CLI with SSH or Telnet.
The default value for the minimum length of a password is 6 characters. The default value
for the minimum number of character classes required for a user password is 1. Allowable
character classes include:
l Lowercase letters (a-z)
4. To manage password and account expiration, select the Advanced tab and use the controls
described in the following table.
Item Description
Minimum Days Between Change The minimum number of days between password changes that
you allow a user. Default is 0.
Maximum Days Between Change The maximum number of days between password changes that
you allow a user. Default is 90.
Warn Days Before Expire The number of days to warn the users before their password expires.
Default is 7.
Disable Days After Expire The number of days after a password expires to disable the user
account. Default is Never.
Disable account on the following date Check this box and enter a date (mm/dd/yyyy) when you
want to disable this account. Also, you can click the calendar to select a date.
5. Click OK.
Note: The default password policy values can change if an admin-role user changes them
(More Tasks > Change Login Options). The values shown are the initial default password
policy values.
Item Description
Minimum Days Between Change The minimum number of days between password changes that
you allow a user. Default is 0.
Maximum Days Between Change The maximum number of days between password changes that
you allow a user. Default is 90.
Warn Days Before Expire The number of days to warn the users before their password expires.
Default is 7.
Disable Days After Expire The number of days after a password expires to disable the user
account. Default is Never.
6. Click OK.
3. Specify the new configuration in the boxes for each option. To select the default value, click
Default next to the appropriate option.
4. Click OK to save the password settings.
Item Description
Minimum Days Between Change The minimum number of days between password changes that
you allow a user. This value must be less than the Maximum Days Between Change value minus
the Warn Days Before Expire value. The default setting is 0.
Maximum Days Between Change The maximum number of days between password changes that
you allow a user. The minimum value is 1. The default value is 90.
Warn Days Before Expire The number of days to warn the users before their password expires.
This value must be less than the Maximum Days Between Change value minus the Minimum Days
Between Change value. The default setting is 7.
Disable Days After Expire The system disables a user account after password expiration
according to the number of days specified with this option. Valid entries are never or a number
greater than or equal to 0. The default setting is never.
Minimum Number of Character Classes The minimum number of character classes required for a
user password. Default is 1. Character classes include:
l Lowercase letters (a-z)
l Uppercase letters (A-Z)
l Numbers (0-9)
l Special Characters ($, %, #, +, and so on)
Lowercase Character Requirement Enable or disable the requirement for at least one lowercase
character. The default setting is disabled.
Uppercase Character Requirement Enable or disable the requirement for at least one uppercase
character. The default setting is disabled.
One Digit Requirement Enable or disable the requirement for at least one numerical character.
The default setting is disabled.
Special Character Requirement Enable or disable the requirement for at least one special
character. The default setting is disabled.
Max Consecutive Character Requirement Enable or disable the requirement for a maximum of
three repeated characters. The default setting is disabled.
Maximum login attempts Specifies the maximum number of login attempts before a mandatory
lock is applied to a user account. This limit applies to all user accounts, including sysadmin. A
locked user cannot log in while the account is locked. The range is 4 to 10, and the default value
is 4.
Unlock timeout (seconds) Specifies how long a user account is locked after the maximum
number of login attempts. When the configured unlock timeout is reached, a user can attempt
login. The range is 120 to 600 seconds, and the default period is 120 seconds.
Maximum active logins Specifies the maximum number of active logins to allow. The default
value is 100.
Item Description
Organizational Unit The name of the organizational unit for the Workgroup or Active
Directory.
CIFS Server Name The name of the CIFS server in use (Windows mode only).
WINS Server The name of the WINS server in use (Windows mode only).
Key Distribution Centers Hostname(s) or IP address(es) of the KDCs in use (UNIX mode only).
Item Description
Management Role The role of the group (admin, user, and so on)
Note: Use the complete realm name. Ensure that the user is assigned sufficient
privileges to join the system to the domain. The user name and password must be
compatible with Microsoft requirements for the Active Directory domain. This user must
also be assigned permission to create accounts in this domain.
6. Select the default CIFS server name, or select Manual and enter a CIFS server name.
7. To select domain controllers, select Automatically assign, or select Manual and enter up to
three domain controller names.
You can enter fully qualified domain names, hostnames, or IP (IPv4 or IPv6) addresses.
8. To select an organizational unit, select Use default Computers, or select Manual and enter
an organizational unit name.
Note: The account is moved to the new organizational unit.
9. Click Next.
The Summary page for the configuration appears.
10. Click Finish.
The system displays the configuration information in the Authentication view.
11. To enable administrative access, click Enable to the right of Active Directory
Administrative Access.
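Joining an Active Directory domain is also possible from the CLI with the cifs set authentication
command. The realm below is illustrative and the full argument list varies by release, so treat this
only as a sketch and confirm it against the cifs commands in the DD OS Command Reference
Guide. When prompted, supply a domain account with sufficient privileges to join the domain.
# cifs set authentication active-directory corp.example.com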
Procedure
1. Click Create....
2. Enter the domain and group name separated by a backslash. For example: domainname
\groupname.
3. Select the management role for the group from the drop-down menu.
4. Click OK.
4. For Workgroup Name, select Manual and enter a workgroup name to join, or use the
default.
The Workgroup mode joins a Data Domain system to a workgroup domain.
5. For CIFS Server Name, select Manual and enter a server name (the DDR), or use the
default.
6. Click OK.
Item Description
Management Role The role of the group (admin, user, and so on).
4. Click OK.
10. If necessary at a later time, click Reset to return the LDAP configuration to its default
values.
5. Click OK.
2. Remove one or more LDAP servers by using the authentication ldap servers del
command:
# authentication ldap servers del 10.X.Y.Z:400
LDAP server(s) deleted.
LDAP Servers: 1
# Server
- ------------ ---------
1 10.A.B.C (primary)
- ------------ ---------
3. Remove all LDAP servers by using the authentication ldap servers reset
command:
# authentication ldap servers reset
LDAP server list reset to empty.
2. Reset the LDAP base suffix by using the authentication ldap base reset
command:
# authentication ldap base reset
LDAP base-suffix reset to empty.
If binddn is set using client-auth CLI, but bindpw is not provided, unauthenticated access is
requested.
# authentication ldap client-auth set binddn "cn=Manager,dc=u2,dc=team"
Enter bindpw:
** Bindpw is not provided. Unauthenticated access would be requested.
LDAP client authentication binddn set to "cn=Manager,dc=u2,dc=team".
Procedure
1. Set the Bind DN and password by using the authentication ldap client-auth set
binddn command:
# authentication ldap client-auth set binddn
"cn=Administrator,cn=Users,dc=anvil,dc=team"
Enter bindpw:
LDAP client authentication binddn set to
"cn=Administrator,cn=Users,dc=anvil,dc=team".
2. Reset the Bind DN and password by using the authentication ldap client-auth
reset command:
Enable LDAP
Before you begin
An LDAP configuration must exist before enabling LDAP. Additionally, you must disable NIS,
ensure that the LDAP server is reachable, and be able to query the root DSE of the LDAP server.
Procedure
1. Enable LDAP by using the authentication ldap enable command:
# authentication ldap enable
The details of the LDAP configuration are displayed for you to confirm before continuing. To
continue, type yes and restart the file system for LDAP configuration to take effect.
2. View the current LDAP configuration by using the authentication ldap show
command:
# authentication ldap show
LDAP configuration
Enabled: no
Base-suffix: dc=anvil,dc=team
Binddn: cn=Administrator,cn=Users,dc=anvil,dc=team
Server(s): 2
# Server
- ---------------- ---------
1 10.26.16.250 (primary)
2 10.26.16.251:400
- ---------------- ---------
The LDAP status is displayed. If the LDAP status is not good, the problem is identified in the
output. For example:
# authentication ldap status
Status: invalid credentials
or
# authentication ldap status
Status: invalid DN syntax
4. Disable LDAP by using the authentication ldap disable command:
# authentication ldap disable
LDAP is disabled.
If tls_reqcert is set to never, an LDAP CA certificate is not required. For more information,
see Configure LDAP server certificate verification with imported CA certificates on page 119.
Procedure
1. Enable SSL by using the authentication ldap ssl enable command:
# authentication ldap ssl enable
Secure LDAP is enabled with ‘ldaps’ method.
The default method is secure LDAP, or ldaps. You can specify other methods, such as TLS:
# authentication ldap ssl enable method start_tls
Secure LDAP is enabled with ‘start_tls’ method.
2. Reset the TLS request certificate behavior by using the authentication ldap ssl
reset tls_reqcert command. The default behavior is demand:
# authentication ldap ssl reset tls_reqcert
tls_reqcert has been set to "demand". LDAP Server certificate will be
verified with imported CA certificate. Use "adminaccess" CLI to import the
CA certificate.
2. Delete a CA certificate for LDAP server certificate verification by using the adminaccess
certificate delete command.
Specify ldap for application:
3. Show current CA certificate information for LDAP server certificate verification by using the
adminaccess certificate show command:
# adminaccess certificate show imported-ca ldap
Item Description
Management Role The role of the group (admin, user, and so on).
4. Click OK.
l To add an authentication server, click Add (+) in the server table, enter the server name,
and click OK.
l To modify an authentication server, select the authentication server name and click the
edit icon (pencil). Change the server name, and click OK.
l To remove an authentication server name, select a server, click the X icon, and click OK.
4. Click OK.
l To modify an NIS group, select the checkbox of the NIS group name in the NIS group list
and click Edit (pencil). Change the NIS group name, and click OK.
l To remove an NIS group name, select the NIS group in the list and click Delete X.
4. Click OK.
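NIS can also be configured from the CLI. The following is a minimal sketch with an illustrative NIS
domain and server address; verify the syntax with the authentication nis commands in the DD OS
Command Reference Guide.
# authentication nis domain set nisdomain.example.com
# authentication nis servers add 10.26.16.100
# authentication nis enable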
Results
Item Description
User Group The name of a user group configured to allow SSO provider users
to access the Data Domain system.
Note: At least one user group is required to use SSO.
Management Role The level of management privileges associated with a user group.
3. Refresh the Single Sign-On (SSO) panel in DD SM to confirm that the Data Domain system
is registered with DPC.
4. Click OK.
To diagnose issues joining the Data Domain system to an Active Directory domain, provide:
l Active Directory server IP address
l Active Directory server FQDN
l Active Directory service username
l Active Directory service password
6. Click Diagnose.
7. View the report.
l Click View Report to view the report online. Each item in the Action Items table can be
clicked for additional details.
l Click Download to download a copy of the report.
8. Review and implement the suggested fixes for the issue, and retry the operation.
8. Click OK.
If a security policy is configured, the system prompts for security officer credentials.
Provide the credentials and click OK.
3. Specify the name of the mail server in the Mail Server field.
4. Use the Credentials button to enable or disable the use of credentials for the mail server.
5. If credentials are enabled, specify the mail server username in the User Name field.
6. If credentials are enabled, specify the mail server password in the Password field.
7. Click Set.
8. Optionally use the CLI to verify and troubleshoot the mail server configuration.
a. Run the config show mailserver command to verify the mail server is configured.
b. Run the net ping <mailserver-hostname> count 4 command to ping the mail server.
c. If the mail server is not configured correctly, run the config set mailserver
<mailserver-hostname> command to set the mail server, and attempt to ping it again.
d. Run the net show dns command to verify the DNS server is configured.
e. Run the net ping <DNS-hostname> count 4 command to ping the DNS server.
f. If the DNS server is not configured correctly, run the config set dns <dns-IP>
command to set the DNS server, and attempt to ping it again.
g. Optionally run the net hosts add <IP-address> <hostname> command to add the
mail server IP address and hostname to the Data Domain hosts file for local resolving.
h. Run the net ping <mailserver-hostname> count 4 command to ping the mail server.
2. To change the configuration, select More Tasks > Configure Time Settings.
The Configure Time Settings dialog appears.
3. In the Time Zone dropdown list, select the time zone where the Data Domain system
resides.
4. To manually set the time and date, select None, type the date in the Date box, and select
the time in the Time dropdown lists.
5. To use NTP to synchronize the time, select NTP and set how the NTP server is accessed.
l To use DHCP to automatically select a server, select Obtain NTP Servers using DHCP.
l To configure an NTP server IP address, select Manually Configure, add the IP address
of the server, and click OK.
Note: Using time synchronization from an Active Directory domain controller might
cause excessive time changes on the system if both NTP and the domain controller are
modifying the time.
6. Click OK.
7. If you changed the time zone, you must reboot the system.
a. Select Maintenance > System.
b. From the More Tasks menu, select Reboot System.
c. Click OK to confirm.
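The same time settings are available from the CLI. The following is a minimal sketch with an
illustrative time zone and NTP server address; verify the syntax with the config and ntp
commands in the DD OS Command Reference Guide.
# config set timezone US/Pacific
# ntp add timeserver 10.1.1.50
# ntp enable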
2. To change the configuration, select More Tasks > Set System Properties.
The Set System Properties dialog box appears.
3. In the Location box, enter information about where the Data Domain system is located.
4. In the Admin Email box, enter the email address of the system administrator.
5. In the Admin Host box, enter the name of the administration server.
6. Click OK.
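These system properties can also be set from the CLI, as in the following sketch. The values are
illustrative; config set admin-email appears elsewhere in this guide, and the location and admin-
host options follow the same config set pattern, so verify them in the DD OS Command
Reference Guide.
# config set location "Building 12, lab 3"
# config set admin-email [email protected]
# config set admin-host adminhost1.yourcompany.com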
SNMP management
The Simple Network Management Protocol (SNMP) is a standard protocol for exchanging
network management information, and is a part of the Transmission Control Protocol/Internet
Protocol (TCP/IP) protocol suite. SNMP provides a tool for network administrators to manage and
monitor network-attached devices, such as Data Domain systems, for conditions that warrant
administrator attention.
To monitor Data Domain systems using SNMP, you will need to install the Data Domain MIB in your
SNMP Management system. DD OS also supports the standard MIB-II so you can also query MIB-II
statistics for general data such as network statistics. For full coverage of available data you should
utilize both the Data Domain MIB and the standard MIB-II MIB.
The Data Domain system SNMP agent accepts queries for Data Domain-specific information from
management systems using SNMP v1, v2c, and v3. SNMP V3 provides a greater degree of security
than v2c and v1 by replacing cleartext community strings (used for authentication) with user-
based authentication using either MD5 or SHA1. Also, SNMP v3 user authentication packets can
be encrypted and their integrity verified with either DES or AES.
Data Domain systems can send SNMP traps (which are alert messages) using SNMP v2c and
SNMP v3. Because SNMP v1 traps are not supported, if possible, use SNMP v2c or v3.
The default port that is open when SNMP is enabled is port 161. Traps are sent out through port
162.
l The Data Domain Operating System Initial Configuration Guide describes how to set up the Data
Domain system to use SNMP monitoring.
l The Data Domain Operating System MIB Quick Reference describes the full set of MIB
parameters included in the Data Domain MIB branch.
Item Description
SNMP System Location The location of the Data Domain system being monitored.
SNMP System Contact The person designated as the contact for Data Domain system
administration.
SNMP Engine ID A unique hexadecimal identifier for the Data Domain system.
SNMP V3 Configuration
Item Description
Name The name of the user on the SNMP manager with access to the
agent for the Data Domain system.
Access The access permissions for the SNMP user, which can be Read-
only or Read-write.
Authentication Protocols The Authentication Protocol used to validate the SNMP user,
which can be MD5, SHA1, or None.
Privacy Protocol The encryption protocol used during the SNMP user
authentication, which can be AES, DES, or None.
Item Description
Port The port used for SNMP trap communication with the host. For
example, 162 is the default.
User The user on the trap host authenticated to access the Data
Domain SNMP information.
Item Description
Port The port used for SNMP trap communication with the host. For
example, 162 is the default.
4. Click Browse and select a browser to view the MIB in a browser window.
Note: If using the Microsoft Internet Explorer browser, enable Automatic prompting for
file download.
4. Click OK.
3. In the Name text field, enter the name of the user for whom you want to grant access to the
Data Domain system agent. The name must be a minimum of eight characters.
4. Select either read-only or read-write access for this user.
5. To authenticate the user, select Authentication.
a. Select either the MD5 or the SHA1 protocol.
b. Enter the authentication key in the Key text field.
c. To provide encryption to the authentication session, select Privacy.
Note: If the Delete button is disabled, the selected user is being used by one or more
trap hosts. Delete the trap hosts and then delete the user.
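An SNMP v3 user can generally also be created from the CLI. The following is only a sketch: the
username and key placeholder are illustrative and the option names are assumed, so confirm
them with the snmp user add command in the DD OS Command Reference Guide.
# snmp user add snmpuser1 access read-only authentication-protocol SHA1 authentication-key <auth-key>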
3. In the Community box, enter the name of a community for whom you want to grant access
to the Data Domain system agent.
4. Select either read-only or read-write access for this community.
5. If you want to associate the community to one or more hosts, add the hosts as follows:
a. Click + to add a host.
The Host dialog box appears.
b. In the Host text field, enter the IP address or domain name of the host.
c. Click OK.
The Host is added to the host list.
6. Click OK.
The new community entry appears in the Communities table and lists the selected hosts.
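A read-only community with associated hosts can typically be added from the CLI as well. The
community string and host below are illustrative; see the snmp add ro-community command in
the DD OS Command Reference Guide.
# snmp add ro-community public5 hosts 10.26.16.5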
3. To change the access mode for this community, select either read-only or read-write
access.
Note: The Access buttons for the selected community are disabled when a trap host on
the same system is configured as part of that community. To modify the access setting,
delete the trap host and add it back after the community is modified.
a. Select the checkbox for each host or click the Host check box in the table head to select
all listed hosts.
b. Click the delete button (X).
6. To edit a host name, do the following:
a. Select the checkbox for the host.
b. Click the edit button (pencil icon).
c. Edit the host name.
d. Click OK.
7. Click OK.
The modified community entry appears in the Communities table.
Note: If the Delete button is disabled, the selected community is being used by one or
more trap hosts. Delete the trap hosts and then delete the community.
3. In the Host box, enter the IP address or domain name of the SNMP Host to receive traps.
4. In the Port box, enter the port number for sending traps (port 162 is a common port).
5. Select the user (SNMP V3) or the community (SNMP V2C) from the drop-down menu.
Note: The Community list displays only those communities to which the trap host is
already assigned.
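A trap host can also be added from the CLI. The following is only a sketch: the hostname, port,
version, and community are illustrative, and the exact option names are assumed, so verify them
with the snmp add trap-host command in the DD OS Command Reference Guide.
# snmp add trap-host snmphost.yourcompany.com:162 version v2c community public5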
3. To modify the port number, enter a new port number in the Port box (port 162 is a common
port).
4. Select the user (SNMP V3) or the community (SNMP V2C) from the drop-down menu.
Note: The Community list displays only those communities to which the trap host is
already assigned.
from both the nodes will be needed to debug issues related to HA system status (filesystem,
replication, protocols, and HA configuration).
CLI equivalent
2. Click the file name link to view the report using a text editor. If doing so is required by your
browser, download the file first.
Verifying the Data Domain is able to send ASUP and alert emails to external
recipients
Confirm that external email recipients can receive the autosupport (ASUP) and alert emails you
send from your Data Domain device.
About this task
Verify autosupport (ASUP) is getting relayed by the exchange server.
Procedure
1. Check if ASUPs can be sent to a local email address, an email address on the same Mail
Server.
# autosupport send [internal-email-addr]
2. Check if ASUPs can be sent to an email address outside the local mail server.
# autosupport send [external email-addr]
3. If the email does not get to the external email address on the mail server, you may receive
an error such as:
**** Unable to send message: (errno 51: Unrecoverable errors from server--
giving up)
In this case, it is likely that forwarding will need to be enabled for the Data Domain System
on the local mail server by using the steps outlined in the KB article Configure Email Relay on
MS Exchange, available at https://support.emc.com/kb/181900.
4. If the ASUP can be sent to an external email address, but is not getting to the Data Domain,
there may be an issue with the firewall configuration or spam filters.
5. If ASUP alerts are getting to the Data Domain, but they are not causing a case to be
created, it may be due to invalid characters in the subject or body of the alert email. To
verify,
a. Look in a current autosupport and check the HOSTNAME, SYSTEM_ID, and LOCATION
values for single quotes or apostrophes. These are invalid characters and must be removed
in DD OS versions 4.9.2.0 and earlier.
Example:
MODEL_NO=DD510
HOSTNAME=system.datadomain.com
b. Remove any invalid characters from the system HOSTNAME and/or LOCATION. The
commands are
net set hostname <host>
config set location <location>
c. Test the new setting by simulating an alert. The easiest way is to manually fail a spare
disk drive, verify the alert sent, and immediately unfail the same drive to return it to
spare state.
2. Click the file name link and select a gz/tar decompression tool to view the ASCII contents of
the bundle.
Coredump management
When DD OS crashes due to a coredump, a core file describing the problem is created in the /
ddvar/core directory. This file may be large, and difficult to copy off the Data Domain system.
If the core file cannot be copied off the Data Domain system because it is too large, run the
support coredump split <filename> by <n> {MiB|GiB} command, where:
l <filename> is the name of the core file in the /ddvar/core directory
l <n> is the size of each chunk, specified in MiB or GiB
Note: A single core file can be broken down into a maximum of 20 chunks. The command
will fail with an error if the specified size would result in more than 20 chunks.
For example, splitting a 42.1 MB core file named cpmdb.core.19297.1517443767 into 10 MB
chunks would result in five chunks.
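Following the syntax above, that example split could be run as shown below; the resulting chunks
can then be copied off the system individually.
# support coredump split cpmdb.core.19297.1517443767 by 10 MiB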
Run the support coredump save <file-list> command to save specified coredump files to a
USB drive.
During a failover, local historical alerts stay with the node from which they were generated;
however, the historical alerts for the filesystem, replication, and protocols (generally called "logical
alerts") fail over together with the filesystem.
Note: The Health > High Availability panel displays only alerts that are HA-related. Those
alerts can be filtered by major HA component, such as HA Manager, Node, Interconnect,
Storage, and SAS connection.
2. To limit (filter) the entries in the Group Name list, type a group name in the Group Name
box or a subscriber email in the Alert Email box, and click Update.
Note: Click Reset to display all configured groups.
3. To display detailed information for a group, select the group in the Group Name list.
Notification tab
The Notification tab allows you to configure groups of email address that receive system alerts for
the alert types and severity levels you select.
Item Description
Classes The number of alert classes that are reported to the group.
Item Description
Class A service or subsystem that can forward alerts. The listed classes
are those for which the notification group receives alerts.
Severity The severity level that triggers an email to the notification group. All
alerts at the specified severity level and above are sent to the
notification group.
Subscribers The subscribers area displays a list of all email addresses configured
for the notification group.
Control Description
Class Attributes Configure button Click this Configure button to change the
classes and severity levels that generate
alerts for the selected notification group.
Filter By: Alert Email box Enter text in this box to limit the group name
list entries to groups that include an email
address that contains the specified text.
Filter By: Group Name box Enter text in this box to limit the group name
list entries to group names that contain the
specified text.
c. To change the severity level for a class attribute, select a level from the corresponding
list box.
d. Click OK.
CLI equivalent
5. Click OK.
b. Use the list boxes to select the hour, minute, and either AM or PM for the summary
report.
c. Click OK.
CLI equivalent
c. Click Finish.
Item Description
Delivery Time The delivery time shows the configured time for daily emails.
Email List This list displays the email addresses of those who receive the daily
emails.
4. In the Notification Groups list, select groups to receive the test email and click Next.
5. Optionally, add additional email addresses to receive the email.
6. Click Send Now and OK.
CLI equivalent
7. If you disabled sending of the test alert to Data Domain and you want to enable this feature
now, do the following.
a. Select Maintenance > Support > Autosupport.
b. In the Alert Support area, click Enable .
Results
To test newly added alerts emails for mailer problems, enter: autosupport test email email-
addr
For example, after adding the email address [email protected] to the list, check the
address with the command: autosupport test email [email protected]
CLI equivalent
Procedure
1. To set up the administrator email, enter:
# config set admin-email [email protected]
The Admin Email is: [email protected]
2. To register the system to the ESRS-gateway (Secure Remote Services), enter:
Log files are rotated weekly. Every Sunday at 0:45 a.m., the system automatically opens new log
files for the existing logs and renames the previous files with appended numbers. For example,
after the first week of operation, the previous week messages file is renamed messages.1, and
new messages are stored in a new messages file. Each numbered file is rolled to the next number
each week. For example, after the second week, the file messages.1 is rolled to messages.2. If
a messages.2 file already existed, it rolls to messages.3. At the end of the retention period
(shown in the table below), the expired log is deleted. For example, an existing messages.9 file is
deleted when messages.8 rolls to messages.9.
The audit.log does not rotate on a weekly basis. Instead, it rotates when the file reaches 70 MB
in size.
Except as noted in this topic, the log files are stored in /ddvar/log.
Note: Files in the /ddvar directory can be deleted using Linux commands if the Linux user is
assigned write permission for that directory.
The set of log files on each system is determined by the features configured on the system and the
events that occur. The following table describes the log files that the system can generate.
cifs.log Log messages from the CIFS subsystem are logged only in debug/cifs/cifs.log. Size
limit of 50 MiB. Retention: 10 weeks.
space.log Messages about disk space usage by system components, and messages from the
clean process. A space use message is generated every hour. Each time the clean process runs, it
creates approximately 100 messages. All messages are in comma-separated-value format with
tags you can use to separate the disk space messages from the clean process messages. You can
use third-party software to analyze either set of messages. Retention: a single file is kept
permanently; there is no log file rotation for this log. The log file uses the following tags.
l CLEAN for data lines from clean operations.
l CLEAN_HEADER for lines that contain headers for the clean operations data lines.
l SPACE for disk space data lines.
l SPACE_HEADER for lines that contain headers for the disk space data lines.
2. Click a log file name to view its contents. You may be prompted to select an application,
such as Notepad.exe, to open the file.
The display of the messages file is similar to the following. The last message in the
example is an hourly system status message that the Data Domain system generates
automatically. The message reports system uptime, the amount of data stored, NFS
operations, and the amount of disk space used for data storage (%). The hourly
messages go to the system log and to the serial console if one is attached.
# log view
Jun 27 12:11:33 localhost rpc.mountd: authenticated unmount
request from perfsun-g.emc.com:668 for /ddr/col1/segfs (/ddr/
col1/segfs)
Severity levels, in descending order, are: Emergency, Alert, Critical, Error, Warning, Notice, Info,
Debug.
Procedure
1. Go to the Online Support website at https://support.emc.com, enter Error Message
Catalog in the search box, and click the search button.
2. In the results list, locate the catalog for your system and click on the link.
3. Use your browser search tool to search for a unique text string in the message.
The error message description looks similar to the following display.
Note: Some web browsers do not automatically ask for a login if a machine does not
accept anonymous logins. In that case, add a user name and password to the FTP line.
For example: ftp://sysadmin:your-pw@Data-Domain-system-name.yourcompany.com/
5. At the login pop-up, log into the Data Domain system as user sysadmin.
6. On the Data Domain system, you are in the directory just above the log directory. Open the
log directory to list the messages files.
7. Copy the file that you want to save. Right-click the file icon and select Copy To Folder from
the menu. Choose a location for the file copy.
8. If you want the FTP service disabled on the Data Domain system, after completing the file
copy, use SSH to log into the Data Domain system as sysadmin and invoke the command
adminaccess disable ftp.
The following command adds the system named log-server to the hosts that receive
log messages.
The following command removes the system named log-server from the hosts that
receive log messages.
The following command disables the sending of logs and clears the list of destination
hostnames.
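The following is a sketch of the corresponding log host commands, using the illustrative hostname
log-server; verify the syntax with the log host commands in the DD OS Command Reference
Guide.
# log host add log-server
# log host del log-server
# log host reset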
of the port name with the ports in the network interface list. If the rest of the IPMI port name
matches an interface in the network interface list, the port is a shared port. If the rest of the IPMI
port name is different from the names in the network interface list, the port is a dedicated IPMI
port.
Note: DD4200, DD4500, and DD7200 systems are an exception to the naming rule described
earlier. On these systems, IPMI port, bmc0a, corresponds to shared port ethMa in the network
interface list. If possible, reserve the shared port ethMa for IPMI traffic and system
management traffic (using protocols such as HTTP, Telnet, and SSH). Backup data traffic
should be directed to other ports.
When IPMI and non-IPMI IP traffic share an Ethernet port, if possible, do not use the link
aggregation feature on the shared interface because link state changes can interfere with IPMI
connectivity.
Procedure
1. Select Maintenance > IPMI.
The IPMI Configuration area shows the IPMI configuration for the managed system. The
Network Ports table lists the ports on which IPMI can be enabled and configured. The IPMI
Users table lists the IPMI users who can access the managed system.
Item Description
Port The logical name for a port that supports IPMI communications.
DHCP Whether the port uses DHCP to set its IP address (Yes or No).
Item Description
User Name The name of a user with authority to power manage the remote
system.
5. Enable a disabled IPMI network port by selecting the network port in the Network Ports
table, and clicking Enable.
6. Disable an enabled IPMI network port by selecting the network port in the Network Ports
table, and clicking Disable.
7. Click Apply.
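IPMI port settings can usually also be managed from the CLI. The following is only a sketch: the
port name bmc0a and the option shown are assumptions, so confirm them with the ipmi
commands in the DD OS Command Reference Guide.
# ipmi config bmc0a dhcp
# ipmi show config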
Preparing for remote power management and console monitoring with the CLI
Remote console monitoring uses the Serial Over Lan (SOL) feature to enable viewing of text-
based console output without a serial server. You must use the CLI to set up a system for remote
power management and console monitoring.
About this task
Remote console monitoring is typically used in combination with the ipmi remote power
cycle command to view the remote system’s boot sequence. This procedure should be used on
every system for which you might want to remotely view the console during the boot sequence.
Procedure
1. Connect the console to the system directly or remotely.
l Use the following connectors for a direct connection.
n DIN-type connectors for a PS/2 keyboard
n USB-A receptacle port for a USB keyboard
n DB15 female connector for a VGA monitor
Note: Systems DD4200, DD4500, and DD7200 do not support direct connection,
including KVM.
l For a serial connection, use a standard DB9 male or micro-DB9 female connector.
Systems DD4200, DD4500, and DD7200 provide a female micro-DB9 connector. A null
modem cable with male micro-DB9 and standard female DB9 connectors is included for a
typical laptop connection.
l For a remote IPMI/SOL connection, use the appropriate RJ45 receptacle as follows.
n For DD990 systems, use default port eth0d.
n For other systems, use the maintenance or service port. For port locations, refer to
the system documentation, such as a hardware overview or installation and setup
guide.
6. If this is the first time using IPMI, run ipmi user reset to clear IPMI users that may be
out of synch between two ports, and to disable default users.
7. To add a new IPMI user, enter ipmi user add user.
3. Enter the remote system IPMI IP address or hostname and the IPMI username and
password, then click Connect.
4. View the IPMI status.
The IPMI Power Management dialog box appears and shows the target system identification
and the current power status. The Status area always shows the current status.
Note: The Refresh icon (the blue arrows) next to the status can be used to refresh the
configuration status (for example, if the IPMI IP address or user configuration were
changed within the last 15 minutes using the CLI commands).
4. To disconnect from a remote console monitoring session and return to the command line,
enter the at symbol (@).
5. To terminate remote console monitoring, enter the tilde symbol (~).
2. To view the system uptime and identity information, select Maintenance > System.
The system uptime and identification information appears in the System area.
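The same information is available from the CLI, for example (see the system show commands in
the DD OS Command Reference Guide):
# system show uptime
# system show serialno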
Column Description
Most recent alerts The text of the most recent alert for the
subsystem type specified in the adjacent
column
Column Description
Data Written: Post-compression The data quantity stored on the system after
compression.
Column Description
Left column The left column lists the services that may be
used on the system. These services can include
replication, DD VTL, CIFS, NFS, DD Boost,
vDisk.
Label Description
System Serial No. The system serial number is the serial number
assigned to the system. On newer systems,
such as DD4500 and DD7200, the system
serial number is independent of the chassis
serial number and remains the same during
many types of maintenance events, including
chassis replacements. On legacy systems,
such as DD990 and earlier, the system serial
number is set to the chassis serial number.
Label Description
Chassis Serial No. The chassis serial number is the serial number
on the current system chassis.
3. To display additional information for a specific alert in the Details area, click the alert in the
list.
4. To clear an alert, select the alert checkbox in the list and click Clear.
A cleared alert no longer appears in the current alerts list, but it can be found in the alerts
history list.
5. To remove filtering and return to the full listing of current alerts, click Reset.
Item Description
Severity The level of seriousness of the alert. For example, warning, critical,
info, or emergency.
Item Description
Severity The level of seriousness of the alert. For example, warning, critical,
info, emergency.
3. To display additional information for a specific alert in the Details area, click the alert in the
list.
4. To remove filtering and return to the full listing of cleared alerts, click Reset.
Item Description
Severity The level of seriousness of the alert. For example, warning, critical,
info, or emergency.
Item Description
Severity The level of seriousness of the alert. For example, warning, critical,
info, or emergency.
serial number and remains the same during many types of maintenance events, including chassis
replacements. On legacy systems, such as DD990 and earlier, the system serial number is set to
the chassis serial number.
Procedure
1. Select Hardware > Chassis.
The Chassis view shows the system enclosures. Enclosure 1 is the system controller, and the
rest of the enclosures appear below Enclosure 1.
Components with problems show yellow (warning) or red (error); otherwise, the component
displays OK.
Fan status
Fans are numbered and correspond to their location in the chassis. Hover over a system fan to
display a tooltip for that device.
Item Description
Level The current operating speed range (Low, Medium, High). The
operating speed changes depending on the temperature inside
the chassis.
Temperature status
Data Domain systems and some components are configured to operate within a specific
temperature range, which is defined by a temperature profile that is not configurable. Hover over
the Temperature box to display the temperature tooltip.
Item Description
Description The location within the chassis being measured. The components
listed depend on the model and are often shown as abbreviations.
Some examples are:
l CPU 0 Temp (Central Processing Unit)
l MLB Temp 1 (main logic board)
l BP middle temp (backplane)
l LP temp (low profile of I/O riser FRU)
l FHFL temp (full height full length of I/O riser FRU)
l FP temp (front panel)
Item Description
Life Used The percentage of the rated operating life the SSD has used.
NVRAM status
Hover over NVRAM to display information about the Non-Volatile RAM, batteries, and other
components.
Item Description
Component The items in the component list depend on the NVRAM installed
in the system and can include the following items.
l Firmware version
l Memory size
l Error counts
l Flash controller error counts
l Board temperature
l CPU temperature
l Battery number (The number of batteries depends on the
system type.)
l Current slot number for NVRAM
Value Values are provided for select components and describe the
following.
l Firmware version number
l Memory size value in the displayed units
l Error counts for memory, PCI, and controller
l Flash controller error counts sorted in the following groups:
configuration errors (Cfg Err), panic conditions (Panic), Bus
Hang, bad block warnings (Bad Blk Warn), backup errors
(Bkup Err), and restore errors (Rstr Err)
l Battery information, such as percent charged and status
(enabled or disabled)
Disk
The Disk graph displays the amount of data in the appropriate unit of measurement based on
the data received, such as KiB or MiB per second, going to and from all disks in the system.
Network
The Network graph displays the amount of data in the appropriate unit of measurement based
on the data received, such as KiB or MiB per second, that passes through each Ethernet
connection. One line appears for each Ethernet port.
l In: The total number of units of measurement, such as kilobytes per second, received by
this side from the other side of the DD Replicator pair. For the destination, the value
includes backup data, replication overhead, and network overhead. For the source, the
value includes replication overhead and network overhead.
l Out: The total number of units of measurement, such as kilobytes per second, sent by this
side to the other side of the DD Replicator pair. For the source, the value includes backup
data, replication overhead, and network overhead. For the destination, the value includes
replication and network overhead.
Item Description
Last Login From The system from which the user logged in.
Types of reports
The New Report area lists the types of reports you can generate on your system.
Note: Replication reports can only be created if the system contains a replication license and a
valid replication context is configured.
Item Description
Data Written (GiB) The amount of data written before compression. This is
indicated by a purple shaded area on the report.
Time The timeline for data that was written. The time displayed on
this report changes based upon the Duration selection when
the chart was created.
Total Compression Factor The total compression factor reports the compression ratio.
Item Description
Time The date the data was written. The time displayed on this
report changes based upon the Duration selection when the
chart was created.
Usage Trend The dotted black line shows the storage usage trend. When
the line reaches the red line at the top, the storage is almost
full.
Cleaning The cleaning cycle (start and end time for each
cleaning cycle). Administrators can use this information to
choose the best time for space cleaning and the best throttle
setting.
Item Description
Date (or Time for 24 hour report) The last day of each week, based on the criteria set for the
report. In reports, a 24-hour period ranges from noon to noon.
Data Written (Pre-Comp) The cumulative data written before compression for the
specified time period.
Used (Post-Comp) The cumulative data written after compression for the
specified time period.
Compression Factor The total compression factor. This is indicated by a black line
on the report.
Item Description
Space Used (GiB) The amount of space used. Post-Comp is the red shaded
area; Pre-Comp is the purple shaded area.
Item Description
Start Date The first day of the week for this summary.
End Date The last day of the week for this summary.
Data (Post-Comp) The cumulative data written after compression for the
specified time period.
Replication (Post-Comp) The cumulative data written after compression for the
specified time period.
Item Description
Network Out (MiB) The amount of data sent from the system. Network Out is
indicated by a thick orange line.
If a task is initiated on a remote system, the progress of that task is tracked in the management
station task log, not in the remote system task log.
Procedure
1. Select Health > Jobs.
The Tasks view appears.
2. Select a filter by which to display the Task Log from the Filter By list box. You can select All,
In Progress, Failed, or Completed.
The Tasks view displays the status of all tasks based on the filter you select and refreshes
every 60 seconds.
4. To display detailed information about a task, select the task in the task list.
Item Description
Take Node 1 Offline Allows you to take the active node offline if
necessary.
Item Description
Post Time Indicates the time and date the alert was
posted.
When a Data Domain system is mounted, the usual tools for displaying a file system’s physical use
of space can be used.
The Data Domain system generates warning messages as the file system reaches 90%, 95%, and
100% of capacity. The following information about data compression gives guidelines for disk use
over time.
The amount of disk space used over time by a Data Domain system depends on:
l The size of the initial full backup.
l The number of additional backups (incremental and full) retained over time.
l The rate of growth of the backup dataset.
l The change rate of data.
For data sets with typical rates of change and growth, data compression generally matches the
following guidelines:
l For the first full backup to a Data Domain system, the compression factor is generally 3:1.
l Each incremental backup to the initial full backup has a compression factor generally in the
range of 6:1.
l The next full backup has a compression factor of about 60:1.
Over time, with a schedule of weekly full and daily incremental backups, the aggregate
compression factor for all the data is about 20:1. The compression factor is lower for incremental-
only data or for backups with less duplicate data. Compression is higher when all backups are full
backups.
Types of compression
Data Domain compresses data at two levels: global and local. Global compression compares
received data to data already stored on disks. Duplicate data does not need to be stored again,
while data that is new is locally compressed before being written to disk.
Local Compression
A Data Domain system uses a local compression algorithm developed specifically to maximize
throughput as data is written to disk. The default algorithm (lz) allows shorter backup windows for
backup jobs but uses more space. Two other types of local compression are available, gzfast and
gz. Both provide increased compression over lz, but at the cost of additional CPU load. Local
compression options provide a trade-off between performance and space usage. It is also
possible to turn off local compression. To change compression, see Changing local compression on
page 202.
After you change the compression, all new writes use the new compression type. Existing data is
converted to the new compression type during cleaning. It takes several rounds of cleaning to
recompress all of the data that existed before the compression change.
The initial cleaning after the compression change might take longer than usual. Whenever you
change the compression type, carefully monitor the system for a week or two to verify that it is
working properly.
End-to-end verification
End-to-end checks protect all file system data and metadata. As data comes into the system, a
strong checksum is computed. The data is deduplicated and stored in the file system. After all data
is flushed to disk, it is read back, and re-checksummed. The checksums are compared to verify
that both the data and the file system metadata are stored correctly.
How the file system reclaims storage space with file system cleaning
When your backup application (such as NetBackup or NetWorker) expires data, the data is marked
by the Data Domain system for deletion. However, the data is not deleted immediately; it is
removed during a cleaning operation.
l During the cleaning operation, the file system is available for all normal operations including
backup (write) and restore (read).
l Although cleaning uses a significant amount of system resources, cleaning is self-throttling and
gives up system resources in the presence of user traffic.
l Data Domain recommends running a cleaning operation after the first full backup to a Data
Domain system. The initial local compression on a full backup is generally a factor of 1.5 to 2.5.
An immediate cleaning operation gives additional compression by another factor of 1.15 to 1.2
and reclaims a corresponding amount of disk space.
l When the cleaning operation finishes, a message is sent to the system log giving the
percentage of storage space that was reclaimed.
A default schedule runs the cleaning operation every Tuesday at 6 a.m. (tue 0600). You can
change the schedule or you can run the operation manually (see the section regarding modifying a
cleaning schedule).
Data Domain recommends running the cleaning operation once a week.
Any operation that disables the file system, or shuts down a Data Domain system during a cleaning
operation (such as a system power-off or reboot) aborts the cleaning operation. The cleaning
operation does not immediately restart when the system restarts. You can manually restart the
cleaning operation or wait until the next scheduled cleaning operation.
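The cleaning operation can also be monitored and controlled from the CLI. The following is a
minimal sketch; the status, watch, and stop options of the filesys clean command are assumed
from the standard DD OS command set, so verify the exact syntax in the Data Domain Operating
System Command Reference Guide for your release.
# filesys clean status
# filesys clean watch
# filesys clean stop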
With collection replication, data in a replication context on the source system that has not been
replicated cannot be processed for file system cleaning. If file system cleaning is not able to
complete because the source and destination systems are out of synch, the system reports the
status of the cleaning operation as partial, and only limited system statistics are available for
the cleaning operation. If collection replication is disabled, the amount of data that cannot be
processed for file system cleaning increases because the replication source and destination
systems remain out of synch. The KB article Data Domain: An overview of Data Domain File System
(DDFS) clean/garbage collection (GC) phases, available from the Online Support site at
https://support.emc.com, provides additional information.
With MTree replication, if a file is created and deleted while a snapshot is being replicated, then
the next snapshot will not have any information about this file, and the system will not replicate
any content associated with this file. Directory replication will replicate both the create and delete,
even though they happen close to each other.
With the replication log that directory replication uses, operations like deletions, renaming, and so
on, execute as a single stream. This can reduce the replication throughput. The use of snapshots
by MTree replication avoids this problem.
Supported interfaces
Interfaces supported by the file system.
l NFS
l CIFS
l DD Boost
l DD VTL
slower responses to metadata operations such as listing the files in the directory and opening
or creating a file.
Unavailable
operation more often. Also consider reducing the data retention period or splitting off a portion
of the backup data to another Data Domain system.
l Available (GiB)—The total amount of space available for data storage. This figure can change
because an internal index may expand as the Data Domain system fills with data. The index
expansion takes space from the Avail GiB amount.
l Pre-Compression (GiB)—Data written before compression.
l Total Compression Factor (Reduction %)—Pre-Comp / Post-Comp.
l Cleanable (GiB)—The amount of space that could be reclaimed if a cleaning were run.
For Cloud Tier, the Cloud File Recall field contains a Recall link to initiate a file recall from the
Cloud Tier. A Details link is available if any active recalls are underway. For more information, see
the "Recalling a File from the Cloud Tier" topic.
Separate panels provide the following statistics for the last 24 hours for each tier:
l Pre-Compression (GiB)—Data written before compression.
l Post-Compression (GiB)—Storage used after compression.
l Global Compression Factor—Pre-Compression / (Size after global compression).
l Local Compression Factor—(Size after global compression) / Post-Compression.
l Total Compression Factor (Reduction %)—[(Pre-Comp - Post-Comp) / Pre-Comp] * 100.
Cloud Tier Local Comp The type of compression in use for the cloud tier.
l See the section regarding types of compression for an
overview.
l See the section regarding changing local compression
Marker Type Backup software markers (tape markers, tag headers, or other
names are used) in data streams. See the section regarding
tape marker settings
You can adjust the workload balance of the file system to increase performance based on your
usage.
Table 92 Workload Balance settings
Random workloads (%) Instant access and restores perform better using random
workloads.
Sequential workloads (%) Traditional backups and restores perform better with sequential
workloads.
File Age Threshold When data movement starts, all files that have not been
modified for the specified threshold number of days will be
moved from the active to the retention tier.
Throttle The percentage of available resources the system uses for data
movement. A throttle value of 100% is the default throttle and
means that data movement will not be throttled.
Setting Description
Encryption Progress View encryption status details for the active tier regarding the
application of changes and re-encryption of data. Status can be
one of the following:
l None
l Pending
l Running
l Done
Click View Details to display the Encryption Status Details dialog
that includes the following information for the Active Tier:
l Type (Example: Apply Changes when encryption has already
been initiated, or Re-encryption when encryption is a result of
compromised data, perhaps a previously destroyed key.)
l Status (Example: Pending)
l Details (Example: Requested on December xx/xx/xx and will
take effect after the next system clean.)
Setting Description
Key Management
Key Manager Either the internal Data Domain Embedded Key Manager or the
optional RSA Data Protection Manager (DPM) Key Manager. Click
Configure to switch between key managers (if both are
configured), or to modify Key Manager options.
Server Status Online or offline, or the error messages returned by the RSA Key
Manager Server.
Key Class A specialized type of security class used by the optional RSA Data
Protection Manager (DPM) Key Manager that groups
cryptographic keys with similar characteristics. The Data Domain
system retrieves a key from the RSA server by key class. A key
class can be set up to either return the current key or to generate a
new key each time.
Note: The Data Domain system supports only key classes
configured to return the current key.
FIPS mode Whether or not the imported host certificate is FIPS compliant. The
default mode is enabled.
Encryption Keys Lists keys by ID numbers. Shows when a key was created, how long
it is valid, its type (RSA DPM Key Manager or the Data Domain
internal key), its state (see Working with the RSA DPM Key
Manager, DPM Encryption Key States Supported by Data Domain),
and the amount of the data encrypted with the key. The system
displays the last updated time for key information above the right
column. Selected keys in the list can be:
l Synchronized so the list shows new keys added to the RSA
server (but are not usable until the file system is restarted).
l Deleted.
Setting Description
l Destroyed.
Procedure
1. Select Data Management > File System > Summary > Destroy.
2. In the Destroy File System dialog box, enter the sysadmin password (it is the only accepted
password).
3. Optionally, click the checkbox for Write zeros to disk to completely remove data.
4. Click OK.
Performing cleaning
This section provides information about cleaning and describes how to start, stop, and modify
cleaning schedules.
DD OS attempts to maintain a counter called 'Cleanable GiB' for the active tier. This number is an
estimation of how much physical (postcomp) space could potentially be reclaimed in the active tier
by running clean/garbage collection. This counter is shown using the filesys show space and
df commands.
Active Tier:
Resource          Size GiB   Used GiB    Avail GiB   Use%   Cleanable GiB*
----------------  --------   ---------   ---------   ----   --------------
/data: pre-comp          -   7259347.5           -      -                -
/data: post-comp  304690.8    251252.4     53438.5    82%          51616.1  <=== NOTE
/ddvar                29.5        12.5        15.6    44%                -
----------------  --------   ---------   ---------   ----   --------------
Starting cleaning
To immediately start a cleaning operation.
Procedure
1. Select Data Management > File System > Summary > Settings > Cleaning.
The Cleaning tab of the File System Settings dialog displays the configurable settings for
each tier.
2. For the active tier:
a. In the Throttle % text box, enter a system throttle amount. This is the percentage of
CPU usage dedicated to cleaning. The default is 50 percent.
b. In the Frequency drop-down list, select one of these frequencies: Never, Daily, Weekly,
Biweekly, and Monthly. The default is Weekly.
c. For At, configure a specific time.
d. For On, select a day of the week.
3. For the cloud tier:
a. In the Throttle % text box, enter a system throttle amount. This is the percentage of
CPU usage dedicated to cleaning. The default is 50 percent.
b. In the Frequency drop-down list, select one of these frequencies: Never, After every 'N'
Active Tier cleans.
Note: If a cloud unit is inaccessible when cloud tier cleaning runs, the cloud unit is
skipped in that run. Cleaning on that cloud unit occurs in the next run if the cloud
unit becomes available. The cleaning schedule determines the duration between two
runs. If the cloud unit becomes available and you cannot wait for the next scheduled
run, you can start cleaning manually.
4. Click Save.
Note:
To start the cleaning operation using the CLI, use the filesys clean start
command.
# filesys clean start
Cleaning started. Use 'filesys clean watch' to monitor progress.
Note: If clean is not able to start, contact the contracted support provider for further
assistance. This issue may indicate that the system has encountered a missing
segment error, causing clean to be disabled.
If necessary, set an active tier clean schedule. The following example sets cleaning to
run every Tuesday at 6 AM:
# filesys clean set schedule Tue 0600
Filesystem cleaning is scheduled to run "Tue" at "0600".
On systems that are configured with Extended Retention (ER), clean may be configured
to run after data movement completes and may not have its own separate schedule.
Performing sanitization
To comply with government guidelines, system sanitization, also called data shredding, must be
performed when classified or sensitive data is written to any system that is not approved to store
such data.
When an incident occurs, the system administrator must take immediate action to thoroughly
eradicate the data that was accidentally written. The goal is to effectively restore the storage
device to a state as if the event never occurred. If the leakage involves sensitive data, the
entire storage must be sanitized using the Data Domain Professional Services Secure Data
Erasure practice.
The Data Domain sanitization command exists to enable the administrator to delete files at the
logical level, whether a backup set or individual files. Deleting a file in most file systems consists of
just flagging the file or deleting references to the data on disk, freeing up the physical space to be
consumed at a later time. However, this simple action introduces the problem of leaving behind a
residual representation of underlying data physically on disks. Deduplicated storage environments
are not immune to this problem.
Shredding data in a system implies eliminating the residual representation of that data and thus the
possibility that the file may be accessible after it has been shredded. Data Domain's sanitization
approach is compliant with the 2007 versions of the following specifications:
l US Department of Defense 5220.22-M Clearing and Sanitization Matrix
l National Institute of Standards and Technology (NIST) Special Publication 800-88 Guidelines for
Media Sanitization
ensure that related files on that image are reconciled, catalog records are managed as
required, and so forth.
2. Run the system sanitize start command on the contaminated Data Domain system
to cause all previously used space in it to be overwritten once (see the figure below).
3. Wait for the affected system to be sanitized. Sanitization can be monitored by using the
system sanitize watch command.
If the affected Data Domain system has replication enabled, all the systems containing
replicas need to be processed in a similar manner. Depending on how much data exists in the
system and how it is distributed, the system sanitize command could take some time.
However, during this time, all clean data in the system is available to users.
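For illustration only, the sanitization commands named above are run on the affected system as
shown in the following sketch. Because sanitization overwrites all previously used space, confirm
the command behavior in the Data Domain Operating System Command Reference Guide before
running it; the status option is an assumption and may differ by release.
# system sanitize start
# system sanitize watch
# system sanitize status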
Procedure
1. Select Data Management > File System > Summary > Settings > General.
2. From the Local Compression Type drop-down list, select a compression type.
Option Description
LZ The default algorithm that gives the best throughput. Data Domain
recommends the lz option.
GZFAST A zip-style compression that uses less space for compressed data, but more
CPU cycles (twice as much as lz). Gzfast is the recommended alternative
for sites that want more compression at the cost of lower performance.
GZ A zip-style compression that uses the least amount of space for data
storage (10% to 20% less than lz on average; however, some datasets get
much higher compression). This also uses the most CPU cycles (up to five
times as much as lz). The gz compression type is commonly used for
nearline storage applications in which performance requirements are low.
3. Click Save.
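CLI equivalent
The following is a hedged sketch of changing the local compression type from the CLI; verify the
option name and values against the Data Domain Operating System Command Reference Guide
for your release. As described earlier, existing data is recompressed only during subsequent
cleaning runs.
# filesys option show
# filesys option set local-compression-type gzfast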
2. In the Staging Reserve area, toggle between Disabled and Enabled as appropriate.
3. If Staging Reserve is enabled, enter a value in the % of Total Space box.
This value represents the percentage of the total disk space to be reserved for disk staging,
typically 20 to 30%.
4. Click Save.
3. In the Destination text box, enter the pathname of the directory where the data will be
copied to. For example, /data/col1/backup/dir2. This destination directory must be
empty, or the operation fails.
l If the Destination directory exists, click the checkbox Overwrite existing destination if
it exists.
4. Click OK.
5. In the progress dialog box that appears, click Close to exit.
l MTrees overview.................................................................................................................208
l Monitoring MTree usage...................................................................................................... 215
l Managing MTree operations................................................................................................ 219
MTrees overview
An MTree is a logical partition of the file system.
You can use MTrees in the following ways: for DD Boost storage units, DD VTL pools, or an NFS/
CIFS share. MTrees allow granular management of snapshots, quotas, and DD Retention Lock, and,
for systems that have DD Extended Retention, granular management of data migration policies
from the Active Tier to the Retention Tier. MTree operations can be performed on a specific MTree
as opposed to the entire file system.
Note:
Up to the maximum number of configurable MTrees can be designated for MTree replication
contexts.
Do not place user files in the top-level directory of an MTree.
MTree limits
MTree limits for Data Domain systems:
l All other DD systems (DD OS 5.7 and later): 100 configurable MTrees; up to 32 concurrently
active, based on the model.
Quotas
MTree quotas apply only to the logical data written to the MTree.
An administrator can set the storage space restriction for an MTree, Storage Unit, or DD VTL pool
to prevent it from consuming excess space. There are two kinds of quota limits: hard limits and
soft limits. You can set either a soft or hard limit or both a soft and hard limit. Both values must be
integers, and the soft value must be less than the hard value.
When a soft limit is set, an alert is sent when the MTree size exceeds the limit, but data can still be
written to it. When a hard limit is set, data cannot be written to the MTree when the hard limit is
reached. Therefore, all write operations fail until data is deleted from the MTree.
See Configure MTree quotas on page 220 for more information.
Quota enforcement
Enable or disable quota enforcement.
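Quota limits and enforcement can also be managed from the CLI. The following is a minimal
sketch that assumes the quota capacity command set documented in the Command Reference
Guide; the MTree path and limit values are examples only.
# quota capacity enable
# quota capacity set mtrees /data/col1/backup soft-limit 80 GiB hard-limit 100 GiB
# quota capacity show mtrees /data/col1/backup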
Item Description
Last 24 Hr Pre-Comp (pre-compression) Amount of raw data from the backup application that has
been written in the last 24 hours.
Last 24 Hr Post-Comp (post-compression) Amount of storage used after compression in the last
24 hours.
Last 24 hr Comp Ratio The compression ratio for the last 24 hours.
Weekly Avg Post-Comp Average amount of compressed storage used in the last five
weeks.
Last Week Post-Comp Average amount of compressed storage used in the last seven
days.
Weekly Avg Comp Ratio The average compression ratio for the last five weeks.
Last Week Comp Ratio The average compression ratio for the last seven days.
Item Description
Pre-Comp Used The current amount of raw data from the backup application
that has been written to the MTree.
Quota
Pre-Comp Soft Limit Current value. Click Configure to revise the quota limits.
Pre-Comp Hard Limit Current value. Click Configure to revise the quota limits.
Protocols
DD Boost Storage Unit The DD Boost export status. Status can be:
l Yes—The MTree is exported.
l No—This MTree is not exported.
Item Description
l Unknown—There is no information.
Click the DD Boost link to go to the DD Boost view.
DD VTL Pool VTL pool report status. Status can be:
l Yes— The MTree is a DD VTL MTree pool.
l No— The MTree is not a DD VTL MTree pool.
l Unknown— There is no information.
Physical Capacity
Measurements
Used (Post-Comp) MTree space that is used after compressed data has been
ingested.
Last Measurement Time Last time the system measured the MTree.
Submitted Measurements Displays the post compression status for the MTree.
Item Description
l Total Snapshots
l Expired
l Unexpired
l Oldest Snapshot
l Newest Snapshot
l Next Scheduled
l Assigned Snapshot Schedules
Click Total Snapshots to go to the Data Management >
Snapshots view.
Click Assign Schedules to configure snapshot schedules.
Item Description
Status The status of the MTree replication pair. Status can be Normal,
Error, or Warning.
Sync As Of The last day and time the replication pair was synchronized.
Item Description
Total Snapshots The total number of snapshots created for this MTree. A total
of 750 snapshots can be created for each MTree.
Expired The number of snapshots in this MTree that have been marked
for deletion, but have not been removed with the clean
operation as yet.
Unexpired The number of snapshots in this MTree that are marked for
keeping.
Oldest Snapshot The date of the oldest snapshot for this MTree.
Newest Snapshot The date of the newest snapshot for this MTree.
Next Scheduled The date of the next scheduled snapshot.
Assigned Snapshot Schedules The name of the snapshot schedule assigned to this MTree.
Item Description
Retention period min Indicates the minimum DD Retention Lock time period.
Retention period max Indicates the maximum DD Retention Lock time period.
a. Type a number for the interval in the text box (for example, 5 or 14).
b. From the drop-down list, select an interval (minutes, hours, days, years).
Note: Specifying a minimum retention period of less than 12 hours, or a
maximum retention period longer than 70 years, results in an error.
5. Select how often the schedule triggers a measurement occurrence: every Day, Week, or
Month.
l For Day, select the time.
l For Week, select the time and day of the week.
l For Month, select the time, and days during the month.
6. Select MTree assignments for the schedule (the MTrees that the schedule will apply to):
7. Click Create.
8. Optionally, click on the heading names to sort by schedule: Name, Status (Enabled or
Disabled) Priority (Urgent or Normal), Schedule (schedule timing), and MTree
Assignments (the number of MTrees the schedule is assigned to).
4. Optionally, click the heading names to sort by schedule: Name, Status (Enabled or
Disabled) Priority (Urgent or Normal), Schedule (schedule timing), and MTree
Assignments (the number of MTrees the schedule is assigned to).
Procedure
1. Select Data Management > MTree > Summary.
2. Select MTrees to assign schedules to.
3. Scroll down to the Physical Capacity Measurements area and click Assign to the right of
Schedules.
4. Select schedules to assign to the MTree and click Assign.
4. Click Save.
Creating an MTree
An MTree is a logical partition of the file system. Use MTrees for DD Boost storage units, DD
VTL pools, or an NFS/CIFS share.
About this task
MTrees are created in the area /data/col1/mtree_name.
Procedure
1. Select Data Management > MTree.
2. In the MTree overview area, click Create.
3. Enter the name of the MTree in the MTree Name text box. MTree names can be up to 50
characters. The following characters are acceptable:
l Upper- and lower-case alphabetical characters: A-Z, a-z
l Numbers: 0-9
l Embedded space
l comma (,)
l period (.), as long as it is not the first character of the name
l exclamation mark (!)
l number sign (#)
l dollar sign ($)
l per cent sign (%)
l plus sign (+)
l at sign (@)
l equal sign (=)
l ampersand (&)
l semi-colon (;)
l parentheses: ( and )
l square brackets: [ and ]
l curly brackets: { and }
l caret (^)
l tilde (~)
l apostrophe (unslanted single quotation mark)
l single slanted quotation mark (‘)
4. Set storage space restrictions for the MTree to prevent it from consuming excessive space.
Enter a soft or hard limit quota setting, or both. With a soft limit, an alert is sent when the
MTree size exceeds the limit, but data can still be written to the MTree. Data cannot be
written to the MTree when the hard limit is reached.
Note: When setting both soft and hard limits, a quota’s soft limit cannot exceed the
quota’s hard limit.
5. Click OK.
The new MTree displays in the MTree table.
Note: You may need to expand the width of the MTree Name column to see the entire
pathname.
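CLI equivalent
The following is a minimal sketch of creating an MTree with quotas from the CLI; the MTree name
and quota values are examples, so verify the syntax in the Command Reference Guide.
# mtree create /data/col1/SantaClara quota-soft-limit 80 GiB quota-hard-limit 100 GiB
# mtree list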
3. In the MTree tab, click the Summary tab, and then click the Configure button in the Quota
area.
4. In the Quota tab, click the Configure Quota button.
2. Click OK.
Deleting an MTree
Removes the MTree from the MTree table. The MTree data is deleted at the next cleaning.
About this task
Note: Because the MTree and its associated data are not removed until file cleaning is run, you
cannot create a new MTree with the same name as a deleted MTree until the deleted MTree is
completely removed from the file system by the cleaning operation.
Procedure
1. Select Data Management > MTree.
2. Select an MTree.
3. In the MTree overview area, click Delete.
4. Click OK at the Warning dialog box.
5. Click Close in the Delete MTree Status dialog box after viewing the progress.
Undeleting an MTree
Undelete retrieves a deleted MTree and its data and places it back in the MTree table.
About this task
An undelete of an MTree retrieves a deleted MTree and its data and places it back in the MTree
table.
An undelete is possible only if file cleaning has not been run after the MTree was marked for
deletion.
Note: You can also use this procedure to undelete a storage unit.
Procedure
1. Select Data Management > MTree > More Tasks > Undelete.
2. Select the checkboxes of the MTrees you wish to bring back and click OK.
3. Click Close in the Undelete MTree Status dialog box after viewing the progress.
The recovered MTree displays in the MTree table.
Renaming an MTree
Use the Data Management MTree GUI to rename MTrees.
Procedure
1. Select Data Management > MTree.
2. Select an MTree in the MTree table.
3. Select the Summary tab.
4. In the Detailed Information overview area, click Rename.
5. Enter the name of the MTree in the New MTree Name text box.
See the section about creating an MTree for a list of allowed characters.
6. Click OK.
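CLI equivalent
The delete, undelete, and rename operations described above have CLI counterparts. The
following sketch uses an example MTree name; confirm the syntax in the Command Reference
Guide.
# mtree delete /data/col1/SantaClara
# mtree undelete /data/col1/SantaClara
# mtree rename /data/col1/SantaClara /data/col1/SantaClara2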
Snapshots overview
This chapter describes how to use the snapshot feature with MTrees.
A snapshot saves a read-only copy (called a snapshot) of a designated MTree at a specific time.
You can use a snapshot as a restore point, and you can manage MTree snapshots and schedules
and display information about the status of existing snapshots.
Note: Snapshots created on the source Data Domain system are replicated to the destination
with collection and MTree replication. It is not possible to create snapshots on a Data Domain
system that is a replica for collection replication. It is also not possible to create a snapshot on
the destination MTree of MTree replication. Directory replication does not replicate the
snapshots, and it requires you to create snapshots separately on the destination system.
Snapshots for the MTree named backup are created in the system directory /data/col1/
backup/.snapshot. Each directory under /data/col1/backup also has a .snapshot
directory with the name of each snapshot that includes the directory. Each MTree has the same
type of structure, so an MTree named SantaClara would have a system directory /data/col1/
SantaClara/.snapshot, and each subdirectory in /data/col1/SantaClara would have
a .snapshot directory as well.
Note: The .snapshot directory is not visible if only /data is mounted. When the MTree itself
is mounted, the .snapshot directory is visible.
An expired snapshot remains available until the next file system cleaning operation.
The maximum number of snapshots allowed per MTree is 750. Warnings are sent when the number
of snapshots per MTree reaches 90% of the maximum allowed number (from 675 to 749
snapshots), and an alert is generated when the maximum number is reached. To clear the warning,
expire snapshots and then run the file system cleaning operation.
Note: To identify an MTree that is nearing the maximum number of snapshots, check the
Snapshots panel of the MTree page regarding viewing MTree snapshot information.
Snapshot retention for an MTree does not take any extra space, but if a snapshot exists and the
original file is no longer there, the space cannot be reclaimed.
Note: Snapshots and CIFS Protocol: As of DD OS 5.0, the .snapshot directory is no longer
visible in the directory listing in Windows Explorer or DOS CMD shell. You can access
the .snapshot directory by entering its name in the Windows Explorer address bar or the
DOS CMD shell. For example, \\dd\backup\.snapshot, or Z:\.snapshot when Z: is
mapped as \\dd\backup.
Field Description
Total Snapshots (Across all MTrees) The total number of snapshots, active and expired, on all
MTrees in the system.
Expired The number of snapshots that have been marked for deletion, but
have not been removed with the cleaning operation as yet.
Unexpired The number of snapshots that are marked for keeping.
Next file system clean scheduled The date the next scheduled file system cleaning operation will
be performed.
Snapshots view
View snapshot information by name, by MTree, creation time, whether it is active, and when it
expires.
The Snapshots tab displays a list of snapshots and lists the following information.
Field Description
Selected MTree A drop-down list that selects the MTree the snapshot operates on.
Filter By Items to search for in the list of snapshots that display. Options
are:
l Name—Name of the snapshot (wildcards are accepted).
l Year—Drop-down list to select the year.
Status The status of the snapshot, which can be Expired or blank if the
snapshot is active.
Schedules view
View the days snapshots will be taken, the times, the time they will be retained, and the naming
convention.
1. Select a schedule in the Schedules tab. The Detailed Information area appears listing the
MTrees that share the same schedule with the selected MTree.
2. Click the Add/Remove button to add or remove MTrees from the schedule list.
Managing snapshots
This section describes how to manage snapshots.
Creating a snapshot
Create a snapshot when an unscheduled snapshot is required.
About this task
Procedure
1. Select Data Management > Snapshots to open the Snapshots view.
2. In the Snapshots view, click Create.
3. In the Name text field, enter the name of the snapshot.
4. In the MTree(s) area, select a checkbox of one or more MTrees in the Available MTrees
panel and click Add.
5. In the Expiration area, select one of these expiration options:
a. Never Expire.
b. Enter a number for the In text field, and select Days, Weeks, Month, or Years from the
drop-down list. The snapshot will be retained until the same time of day as when it is
created.
c. Enter a date (using the format mm/dd/yyyy) in the On text field, or click Calendar and
click a date. The snapshot will be retained until midnight (00:00, the first minute of the
day) of the given date.
6. Click OK and Close.
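CLI equivalent
The following is a hedged sketch of creating a snapshot from the CLI; the snapshot and MTree
names are examples, and the optional retention argument is omitted because its format varies, so
verify the syntax in the Command Reference Guide.
# snapshot create snap1 mtree /data/col1/SantaClara
# snapshot list mtree /data/col1/SantaClara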
Note: More than one snapshot can be selected by clicking additional checkboxes.
3. In the Expiration area, select one of the following for the expiration date:
a. Never Expire.
b. In the In text field, enter a number and select Days, Weeks, Month, or Years from the
drop-down list. The snapshot will be retained until the same time of day as when it is
created.
c. In the On text field, enter a date (using the format mm/dd/yyyy) or click Calendar and
click a date. The snapshot will be retained until midnight (00:00, the first minute of the
day) of the given date.
4. Click OK.
Renaming a snapshot
Use the Snapshot tab to rename a snapshot.
Procedure
1. Select Data Management > Snapshots to open the Snapshots view.
2. Select the checkbox of the snapshot entry in the list and click Rename.
3. In the Name text field, enter a new name.
4. Click OK.
Expiring a snapshot
Snapshots cannot be deleted. To release disk space, expire snapshots and they will be deleted in
the next cleaning cycle after the expiry date.
Procedure
1. Select Data Management > Snapshots to open the Snapshots view.
2. Click the checkbox next to snapshot entry in the list and click Expire.
Note: More than one snapshot can be selected by selecting additional checkboxes.
The snapshot is marked as Expired in the Status column and will be deleted at the next
cleaning operation.
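CLI equivalent
Renaming and expiring snapshots can also be done from the CLI. The following is a sketch with
example names; confirm the syntax in the Command Reference Guide.
# snapshot rename snap1 snap1-old mtree /data/col1/SantaClara
# snapshot expire snap1-old mtree /data/col1/SantaClara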
10. Review the parameters in the schedule summary and click Finish to complete the schedule
or Back to change any entries.
11. If an MTree is not associated with the schedule, a warning dialog box asks if you would like
to add an MTree to the schedule. Click OK to continue (or Cancel to exit).
12. To assign an MTree to the schedule, in the MTree area, click the checkbox of one or more
MTrees in the Available MTrees panel, then click Add and OK.
CIFS overview
Common Internet File System (CIFS) clients can have access to the system directories on the
Data Domain system.
l The /data/col1/backup directory is the destination directory for compressed backup
server data.
l The /ddvar/core directory contains Data Domain System core and log files (remove old logs
and core files to free space in this area).
Note: You can also delete core files from the /ddvar or the /ddvar/ext directory if it
exists.
Clients, such as backup servers that perform backup and restore operations with a Data Domain
System, at the least, need access to the /data/col1/backup directory. Clients that have
administrative access need to be able to access the /ddvar/core directory to retrieve core and
log files.
As part of the initial Data Domain system configuration, CIFS clients were configured to access
these directories. This chapter describes how to modify these settings and how to manage data
access using DD System Manager and the cifs command.
Note:
l The DD System Manager Protocols > CIFS page allows you to perform major CIFS
operations such as enabling and disabling CIFS, setting authentication, managing shares,
and viewing configuration and share information.
l The cifs command contains all the options to manage CIFS backup and restores between
Windows clients and Data Domain systems, and to display CIFS statistics and status. For
complete information about the cifs command, see the Data Domain Operating System
Command Reference Guide.
l For information about the initial system configuration, see the Data Domain Operating
System Initial Configuration Guide.
l For information about setting up clients to use the Data Domain system as a server, see
the related tuning guide, such as the CIFS Tuning Guide, which is available from the
support.emc.com web site. Search for the complete name of the document using the
Search field.
Note: A log level of 5 degrades system performance. Click Default in the Log Level
area after debugging an issue. This sets the level back to 1.
Item Description
Directory Path The path to the target directory (for example, /data/col1/
backup/dir1).
Note: col1 uses the lower case letter L followed by the
number 1.
7. Add a client by clicking Add (+) in the Clients area. The Client dialog box is displayed. Enter
the name of the client in the Client text box and click OK.
Consider the following when entering the client name.
l No blank or tab (white space) characters are allowed.
l It is not recommended to use both an asterisk (*) and individual client name or IP
address for a given share. When an asterisk (*) is present, any other client entries for
that share are not used.
l It is not required to use both client name and client IP address for the same client on a
given share. Use client names when the client names are defined in the DNS table.
l To make the share available to all clients, specify an asterisk (*) as the client. All users in the
client list can access the share, unless one or more user names are specified, in which
case only the listed names can access the share.
Repeat this step for each client that you need to configure.
8. In the Max Connections area, select the text box and enter the maximum number of
connections to the share that are allowed at one time. The default value of zero (also
settable through the Unlimited button) enforces no limit on the number of connections.
9. Click OK.
The newly created share is displayed at the end of the list of shares, which are located in the
center of the Shares panel.
CLI equivalent
Procedure
1. Run the cifs status command to verify that CIFS is enabled.
2. Run the filesys status command to verify that file system is enabled.
3. Run the hostname command to determine the system hostname.
4. Create the CIFS share.
cifs share create <share> path <path> {max-connections <max
connections> | clients <clients> | users <users> | comment
<comment>}
# cifs share create backup path /backup
8. From the Windows system, select Start > Run, and type the hostname and directory of the
CIFS share.
\\<DDhostname>.<DDdomain.com>\<sharename>
9. If there are problems connecting to the CIFS share, run the cifs share show command
to verify the status of the share.
The warning WARNING: The share path does not exist! is displayed if the share
does not exist or was misspelled on creation.
# cifs share show
--------------- share backup ---------------
enabled: yes
path: /backup
10. If the CIFS share is still not accessible, verify that all client information is in the access list,
and all network connections are functional.
Note: To make the share available to all clients, specify an asterisk (*) as the client.
All users in the client list can access the share, unless one or more user names are
specified, in which case only the listed names can access the share.
d. Click OK.
5. In the Max Connections area, in the text box, change the maximum number of connections
to the share that are allowed at one time. Or select Unlimited to enforce no limit on the
number of connections.
6. Click OK.
Procedure
1. In the CIFS Shares tab, click the checkbox for the share you wish to use as the source.
2. Click Create From.
3. Modify the share information, as described in the section about modifying a share on a Data
Domain system.
3. Click OK.
The shares are removed.
4. Enter the path for the Folder to share, for example, enter C:\data\col1\backup
\newshare.
5. Enter the Share name, for example, enter newshare. Click Next.
6. For the Shared Folder Permissions, select Administrators have full access; other users
have read-only access. Click Next.
Figure 8 Completing the Create a Shared Folder Wizard
7. The Completing dialog shows that you have successfully shared the folder with all Microsoft
Windows clients in the network. Click Finish.
The newly created shared folder is listed in the Computer Management dialog box.
# net use H: \\dd02\backup /USER:dd02\backup22
This command maps the backup share from Data Domain system dd02 to drive H on
the Windows system and gives the user named backup22 access to the \\DD_sys
\backup directory.
File access
This section contains information about ACLs, setting DACL and SACL permissions using
Windows Explorer, and so on.
Note: CREATOR OWNER is replaced by the user creating the file/folder for normal users and
by Administrators for administrative users.
Permissions for a New Object when the Parent Directory Has No ACL
The permissions are as follows:
l BUILTIN\Administrators:(OI)(CI)F
l NT AUTHORITY\SYSTEM:(OI)(CI)F
l CREATOR OWNER:(OI)(CI)(IO)F
l BUILTIN\Users:(OI)(CI)R
l BUILTIN\Users:(CI)(special access:)FILE_APPEND_DATA
l BUILTIN\Users:(CI)(IO)(special access:)FILE_WRITE_DATA
l Everyone:(OI)(CI)R
These permissions are described in more detail as follows:
Item Description
Max Open Files Maximum number of open files on a Data Domain system
Authentication configuration
The information in the Authentication panel changes, depending on the type of authentication that
is configured.
Click the Configure link to the left of the Authentication label in the Configuration tab. The
system will navigate to the Administration > Access > Authentication page where you can
configure authentication for Active Directory, Kerberos, Workgroups, and NIS.
Active directory configuration
Item Description
CIFS Server Name The name of the configured CIFS server displays.
WINS Server Name The name of the configured WINS server displays.
Workgroup configuration
Item Description
CIFS Server Name The name of the configured CIFS server displays.
WINS Server Name The name of the configured WINS server displays.
Item Description
Directory Path The directory path to the share (for example, /data/col1/
backup/dir1).
Note: col1 uses the lower case letter L followed by the
number 1.
l To list information about a specific share, enter the share name in the Filter by Share Name
text box and click Update.
l Click Update to return to the default list.
l To page through the list of shares, click the < and > arrows at the bottom right of the view to
page forward or backward. To skip to the beginning of the list, click |< and to skip to the end,
click >|.
l Click the Items per Page drop-down arrow to change the number of share entries listed on a
page. Choices are 15, 30, or 45 entries.
Item Description
Directory Path The directory path to the share (for example, /data/col1/
backup/dir1).
Note: col1 uses the lower case letter L followed by the
number 1.
Directory Path Status Indicates whether the configured directory path exists on the
DDR. Possible values are Path Exists or Path Does Not Exist,
the latter indicating an incorrect or incomplete CIFS
configuration.
l The Clients area lists the clients that are configured to access the share, along with a client
tally beneath the list.
l The User/Groups area lists the names and type of users or groups that are configured to
access the share, along with a user or group tally beneath the list.
l The Options area lists the name and value of configured options.
Results
::ffff:10.25.132.84 ddve-25179109\sysadmin 1 92 0
ddve-25179109\sysadmin 1 0 C:\data\col1\backup
96 GB 600 30,000
NFS overview
Network File System (NFS) clients can have access to the system directories or MTrees on the
Data Domain system.
l The /backup directory is the default destination for non-MTree compressed backup server
data.
l The /data/col1/backup path is the root destination when using MTrees for compressed
backup server data.
l The /ddvar/core directory contains Data Domain System core and log files (remove old logs
and core files to free space in this area).
Note: On Data Domain systems, the /ddvar/core directory is on a separate partition. If you
mount /ddvar only, you will not be able to navigate to /ddvar/core from the /ddvar
mountpoint.
Clients, such as backup servers that perform backup and restore operations with a Data Domain
System, need access to the /backup or /data/col1/backup areas. Clients that have
administrative access need to be able to access the /ddvar/core directory to retrieve core and
log files.
As part of the initial Data Domain system configuration, NFS clients were configured to access
these areas. This chapter describes how to modify these settings and how to manage data access.
Note:
l For information about the initial system configuration, see the Data Domain Operating
System Initial Configuration Guide.
l The nfs command manages backups and restores between NFS clients and Data Domain
systems, and it displays NFS statistics and status. For complete information about the nfs
command, see the Data Domain Operating System Command Reference Guide.
l For information about setting up third-party clients to use the Data Domain system as a
server, see the related tuning guide, such as the Solaris System Tuning, which is available
from the Data Domain support web site. From the Documentation > Integration
Documentation page, select the vendor from the list and click OK. Select the tuning guide
from the list.
2. Click Enable.
2. Click Disable.
Creating an export
You can use Data Domain System Manager’s Create button on the NFS view or use the
Configuration Wizard to specify the NFS clients that can access the /backup, /data/col1/
backup, /ddvar, or /ddvar/core areas, or the /ddvar/ext area if it exists.
About this task
A Data Domain system supports a maximum of 2048 exports, with the number of connections
scaling in accordance with system memory.
Note: You have to assign client access to each export separately and remove access from each
export separately. For example, a client can be removed from /ddvar and still have access
to /data/col1/backup.
CAUTION If Replication is to be implemented, a single destination Data Domain system can
receive backups from both CIFS clients and NFS clients as long as separate directories or
MTrees are used for each. Do not mix CIFS and NFS data in the same area.
Procedure
1. Select Protocols > NFS.
The NFS view opens displaying the Exports tab.
2. Click Create.
3. Enter the pathname in the Directory Path text box (for example, /data/col1/backup/
dir1).
4. In the Clients area, select an existing client or click the + icon to create a client.
The Client dialog box is displayed.
Anonymous UID/GID:
l Map requests from UID (user identifier) or GID (group identifier) 0 to the anonymous
UID/GID (root_squash).
l Map all user requests to the anonymous UID/GID (all_squash).
l Use Default Anonymous UID/GID.
c. Click OK.
5. Click OK to create the export.
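CLI equivalent
The following is a hedged sketch of creating an export from the CLI using the nfs export
command set; the export name, path, client, and options list are examples, and the options
format may differ by release, so verify the syntax in the Command Reference Guide.
# nfs export create backup_clients path /data/col1/backup clients srv1.example.com options rw,no_root_squash
# nfs status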
Modifying an export
Change the directory path, domain name, and other options using the GUI.
Procedure
1. Select Protocols > NFS.
The NFS view opens displaying the Exports tab.
Anonymous UID/GID:
l Map requests from UID (user identifier) or GID (group identifier) 0 to the anonymous
UID/GID (root_squash).
l Map all user requests to the anonymous UID/GID (all_squash).
l Use Default Anonymous UID/GID.
c. Click OK.
6. Click OK to modify the export.
Deleting an export
Delete an export from the NFS Exports tab.
Procedure
1. In the NFS Exports tab, click the checkbox of the export you wish to delete.
2. Click Delete.
3. Click OK and Close to delete the export.
2. Click an export in the table to populate the Detailed Information area, below the Exports
table.
In addition to the export’s directory path, configured options, and status, the system
displays a list of clients.
Use the Filter By text box to sort by mount path.
Click Update for the system to refresh the table and use the filters supplied.
Click Reset for the system to clear the Path and Client filters.
2. Configure NFS principal (node) for the DDR on the Key Distribution Center (KDC).
Example:
addprinc nfs/hostname@realm
Note: Hostname is the name for the DDR.
3. Verify that there are nfs entries added as principals on the KDC.
Example:
listprincs
nfs/hostname@realm
6. Copy the keytab file from the location where the keys for NFS DDR are generated to the
DDR in the /ddvar/ directory.
7. Set the realm on the DDR, using the following DDR command:
authentication kerberos set realm <home realm> kdc-type {unix | windows}
kdcs <IP address of server>
8. When the kdc-type is UNIX, import the keytab file from /ddvar/ to /ddr/etc/, where the
Kerberos configuration file expects it. Use the following DDR command to copy the file:
authentication kerberos keytab import
NOTICE This step is required only when the kdc-type is UNIX.
11. For each NFS client, import all its principals into a keytab file on the client.
Example:
ktadd -k <keytab_file> host/hostname@realm
ktadd -k <keytab_file> nfs/hostname@realm
This command joins the system to the krb5.test realm and enables Kerberos authentication
for NFS clients.
Note: A keytab generated on this KDC must exist on the DDR to authenticate using
Kerberos.
2. Verify the Kerberos authentication configuration.
authentication kerberos show config
Home Realm: krb5.test
KDC List: nfskrb-kdc.krb5.test
KDC Type: unix
l Introduction to NFSv4.........................................................................................................262
l ID Mapping Overview.......................................................................................................... 263
l External formats..................................................................................................................263
l Internal Identifier Formats................................................................................................... 264
l When ID mapping occurs.....................................................................................................264
l NFSv4 and CIFS/SMB Interoperability................................................................................266
l NFS Referrals......................................................................................................................267
l NFSv4 and High Availability.................................................................................................268
l NFSv4 Global Namespaces..................................................................................................268
l NFSv4 Configuration...........................................................................................................269
l Kerberos and NFSv4............................................................................................................ 271
l Enabling Active Directory.................................................................................................... 273
Introduction to NFSv4
Because NFS clients are increasingly using NFSv4.x as the default NFS protocol level, Data
Domain systems can now employ NFSv4 instead of requiring the client to work in a backwards-
compatibility mode.
In Data Domain systems, clients can work in mixed environments in which NFSv4 and NFSv3 must
be able to access the same NFS exports.
The Data Domain NFS server can be configured to support NFSv4 and NFSv3, depending on site
requirements. You can make each NFS export available to only NFSv4 clients, only NFSv3 clients,
or both.
Several factors might affect whether you choose NFSv4 or NFSv3:
l NFS client support
Some NFS clients may support only NFSv3 or NFSv4, or may operate better with one version.
l Operational requirements
An enterprise might be strictly standardized to use either NFSv4 or NFSv3.
l Security
If you require greater security, NFSv4 provides a greater security level than NFSv3, including
ACL and extended owner and group configuration.
l Feature requirements
If you need byte-range locking or UTF-8 files, you should choose NFSv4.
l NFSv3 submounts
If your existing configuration uses NFSv3 submounts, NFSv3 might be the appropriate choice.
NFSv4 ports
You can enable or disable NFSv4 and NFSv3 independently. In addition, you can move NFS
versions to different ports; both versions do not need to occupy the same port.
With NFSv4, you do not need to restart the Data Domain file system if you change ports. Only an
NFS restart is required in such instances.
Like NFSv3, NFSv4 runs on Port 2049 as the default if it is enabled.
NFSv4 does not use portmapper (Port 111) or mountd (Port 2052).
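For example, the following hedged sketch uses the nfs export modify command (shown later in this chapter) to restrict an export to NFSv4 clients only; the export name backup is a hypothetical value for illustration:
# nfs export modify backup clients all options version=4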
ID Mapping Overview
NFSv4 identifies owners and groups by a common external format, such as [email protected].
These common formats are known as identifiers, or IDs.
Identifiers are stored within an NFS server and use internal representations such as ID 12345 or ID
S-123-33-667-2. The conversion between internal and external identifiers is known as ID mapping.
Identifiers are associated with the following:
l Owners of files and directories
l Owner groups of files and directories
l Entries in Access Control Lists (ACLs)
Data Domain systems use a common internal format for NFS and CIFS/SMB protocols, which
allows files and directories to be shared between NFS and CIFS/SMB. Each protocol converts the
internal format to its own external format with its own ID mapping.
External formats
The external format for NFSv4 identifiers follows NFSv4 standards (for example, RFC-7530 for
NFSv4.0). In addition, supplemental formats are supported for interoperability.
See the client-specific documentation for setting the client NFS domain. Depending on the
operating system, you might need to update a configuration file (for example, /etc/idmapd.conf)
or use a client administrative tool.
Note: If you do not set the nfs4-domain value, it defaults to the DNS domain name of the Data Domain
system.
Note: The filesystem must be restarted after changing the DNS domain for the nfs4-domain to
automatically update.
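For example, a minimal sketch using the nfs option command described later in this chapter; the domain value example.com is hypothetical, matching the [email protected] identifiers used in this chapter:
# nfs option set nfs4-domain example.com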
Alternative formats
To allow interoperability, NFSv4 servers on Data Domain systems support some alternative
identifier formats for input and output.
l Numeric identifiers; for example, “12345”.
l Windows-compatible security identifiers (SIDs), expressed as “S-NNN-NNN-…”
See the sections on input mapping and output mapping for more information about restrictions to
these formats.
l Credential mapping
The RPC client credentials are mapped to an internal identity for access control and other
operations. See Credential mapping on page 265.
Input mapping
Input mapping occurs when an NFSv4 client sends an identifier to the Data Domain NFSv4 server
(for example, when setting up the owner or owner-group of a file). Input mapping is distinct from
credential mapping. For more information on credential mapping, see Credential mapping on page 265.
Standard format identifiers such as [email protected] are converted into an internal UID/GID
based on the configured conversion rules. If NFSv4 ACLs are enabled, a SID will also be generated,
based on the configured conversion rules.
Numeric identifiers (for example, “12345”) are directly converted into corresponding UID/GIDs if
the client is not using Kerberos authentication. If Kerberos is being used, an error will be generated
as recommended by the NFSv4 standard. If NFSv4 ACLs are enabled, a SID will be generated
based on the conversion rules.
Windows SIDs (for example, “S-NNN-NNN-…”) are validated and directly converted into the
corresponding SIDs. A UID/GID will be generated based on the conversion rules.
Output mapping
Output mapping occurs when the NFSv4 server sends an identifier to the NFSv4 client; for
example, if the server returns the owner or owner-group of a file.
1. If configured, the output might be the numeric ID.
This can be useful for NFSv4 clients that are not configured for ID mapping (for example, some
Linux clients).
2. Mapping is attempted using the configured mapping services (for example, NIS or Active
Directory).
3. If mapping fails and the configuration allows it, the output is a numeric ID or SID string.
4. Otherwise, nobody is returned.
The nfs option nfs4-idmap-out-numeric configures the mapping on output:
l If nfs option nfs4-idmap-out-numeric is set to map-first, mapping will be attempted. On
error, a numeric string is output if allowed. This is the default.
l If nfs option nfs4-idmap-out-numeric is set to always, output will always be a numeric
string if allowed.
l If nfs option nfs4-idmap-out-numeric is set to never, mapping will be attempted. On
error, nobody@nfs4-domain is the output.
If the RPC connection uses GSS/Kerberos, a numeric string is never allowed and
nobody@nfs4-domain is the output.
The following example configures the Data Domain NFS server to always output a numeric string
if allowed. For Kerberos connections, the name nobody is returned:
nfs option set nfs4-idmap-out-numeric always
Credential mapping
The NFSv4 server provides credentials for the NFSv4 client.
These credentials perform the following functions:
l Determine the access policy for the operation; for example, the ability to read a file.
l Determine the default owner and owner-group for new files and directories.
Credentials sent from the client may be [email protected], or system credentials such as
UID=1000, GID=2000. System credentials specify a UID/GID along with auxiliary group IDs.
If NFSv4 ACLs are disabled, then the UID/GID and auxiliary group IDs are used for the credentials.
If NFSv4 ACLs are enabled, then the configured mapping services are used to build an extended
security descriptor for the credentials:
l SIDs for the owner, owner-group, and auxiliary groups are mapped and added to the Security
Descriptor (SD).
l Credential privileges, if any, are added to the SD.
For example, a user with UID 1234 would have an owner SID of S-1-22-1-1234.
NFS Referrals
The referral feature allows an NFSv4 client to access an export (or filesystem) in one or multiple
locations. Locations can be on the same NFS server or on different NFS servers, and use either
the same or different path to reach the export.
Because referrals are an NFSv4 feature, they apply only to NFSv4 mounts.
Referrals can be made to any server that uses NFSv4 or later, including the following:
l A Data Domain system running NFS with NFSv4 enabled
l Other servers that support NFSv4 including Linux servers, NAS appliances, and VNX systems.
A referral can use an NFS export point with or without a current underlying path in the Data
Domain filesystem.
NFS exports with referrals can be mounted through NFSv3, but NFSv3 clients will not be
redirected since referrals are an NFSv4 feature. This characteristic is useful in scale-out systems to
allow exports to be redirected at a file-management level.
Referral Locations
NFSv4 referrals always have one or more locations.
These locations consist of the following:
l A path on a remote NFS server to the referred filesystem.
l One or more server network addresses that allow the client to reach the remote NFS server.
Typically when multiple server addresses are associated with the same location, those addresses
are found on the same NFS server.
Note: You can include spaces as long as those spaces are embedded within the name. If you
use embedded spaces, you must enclose the entire name in double quotes.
Names that begin with "." are reserved for automatic creation by the Data Domain system. You
can delete these names but you cannot create or modify them using the command line interface
(CLI) or system management services (SMS).
If NFSv3 has a main export and a submount export, these exports might use the same
NFSv3 clients yet have different levels of access:
NFSv4 operates in the same manner in regard to highest-level export paths. For
NFSv4, client1.example.com navigates the NFSv4 PseudoFS until it reaches the
highest-level export path, /data/col1/mt1, where it gets read-only access.
However, because the export has been selected, the submount export (Mt1-sub) is
not part of the PseudoFS for the client and read-write access is not given.
Best practice
If your system uses NFSv3 export submounts to give clients read-write access based on the
mount path, consider this behavior before using NFSv4 with these submount exports.
With NFSv4, each client has an individual PseudoFS.
NFSv4 Configuration
The default Data Domain system configuration only enables NFSv3. To use NFSv4, you must first
enable the NFSv4 server.
To ensure that existing clients can use NFS version 3, version 4, or both, modify the NFS
version string on the exports as appropriate. The following example modifies all exports to
accept both versions 3 and 4:
# nfs export modify all clients all options version=3:4
For more information about the nfs export command, see the Data Domain Operating
System Command Reference Guide.
When configuring your system for Kerberos, you use the same commands that are used for
NFSv3. See the nfsv3 chapter of the Data Domain Operating System Command Reference Guide for
more information.
3. Copy the keytab file to the Data Domain system at the following location:
/ddr/var/krb5.keytab
4. Create one of the following principals for the client and export that principal to the keytab
file:
nfs/<client_dns_name>@<REALM>
root/<client_dns_name>@<REALM>
/etc/krb5.keytab
Note: It is recommended that you use an NTP server to keep the time synchronized on
all entities.
4. (Optional) Make the nfs4-domain the same as the Kerberos realm using the nfs option
command:
nfs option set nfs4-domain <kerberos-realm>
5. Add a client to an existing export by adding sec=krb5 to the nfs export add command:
nfs export add <export-name> clients * options version=4,sec=krb5
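For example, the following hedged sketch combines the two commands above, assuming the krb5.test realm used elsewhere in this chapter and a hypothetical export named krb_export:
# nfs option set nfs4-domain krb5.test
# nfs export add krb_export clients * options version=4,sec=krb5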
Configuring Clients
Procedure
1. Configure the DNS server and verify that forward and reverse lookups are working.
2. Configure the KDC and Kerberos realm by editing the /etc/krb5.conf configuration file.
You might need to perform this step based on the client operating system you are using.
3. Configure NIS or another external name mapping service.
4. (Optional) Edit the /etc/idmapd.conf file to ensure it is the same as the Kerberos realm.
You might need to perform this step based on the client operating system you are using.
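On many Linux clients, the relevant entry is the Domain setting in /etc/idmapd.conf. A minimal sketch, assuming the krb5.test realm used in this chapter (the file location and format vary by client operating system):
[General]
Domain = krb5.test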
5. Verify the keytab file /etc/krb5.keytab contains an entry for the nfs/ service principal or
the root/ principal.
[root@fc22 ~]# klist -k
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
----
--------------------------------------------------------------------------
3 nfs/fc22.domain-name@domain-name
6. Mount the export using the sec=krb5 option.
[root@fc22 ~]# mount ddr12345.<domain-name>:/data/col1/mtree1 /mnt/nfs4 -o sec=krb5,vers=4
Kerberos is automatically set up on the Data Domain system, and the required nfs/ service
principal is automatically created on the KDC.
2. Configure NIS using the authentication nis command:
# authentication nis servers add <windows-ad-server>
# authentication nis domain set <ad-realm>
# authentication nis enable
NIS Domains
NIS Domain in AD Master server NIS Domain in UNIX
---------------- ------------- ----------------
corp win-ad-server corp
4. Assign AD users and groups UNIX UID/GIDs for the NFSv4 server.
a. Go to Server Manager > Tools > Active Directory.
b. Open the Properties for an AD user or group.
c. Under the UNIX Attributes tab, fill in the NIS domain, UID, and Primary GID fields.
n The capacity and shelf type license for the destination enclosures
l Storage migration is based on capacity, not enclosure count. Therefore:
n One source enclosure can be migrated to one destination enclosure.
n One source enclosure can be migrated to multiple destination enclosures.
n Multiple source enclosures can be migrated to one destination enclosure.
n Multiple source enclosures can be migrated to multiple destination enclosures.
l The storage migration licensing process consists of:
1. Updating the elicense installed on the system with the storage migration feature license and
the capacity and shelf type license for the destination enclosures before running the
migration operation.
2. Updating the elicense installed on the system to remove the original capacity and shelf type
license and the storage migration feature license after the migration operation is complete.
l The destination enclosures must:
n Be unassigned shelves with the drives in an unused state.
n Be licensed for sufficient capacity to receive the data from the source enclosures, with the
license installed on the system
n Be supported on the DD system model.
n Contain at least as much usable capacity as the enclosures they are replacing.
Note: It is not possible to determine the utilization of the source shelf. The system
performs all calculations based on the capacity of the shelf.
l The DD system model must have sufficient memory to support the active tier storage capacity
of the new enclosures.
l Data migration is not supported for disks in the system controller.
l CAUTION Do not upgrade DD OS until the in-progress storage migration is complete.
l Storage migration cannot start when the file system is disabled or while a DD OS upgrade is in
progress, another migration is in progress, or a RAID reconstruction is in progress.
Note: If a storage migration is in progress, a new storage migration license is required to
start a new storage migration operation after the in-progress migration completes. The
presence or absence of a storage migration license is reported as part of the upgrade
precheck.
l All specified source enclosures must be in the same tier (active or archive).
l There can be only one disk group in each source enclosure, and all disks in the disk group must
be installed within the same enclosure.
l All disks in each destination enclosure must be of the same type (for example, all SATA or all
SAS).
l After migration begins, the destination enclosures cannot be removed.
l Source enclosures cannot be removed until migration is complete and finalized.
l The storage migration duration depends on the system resources (which differ for different
system models), the availability of system resources, and the data quantity to migrate. Storage
migration can take days or weeks to complete.
Due to the weight of the shelves, approximately 225 lbs when fully loaded, read this
section before proceeding with a storage migration to DS60 shelves.
Be aware of the following considerations when working with the DS60 shelf:
CAUTION
l Loading shelves at the top of the rack may cause the shelf to tip over.
l Validate that the floor can support the total weight of the DS60 shelves.
l Validate that the racks can provide enough power to the DS60 shelves.
l When adding more than five DS60s in the first rack, or more than six DS60s in the second
rack, stabilizer bars and a ladder are required to maintain the DS60 shelves.
3. When a storage migration is in progress, you can also view the status by selecting Health >
Jobs.
The Add Licenses button allows you to add storage licenses for the new enclosures as
needed, without interrupting the current task.
8. In the Review Migration Plan dialog, review the estimated migration schedule, then click
Next.
9. Review the precheck results in the Verify Migration Preconditions dialog, then click Close.
Results
If any of the precheck tests fail, resolve the issue before you start the migration.
8. In the Review Migration Plan dialog, review the estimated migration schedule, then click
Start.
9. In the Start Migration dialog, click Start.
The Migrate dialog appears and updates during the three phases of the migration: Starting
Migration, Migration in Progress, and Copy Complete.
10. When the Migrate dialog title displays Copy Complete and a filesystem restart is acceptable,
click Finalize.
Note: This task restarts the filesystem and typically takes 10 to 15 minutes. The system
is unavailable during this time.
Results
When the migration finalize task is complete, the system is using the destination enclosures and
the source enclosures can be removed.
P4. The current migration request is the same as the interrupted migration request.
Resume and complete the interrupted migration.
P8. Source enclosures are in the same active tier or retention unit.
The system supports storage migration from either the active tier or the retention tier. It does
not support migration of data from both tiers at the same time.
You can click Pause to suspend the migration and later click Resume to continue the migration.
The Low, Medium, and High buttons define throttle settings for storage migration resource
demands. A low throttle setting gives storage migration a lower resource priority, which results in a
slower migration and requires fewer system resources. Conversely, a high throttle setting gives
storage migration a higher resource priority, which results in a faster migration and requires more
system resources. The medium setting selects an intermediate priority.
You do not have to leave this dialog open for the duration of the migration. To check the status of
the migration after closing this dialog, select Hardware > Storage and view the migration status.
To return to this dialog from the Hardware/Storage page, click Manage Migration. The migration
progress can also be viewed by selecting Health > Jobs.
Migrate - Copy Complete
When the copy is complete, the migration process waits for you to click Finalize. During this final
stage, which takes 10 to 15 minutes, the filesystem is restarted and the system is not available. It
is a good practice to start this stage during a maintenance window or a period of low system
activity.
The source disks should be in the active state, and the destination disks should be in the
unknown state.
5. Run the storage migration precheck command to determine if the system is ready for the
migration.
# storage migration precheck source-enclosures 7:2 destination-enclosures
7:4
8. Optionally, view the disk states for the source and destination disks during the migration.
# disk show state
During the migration, the source disks should be in the migrating state, and the destination
disks should be in the destination state.
9. Review the migration status as needed.
# storage migration status
10. View the disk states for the source and destination disks.
# disk show state
During the migration, the source disks should be in the migrating state, and the destination
disks should be in the destination state.
11. When the migration is complete, update the configuration to use the destination enclosures.
Note: This task restarts the file system and typically takes 10 to 15 minutes. The system
is unavailable during this time.
storage migration finalize
12. If you want to remove all data from each of the source enclosures, remove the data now.
storage sanitize start enclosure <enclosure-id>[:<pack-id>]
Note: The storage sanitize command does not produce a certified data erasure. Data
Domain offers certified data erasure as a service. For more information, contact your
Data Domain representative.
13. View the disk states for the source and destination disks.
# disk show state
After the migration, the source disks should be in the unknown state, and the destination
disks should be in the active state.
Results
When the migration finalize task is complete, the system is using the destination storage and the
source storage can be removed.
elicense update
# elicense update mylicense.lic
New licenses: Storage Migration
Feature licenses:
## Feature Count Mode Expiration Date
-- ----------- ----- --------------- ---------------
1 REPLICATION 1 permanent (int) n/a
2 VTL 1 permanent (int) n/a
3 Storage Migration 1 permanent (int)
-- ----------- ----- --------------- ---------------
** This will replace all existing Data Domain licenses on the system with the above EMC ELMS
licenses.
Do you want to proceed? (yes|no) [yes]: yes
eLicense(s) updated.
Source enclosures:
Disks Count Disk Disk Enclosure Enclosure
Group Size Model Serial No.
-------- ----- ----- ---------- --------- --------------
2.1-2.15 15 dg1 1.81 TiB ES30 APM00111103820
-------- ----- ----- ---------- --------- --------------
Total source disk size: 27.29 TiB
Destination enclosures:
Disks Count Disk Disk Enclosure Enclosure
Group Size Model Serial No.
---------- ----- ------- -------- --------- --------------
11.1-11.15 15 unknown 931.51 GiB ES30 APM00111103840
---------- ----- ------- -------- --------- --------------
Total destination disk size: 13.64 TiB
Note: Currently storage migration is only supported on the active node. Storage migration is
not supported on the standby node of an HA cluster.
Caching the file system metadata on SSDs improves I/O performance for both traditional and
random workloads.
For traditional workloads, offloading random access to metadata from HDDs to SSDs allows the
hard drives to accommodate streaming write and read requests.
For random workloads, SSD cache provides low latency metadata operations, which allows the
HDDs to serve data requests instead of cache requests.
Read cache on SSD improves random read performance by caching frequently accessed data.
Writing data to NVRAM combined with low latency metadata operations to drain the NVRAM
faster improve random write latency. The absence of cache does not prevent file system
operation, it only impacts file system performance.
When the cache tier is first created, a file system restart is only required if the cache tier is being
added after the file system is running. For new systems that come with cache tier disks, no file
system restart is required if the cache tier is created before enabling the file system for the first
time. Additional cache can be added to a live system, without the need to disable and enable the
file system.
Note: DD9500 systems that were upgraded from DD OS 5.7 to DD OS 6.0 require a one-time
file system restart after creating the cache tier for the first time.
One specific condition applies to SSDs: when the number of spare blocks remaining gets close to
zero, the SSD enters a read-only condition. When a read-only condition occurs, DD OS treats the
drive as read-only cache and sends an alert.
MDoF is supported on the following Data Domain systems:
l DD6300
l DD6800
l DD9300
l DD9500
l DD9800
l DD VE instances, including DD3300 systems, in capacity configurations of 16 TB and higher
(SSD Cache Tier for DD VE)
96 GB (Expanded) 2 1600 GB
DD VE 16 TB 160 GB
DD VE 32 TB 320 GB
DD VE 48 TB 480 GB
DD VE 64 TB 640 GB
DD VE 96 TB 960 GB
DD3300 8 TB 160 GB
DD3300 16 TB 160 GB
DD3300 32 TB 320 GB
l When SSDs are deployed within a controller, those SSDs are treated as internal root drives.
They display as enclosure 1 in the output of the storage show all command.
l Manage individual SSDs with the disk command the same way HDDs are managed.
l Run the storage add command to add an individual SSD or SSD enclosure to the SSD cache
tier.
l The SSD cache tier space does not need to be managed. The file system draws the required
storage from the SSD cache tier and shares it among its clients.
l The filesys create command creates an SSD volume if SSDs are available in the system.
Note: If SSDs are added to the system later, the system should automatically create the
SSD volume and notify the file system. SSD Cache Manager notifies its registered clients
so they can create their cache objects.
l If the SSD volume contains only one active drive, the last drive to go offline will come back
online if the active drive is removed from the system.
The next section describes how to manage the SSD cache tier from Data Domain System
Manager, and with the DD OS CLI.
CLI Equivalent
When the cache tier SSDs are installed in the head unit:
a. Add the SSDs to the cache tier.
# storage add disks 1.13,1.14 tier cache
Checking storage requirements...done
Adding disk 1.13 to the cache tier...done
SSD alerts
There are three alerts specific to the SSD cache tier.
The SSD cache tier alerts are:
l Licensing
If the file system is enabled and less physical cache capacity is present than the license permits,
an alert is generated that reports the current SSD capacity present and the licensed capacity.
This alert is classified as a warning alert. The absence of cache does not
prevent file system operation, it only impacts file system performance. Additional cache can be
added to a live system, without the need to disable and enable the file system.
l Read only condition
When the number of spare blocks remaining gets close to zero, the SSD enters a read only
condition. When a read only condition occurs, DD OS treats the drive as read-only cache.
Alert EVT-STORAGE-00001 displays when the SSD is in a read-only state and should be
replaced.
l SSD end of life
When an SSD reaches the end of its lifespan, the system generates a hardware failure alert
identifying the location of the SSD within the SSD shelf. This alert is classified as a critical
alert.
Alert EVT-STORAGE-00016 displays when the EOL counter reaches 98. The drive is failed
proactively when the EOL counter reaches 99.
The thin protocol is a lightweight daemon for VDisk and DD VTL that responds to SCSI commands
when the primary protocol can't. For Fibre Channel environments with multiple protocols, thin
protocol:
l Prevents initiator hangs
l Prevents unnecessary initiator aborts
l Prevents initiator devices from disappearing
l Supports a standby mode
l Supports fast and early discoverable devices
l Enhances protocol HA behavior
l Doesn't require fast registry access
For More Information about DD Boost and the scsitarget Command (CLI)
For more information about using DD Boost through the DD System Manager, see the related
chapter in this book. For other types of information about DD Boost, see the Data Domain Boost for
OpenStorage Administration Guide.
This chapter focuses on using SCSI Target through the DD System Manager. After you have
become familiar with basic tasks, the scsitarget command in the Data Domain Operating
System Command Reference Guide provides more advanced management tasks.
When there is heavy DD VTL traffic, avoid running the scsitarget group use command,
which switches the in-use endpoint lists for one or more SCSI Target or vdisk devices in a group
between primary and secondary endpoint lists.
Enabling NPIV
NPIV (N_Port ID Virtualization) is a Fibre Channel feature in which multiple endpoints can share a
single physical port. NPIV eases hardware requirements and provides endpoint failover/failback
capabilities. NPIV is not configured by default; you must enable it.
About this task
Note: NPIV is enabled by default in HA configuration.
Note: After NPIV is enabled, the "Secondary System Address" must be specified at each
of the endpoints. If not, endpoint failover will not occur.
l Multiple DD systems can be consolidated into a single DD system; however, the number of
HBAs remains the same on the single DD system.
l Endpoint failover is triggered when FC-SSM detects that a port has gone from online to
offline. If the physical port is offline before scsitarget is enabled, and the port is still offline
after scsitarget is enabled, an endpoint failover is not possible because FC-SSM does not
generate a port offline event. If the port comes back online and auto-failback is enabled,
any failed-over endpoints that use that port as a primary port fail back to the primary port.
The Data Domain HA feature requires NPIV to move WWNs between the nodes of an HA pair
during the failover process.
Note: Before enabling NPIV, the following conditions must be met:
l The DD system must be running DD OS 5.7.
l All ports must be connected to 4 Gb, 8 Gb, or 16 Gb Fibre Channel HBAs and SLICs.
l The DD system ID must be valid, that is, it must not be 0.
In addition, port topologies and port names will be reviewed and may prevent NPIV from being
enabled:
l NPIV is allowed if the topology for all ports is loop-preferred.
l NPIV is allowed if the topology for some of the ports is loop-preferred; however, NPIV
must be disabled for ports that are loop-only, or you must reconfigure the topology to
loop-preferred for proper functionality.
l NPIV is not allowed if none of the ports has a topology of loop-preferred.
l If port names are present in access groups, the port names are replaced with their
associated endpoint names.
Procedure
1. Select Hardware > Fibre Channel.
2. Next to NPIV: Disabled, select Enable.
3. In the Enable NPIV dialog, you will be warned that all Fibre Channel ports must be disabled
before NPIV can be enabled. If you are sure that you want to do this, select Yes.
CLI Equivalent
a. Make sure (global) NPIV is enabled.
# scsitarget transport option show npiv
SCSI Target Transport Options
Option Value
------ --------
npiv disabled
------ --------
b. If NPIV is disabled, then enable it. You must first disable all ports.
# scsitarget port disable all
All ports successfully disabled.
# scsitarget transport option set npiv enabled
Enabling FiberChannel NPIV mode may require SAN zoning to
be changed to configure both base port and NPIV WWPNs.
Any FiberChannel port names used in the access groups will
be converted to their corresponding endpoint names in order
to prevent ambiguity.
Do you want to continue? (yes|no) [no]:
Disabling NPIV
Before you can disable NPIV, you must not have any ports with multiple endpoints.
About this task
Note: NPIV is required for HA configuration. It is enabled by default and cannot be disabled.
Procedure
1. Select Hardware > Fibre Channel.
Resources tab
The Hardware > Fibre Channel > Resources tab displays information about ports, endpoints, and
initiators.
Item Description
Link Status Link status: either Online or Offline; that is, whether or not
the port is up and capable of handling traffic.
Item Description
Link Status Either Online or Offline; that is, whether or not the port is up
and capable of handling traffic.
Configuring a port
Ports are discovered, and a single endpoint is automatically created for each port, at startup.
About this task
The properties of the base port depend on whether NPIV is enabled:
l In non-NPIV mode, ports use the same properties as the endpoint, that is, the WWPN for the
base port and the endpoint are the same.
l In NPIV mode, the base port properties are derived from default values, that is, a new WWPN
is generated for the base port and is preserved to allow consistent switching between NPIV
modes. Also, NPIV mode provides the ability to support multiple endpoints per port.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Under Ports, select a port, and then select Modify (pencil).
3. In the Configure Port dialog, select whether to automatically enable or disable NPIV for this
port.
4. For Topology, select Loop Preferred, Loop Only, Point to Point, or Default.
5. For Speed, select 1, 2, 4, 8, or 16 Gbps, or auto.
6. Select OK.
Enabling a port
Ports must be enabled before they can be used.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Ports > Enable. If all ports are already enabled, a message to that
effect is displayed.
3. In the Enable Ports dialog, select one or more ports from the list, and select Next.
4. After the confirmation, select Next to complete the task.
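CLI equivalent (a hedged sketch that parallels the scsitarget port disable all command shown elsewhere in this chapter; verify the exact syntax in the Data Domain Operating System Command Reference Guide):
# scsitarget port enable all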
Disabling a port
You can simply disable a port (or ports), or you can choose to fail over all endpoints on the port (or
ports) to another port.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Ports > Disable.
3. In the Disable Ports dialog, select one or more ports from the list, and select Next.
4. In the confirmation dialog, you can continue with simply disabling the port, or you can choose
to fail over all endpoints on the ports to another port.
Adding an endpoint
An endpoint is a virtual object that is mapped to an underlying virtual port. In non-NPIV mode (not
available on HA configuration), only a single endpoint is allowed per physical port, and the base
port is used to configure that endpoint to the fabric. When NPIV is enabled, multiple endpoints are
allowed per physical port, each using a virtual (NPIV) port, and endpoint failover/failback is
enabled.
About this task
Note: Non-NPIV mode is not available on HA configurations. NPIV is enabled by default and
cannot be disabled.
Note: In NPIV mode, endpoints:
l have a primary system address.
l may have zero or more secondary system addresses.
l are all candidates for failover to an alternate system address on failure of a port; however,
failover to a marginal port is not supported.
l may be failed back to use their primary port when the port comes back up online.
Note: When using NPIV, it is recommended that you use only one protocol (that is, DD VTL
Fibre Channel, DD Boost-over-Fibre Channel, or vDisk Fibre Channel) per endpoint. For
failover configurations, secondary endpoints should also be configured to have the same
protocol as the primary.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Under Endpoints, select Add (+ sign).
3. In the Add Endpoint dialog, enter a Name for the endpoint (from 1 to 128 characters). The
field cannot be empty or be the word “all,” and cannot contain the characters asterisk (*),
question mark (?), front or back slashes (/, \), or right or left parentheses [(,)].
4. For Endpoint Status, select Enabled or Disabled.
5. If NPIV is enabled, for Primary system address, select from the drop-down list. The primary
system address must be different from any secondary system address.
6. If NPIV is enabled, for Fails over to secondary system addresses, check the appropriate box
next to the secondary system address.
7. Select OK.
Configuring an endpoint
After you have added an endpoint, you can modify it using the Configure Endpoint dialog.
About this task
Note: When using NPIV, it is recommended that you use only one protocol (that is, DD VTL
Fibre Channel, DD Boost-over-Fibre Channel, or vDisk Fibre Channel) per endpoint. For
failover configurations, secondary endpoints should also be configured to have the same
protocol as the primary.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Under Endpoints, select an endpoint, and then select Modify (pencil).
3. In the Configure Endpoint dialog, enter a Name for the endpoint (from 1 to 128 characters).
The field cannot be empty or be the word “all,” and cannot contain the characters asterisk
(*), question mark (?), front or back slashes (/, \), or right or left parentheses [(,)].
4. For Endpoint Status, select Enabled or Disabled.
5. For Primary system address, select from the drop-down list. The primary system address
must be different from any secondary system address.
6. For Fails over to secondary system addresses, check the appropriate box next to the
secondary system address.
7. Select OK.
4. Modify the endpoint you want to use, ep-1, by assigning it the new system address 10a:
# scsitarget endpoint modify ep-1 system-address 10a
5. Enable all endpoints:
# scsitarget endpoint enable all
Enabling an endpoint
Enabling an endpoint enables the port only if it is currently disabled, that is, you are in non-NPIV
mode.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Endpoints > Enable. If all endpoints are already enabled, a message to
that effect is displayed.
3. In the Enable Endpoints dialog, select one or more endpoints from the list, and select Next.
4. After the confirmation, select Next to complete the task.
Disabling an endpoint
Disabling an endpoint does not disable the associated port, unless all endpoints using the port are
disabled, that is, you are in non- NPIV mode.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Endpoints > Disable.
3. In the Disable Endpoints dialog, select one or more endpoints from the list, and select Next.
If an endpoint is in use, you are warned that disabling it might disrupt the system.
4. Select Next to complete the task.
Deleting an endpoint
You may want to delete an endpoint if the underlying hardware is no longer available. However, if
the underlying hardware is still present, or becomes available, a new endpoint for the hardware is
discovered automatically and configured based on default values.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Endpoints > Delete.
3. In the Delete Endpoints dialog, select one or more endpoints from the list, and select Next.
If an endpoint is in use, you are warned that deleting it might disrupt the system.
4. Select Next to complete the task.
Adding an initiator
Add initiators to allow backup clients to connect to the system and read and write data using the
FC (Fibre Channel) protocol. A specific initiator can support DD Boost over FC or DD VTL, but not
both. A maximum of 1024 initiators can be configured for a DD system.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Under Initiators, select Add (+ sign)
3. In the Add Initiator dialog, enter the port’s unique WWPN in the specified format.
4. Enter a Name for the initiator.
5. Select the Address Method: Auto is used for standard addressing, and VSA (Volume Set
Addressing) is used primarily for addressing virtual buses, targets, and LUNs.
6. Select OK.
DD OS 5.1 up to 5.3
If a port is offline, an alert notifies you that the link is down. This alert is managed, which means it
stays active until cleared; the alert is cleared when the DD VTL FC port is online or disabled. If the
port is not in use, disable it unless it needs to be monitored.
DD OS 5.0 up to 5.1
If a port is offline, an alert notifies you that the link is down. The alert is not managed, which means
it does not stay active and does not appear in the current alerts list. When the port is online, an
alert notifies you that the link is up. If the port is not in use, disable it unless it needs to be
monitored.
DD OS 4.9 up to 5.0
An FC port must be included in a DD VTL group to be monitored.
3. To select an existing user, select the user name in the drop-down list.
If possible, select a user name with management role privileges set to none.
4. To create and select a new user, select Create a new Local User and do the following:
a. Enter the new user name in the User field.
The user must be configured in the backup application to connect to the Data Domain
system.
4. Click Remove.
After removal, the user remains in the DD OS access list.
Enabling DD Boost
Use the DD Boost Settings tab to enable DD Boost and to select or add a DD Boost user.
Procedure
1. Select Protocols > DD Boost.
2. Click Enable in the DD Boost Status area.
The Enable DD Boost dialog box is displayed.
3. Select an existing user name from the menu, or add a new user by supplying the name,
password, and role.
Configuring Kerberos
You can configure Kerberos by using the DD Boost Settings tab.
Procedure
1. Select Protocols > DD Boost > Settings.
2. Click Configure in the Kerberos Mode status area.
The Authentication tab under Administration > Access is displayed.
Note: You can also enable Kerberos by going directly to Authentication under
Administration > Access in System Manager.
3. Under Active Directory/Kerberos Authentication, click Configure.
The Active Directory/Kerberos Authentication dialog box is displayed.
Choose the type of Kerberos Key Distribution Center (KDC) you want to use:
l Disabled
Note: If you select Disabled, NFS clients do not use Kerberos authentication. CIFS
clients use Workgroup authentication.
l Windows/Active Directory
Note: Enter the Realm Name, User Name, and Password for Active Directory
authentication.
l Unix
a. Enter the Realm Name, the IP Address/Host Names of one to three KDC servers.
b. Upload the keytab file from one of the KDC servers.
Disabling DD Boost
Disabling DD Boost drops all active connections to the backup server. When you disable or destroy
DD Boost, the DD Boost FC service is also disabled.
Before you begin
Ensure there are no jobs running from your backup application before disabling.
About this task
Note: File replication started by DD Boost between two Data Domain restorers is not canceled.
Procedure
1. Select Protocols > DD Boost.
2. Click Disable in the DD Boost Status area.
3. Click OK in the Disable DD Boost confirmation dialog box.
Item Description
Last 24 hr Pre-Comp The amount of raw data from the backup application that has
been written in the last 24 hours.
Last 24 hr Post-Comp The amount of storage used after compression in the last 24
hours.
Last 24 hr Comp Ratio The compression ratio for the last 24 hours.
Weekly Avg Post-Comp The average amount of compressed storage used in the last
five weeks.
Last Week Post-Comp The average amount of compressed storage used in the last
seven days.
Weekly Avg Comp Ratio The average compression ratio for the last five weeks.
Last Week Comp Ratio The average compression ratio for the last seven days.
Note: The Data Movement tab is available only if an optional Data Domain Extended
Retention (formerly DD Archiver) or Data Domain Cloud Tier (DD Cloud Tier) license is
installed.
l Takes you to Replication > On-Demand > File Replication when you click the View DD Boost
Replications link.
Note: A DD Replicator license is required for DD Boost to display tabs other than the File
Replication tab.
l tilde (~)
l apostrophe (unslanted single quotation mark)
l single slanted quotation mark (')
l minus sign (-)
l underscore (_)
4. To select an existing username that will have access to this storage unit, select the user
name in the dropdown list.
If possible, select a username with management role privileges set to none.
5. To create and select a new username that will have access to this storage unit, select
Create a new Local User and:
a. Enter the new user name in the User box.
The user must be configured in the backup application to connect to the Data Domain
system.
Note: When setting both soft and hard limits, a quota’s soft limit cannot exceed the
quota’s hard limit.
7. Click Create.
8. Repeat the above steps for each Data Domain Boost-enabled system.
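CLI sketch (hedged): storage units can also be created with the ddboost storage-unit command; su1 and ddboost_user below are hypothetical names, and the quota options are described in the Data Domain Operating System Command Reference Guide:
# ddboost storage-unit create su1 user ddboost_user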
Total Files The total number of file images on the storage unit. For
compression details that you can download to a log file, click
the Download Compression Details link. The generation can
l The Quota panel shows quota information for the selected storage unit.
Table 130 Quota panel
Quota Enforcement Enabled or disabled. Clicking Quota takes you to the Data
Management > Quota tab where you can configure quotas.
Pre-Comp Soft Limit Current value of soft quota set for the storage unit.
Pre-Comp Hard Limit Current value of hard quota set for the storage unit.
To modify the pre-comp soft and hard limits shown in the tab:
1. Click the Quota link in the Quota panel.
2. In the Configure Quota dialog box, enter values for hard and soft quotas and select the unit
of measurement: MiB, GiB, TiB, or PiB. Click OK.
l Snapshots
The Snapshots panel shows information about the storage unit’s snapshots.
Item Description
Total Snapshots The total number of snapshots created for this MTree. A total
of 750 snapshots can be created for each MTree.
Expired The number of snapshots in this MTree that have been marked
for deletion, but have not been removed with the clean
operation as yet.
Unexpired The number of snapshots in this MTree that are marked for
keeping.
Oldest Snapshot The date of the oldest snapshot for this MTree.
Newest Snapshot The date of the newest snapshot for this MTree.
Assigned Snapshot Schedules The name of the snapshot schedule assigned to this MTree.
n Create a new schedule: Click Assign Snapshot Schedules > Create Snapshot Schedule.
Enter the new schedule’s name.
Note: The snapshot name can be composed only of letters, numbers, _, -, %d (numeric
day of the month: 01-31), %a (abbreviated weekday name), %m (numeric month of the
year: 01-12), %b (abbreviated month name), %y (year, two digits), %Y (year, four digits),
%H (hour: 00-23), and %M (minute: 00-59), following the pattern shown in the dialog
box. Enter the new pattern and click Validate Pattern & Update Sample. Click Next.
– Select when the schedule is to be executed: weekly, every day (or selected days),
monthly on specific days that you select by clicking that date in the calendar, or on
the last day of the month. Click Next.
– Enter the times of the day when the schedule is to be executed: Either select At
Specific Times or In Intervals. If you select a specific time, select the time from the
list. Click Add (+) to add a time (24-hour format). For intervals, select In Intervals
and set the start and end times and how often (Every), such as every eight hours.
Click Next.
– Enter the retention period for the snapshots in days, months, or years. Click Next.
– Review the Summary of your configuration. Click Back to edit any of the values.
Click Finish to create the schedule.
n Click the Snapshots link to go to the Data Management > Snapshots tab.
Space Usage tab
The Space Usage tab graph displays a visual representation of data usage for the storage unit over
time.
l Click a point on a graph line to display a box with data at that point.
l Click Print (at the bottom on the graph) to open the standard Print dialog box.
l Click Show in new window to display the graph in a new browser window.
There are two types of graph data displayed: Logical Space Used (Pre-Compression) and Physical
Capacity Used (Post-Compression).
Daily Written tab
The Daily Written view contains a graph that displays a visual representation of data that is written
daily to the system over a period of time, selectable from 7 to 120 days. The data amounts are
shown over time for pre- and post-compression amounts.
Data Movement tab
A graph in the same format as the Daily Written graph that shows the amount of disk space moved
to the DD Extended Retention storage area (if the DD Extended Retention license is enabled).
4. To rename the storage unit, edit the text in the Name field.
5. To select a different existing user, select the user name in the drop-down list.
If possible, select a username with management role privileges set to none.
6. To create and select a new user, select Create a new Local User and do the following:
a. Enter the new user name in the User box.
The user must be configured in the backup application to connect to the Data Domain
system.
Note: When setting both soft and hard limits, a quota’s soft limit cannot exceed the
quota’s hard limit.
8. Click Modify.
Procedure
1. Select Protocols > DD Boost > Storage Units > More Tasks > Undelete Storage Unit....
2. In the Undelete Storage Units dialog box, select the storage unit(s) that you want to
undelete.
3. Click OK.
The Data Domain system compares the global authentication mode and encryption strength
against the per-client authentication mode and encryption strength to calculate the
effective authentication mode and authentication encryption strength. The system does not
use the highest authentication mode from one entry, and the highest encryption settings
from a different entry. The effective authentication mode and encryption strength come
from the single entry that provides the highest authentication mode.
6. Click OK.
Note: You can also manage distributed segment processing via the ddboost option
commands, which are described in detail in the Data Domain Operating System Command
Reference Guide.
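A hedged CLI sketch of one such command; the option name distributed-segment-processing is an assumption to verify against the Command Reference Guide:
# ddboost option show
# ddboost option set distributed-segment-processing enabled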
Virtual synthetics
A virtual synthetic full backup is the combination of the last full (synthetic or full) backup and all
subsequent incremental backups. Virtual synthetics are enabled by default.
Low-bandwidth optimization
If you use file replication over a low-bandwidth network (WAN), you can increase replication speed
by using low bandwidth optimization. This feature provides additional compression during data
transfer. Low bandwidth compression is available to Data Domain systems with an installed
Replication license.
Low-bandwidth optimization, which is disabled by default, is designed for use on networks with
less than 6 Mbps aggregate bandwidth. Do not use this option if maximum file system write
performance is required.
Note: You can also manage low bandwidth optimization via the ddboost file-
replication commands, which are described in detail in the Data Domain Operating System
Command Reference Guide.
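A hedged CLI sketch of one such command; the option name low-bw-optim is an assumption to verify against the Command Reference Guide:
# ddboost file-replication option set low-bw-optim enabled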
4. Select Protocols > DD Boost > More Tasks > Manage Certificates....
Note: If you try to remotely manage certificates on a managed system, DD System
Manager displays an information message at the top of the certificate management
dialog. To manage certificates for a system, you must start DD System Manager on that
system.
a. Select I want to upload the public key as a .pem file and use a generated private key.
b. Click Browse and select the host certificate file to upload to the system.
c. Click Add.
4. Select Protocols > DD Boost > More Tasks > Manage Certificates....
Note: If you try to remotely manage certificates on a managed system, DD System
Manager displays an information message at the top of the certificate management
dialog. To manage certificates for a system, you must start DD System Manager on that
system.
Note: DD Boost offers global authentication and encryption options to defend your system
against man-in-the-middle (MITM) attacks. You specify authentication and encryption settings
using the GUI, or CLI commands on the Data Domain system. For details, see the Data Domain
Boost for OpenStorage 3.4 Administration Guide, and Adding a DD Boost client on page 325 or
the Data Domain 6.1 Command Reference Guide.
6. Click OK.
7. Click OK.
Interfaces
IFGROUP supports physical and virtual interfaces.
An IFGROUP interface is a member of a single IFGROUP <group-name> and may consist of:
l Physical interface such as eth0a
l Virtual interface, created for link failover or link aggregation, such as veth1
l Virtual alias interface such as eth0a:2 or veth1:2
l Virtual VLAN interface such as eth0a.1 or veth1.1
l Within an IFGROUP <group-name>, all interfaces must be on unique interfaces (Ethernet,
virtual Ethernet) to ensure failover in the event of network error.
IFGROUP provides full support for static IPv6 addresses, providing the same capabilities for IPv6
as for IPv4. Concurrent IPv4 and IPv6 client connections are allowed. A client connected with IPv6
sees IPv6 IFGROUP interfaces only. A client connected with IPv4 sees IPv4 IFGROUP interfaces
only. Individual IFGROUPs include all IPv4 addresses or all IPv6 addresses.
For more information, see the Data Domain Boost for Partner Integration Administration Guide or the
Data Domain Boost for OpenStorage Administration Guide.
Interface enforcement
IFGROUP lets you enforce private network connectivity, ensuring that a failed job does not
reconnect on the public network after network errors.
When interface enforcement is enabled, a failed job can only retry on an alternative private
network IP address. Interface enforcement is only available for clients that use IFGROUP
interfaces.
Interface enforcement is off (FALSE) by default. To enable interface enforcement, you must add
the following setting to the system registry:
system.ENFORCE_IFGROUP_RW=TRUE
After you've made this entry in the registry, you must do a filesys restart for the setting to
take effect.
For more information, see the Data Domain Boost for Partner Integration Administration Guide or the
Data Domain Boost for OpenStorage Administration Guide.
Clients
IFGROUP supports various naming formats for clients. Client selection is based on a specified
order of precedence.
An IFGROUP client is a member of a single ifgroup <group-name> and may consist of:
l A fully qualified domain name (FQDN) such as ddboost.datadomain.com
l A partial host, allowing search on the first n characters of the hostname. For example, when
n=3, valid formats are rtp_.*emc.com and dur_.*emc.com. Five different values of n (1-5)
are supported.
l Wild cards such as *.datadomain.com or “*”
l A short name for the client, such as ddboost
l Client public IP range, such as 128.5.20.0/24
Prior to write or read processing, the client requests an IFGROUP IP address from the server. To
select the client IFGROUP association, the client information is evaluated according to the
following order of precedence.
1. IP address of the connected Data Domain system. If there is already an active connection
between the client and the Data Domain system, and the connection exists on the interface in
the IFGROUP, then the IFGROUP interfaces are made available for the client.
2. Connected client IP range. An IP mask check is done against the client source IP; if the client's
source IP address matches the mask in the IFGROUP clients list, then the IFGROUP interfaces
are made available for the client.
l For IPv4, you can select five different range masks, based on network.
l For IPv6, fixed masks /64, /112, and /128 are available.
This host-range check is useful for separate VLANs with many clients where there isn't a
unique partial hostname (domain).
3. Client Name: abc-11.d1.com
4. Client Domain Name: *.d1.com
5. All Clients: *
For more information, see the Data Domain Boost for Partner Integration Administration Guide.
5. Click OK.
6. In the Configured Clients section, click Add (+).
7. Enter a fully qualified client name or *.mydomain.com.
Note: The * client is initially available to the default group. The * client may only be a
member of one ifgroup.
6. Click OK.
6. Click OK.
Destroying DD Boost
Use this option to permanently remove all of the data (images) contained in the storage units.
When you disable or destroy DD Boost, the DD Boost FC service is also disabled. Only an
administrative user can destroy DD Boost.
Procedure
1. Manually remove (expire) the corresponding backup application catalog entries.
Note: If multiple backup applications are using the same Data Domain system, then
remove all entries from each of those applications’ catalogs.
2. Select Protocols > DD Boost > More Tasks > Destroy DD Boost....
3. Enter your administrative credentials when prompted.
4. Click OK.
Note: If you are using DD System Manager, the SCSI target daemon is automatically
enabled when you enable the DD Boost-over-FC service (later in this procedure).
l Verify that the DD Boost license is installed. In DD System Manager, select Protocols > DD
Boost > Settings. If the Status indicates that DD Boost is not licensed, click Add License and
enter a valid license in the Add License Key dialog box.
CLI equivalents
# license show
# license add license-code
Procedure
1. Select Protocols > DD Boost > Settings.
2. In the Users with DD Boost Access section, specify one or more DD Boost user names.
A DD Boost user is also a DD OS user. When specifying a DD Boost user name, you can
select an existing DD OS user name, or you can create a new DD OS user name and make
that name a DD Boost user. This release supports multiple DD Boost users. For detailed
instructions, see “Specifying DD Boost User Names.”
CLI equivalents
# ddboost enable
Starting DDBOOST, please wait...............
DDBOOST is enabled.
Results
You are now ready to configure the DD Boost-over-FC service on the Data Domain system.
Configuring DD Boost
After you have added user(s) and enabled DD Boost, you need to enable the Fibre Channel option
and specify the DD Boost Fibre Channel server name. Depending on your application, you may also
need to create one or more storage units and install the DD Boost API/plug-in on media servers
that will access the Data Domain system.
Procedure
1. Select Protocols > DD Boost > Fibre Channel.
2. Click Enable to enable Fibre Channel transport.
CLI equivalent
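A likely CLI equivalent for enabling the Fibre Channel transport is shown below; this is a sketch, and the option syntax may differ by release:
# ddboost option set fc enabled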
3. To change the DD Boost Fibre Channel server name from the default (hostname), click Edit,
enter a new server name, and click OK.
CLI equivalent
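A likely CLI equivalent for setting the server name is shown below (my-dfc-server is a placeholder; see the DD OS Command Reference Guide for the exact syntax):
# ddboost fc dfc-server-name set my-dfc-server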
4. Select Protocols > DD Boost > Storage Units to create a storage unit (if not already
created by the application).
You must create at least one storage unit on the Data Domain system, and a DD Boost user
must be assigned to that storage unit. For detailed instructions, see “Creating a Storage
Unit.”
CLI equivalent
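A representative CLI equivalent for creating a storage unit and assigning a DD Boost user is shown below (NewSU and ddboostuser are placeholder names):
# ddboost storage-unit create NewSU user ddboostuser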
Results
You are now ready to verify connectivity and create access groups.
CLI equivalent
# scsitarget initiator show list
Initiator System Address Group Service
------------ ----------------------- ---------- -------
initiator-1 21:00:00:24:ff:31:b7:16 n/a n/a
initiator-2 21:00:00:24:ff:31:b8:32 n/a n/a
initiator-3 25:00:00:21:88:00:73:ee n/a n/a
initiator-4 50:06:01:6d:3c:e0:68:14 n/a n/a
initiator-5 50:06:01:6a:46:e0:55:9a n/a n/a
initiator-6 21:00:00:24:ff:31:b7:17 n/a n/a
initiator-7 21:00:00:24:ff:31:b8:33 n/a n/a
initiator-8 25:10:00:21:88:00:73:ee n/a n/a
initiator-9 50:06:01:6c:3c:e0:68:14 n/a n/a
initiator-10 50:06:01:6b:46:e0:55:9a n/a n/a
tsm6_p23 21:00:00:24:ff:31:ce:f8 SetUp_Test VTL
------------ ----------------------- ---------- -------
2. To assign an alias to an initiator, select one of the initiators and click the pencil (edit) icon.
In the Name field of the Modify Initiator dialog, enter the alias and click OK.
CLI equivalents
# scsitarget initiator rename initiator-1 initiator-renamed
Initiator 'initiator-1' successfully renamed.
# scsitarget initiator show list
Initiator System Address Group Service
----------------- ----------------------- ---------- -------
initiator-2 21:00:00:24:ff:31:b8:32 n/a n/a
3. On the Resources tab, verify that endpoints are present and enabled.
CLI equivalent
# scsitarget endpoint show list
Endpoint      System Address Transport    Enabled Status
------------- -------------- ------------ ------- ------
endpoint-fc-0 5a             FibreChannel Yes     Online
endpoint-fc-1 5b             FibreChannel Yes     Online
------------- -------------- ------------ ------- ------
7. Select one or more initiators. Optionally, replace the initiator name by entering a new one.
Click Next.
CLI equivalent
#ddboost fc group add test-dfc-group initiator initiator-5
Initiator(s) "initiator-5" added to group "test-dfc-group".
An initiator is a port on an HBA attached to a backup client that connects to the system for
the purpose of reading and writing data using the Fibre Channel protocol. The WWPN is the
unique World-Wide Port Name of the Fibre Channel port in the media server.
8. Specify the number of DD Boost devices to be used by the group. This number determines
which devices the initiator can discover and, therefore, the number of I/O paths to the Data
Domain system. The default is one, the minimum is one, and the maximum is 64.
CLI equivalent
# ddboost fc group modify Test device-set count 5
Added 3 devices.
See the Data Domain Boost for OpenStorage Administration Guide for the recommended value
for different clients.
9. Indicate which endpoints to include in the group: all, none, or select from the list of
endpoints. Click Next.
CLI equivalents
# scsitarget group add Test device ddboost-dev8 primary-endpoint all
secondary-endpoint all
Device 'ddboost-dev8' successfully added to group.
# scsitarget group add Test device ddboost-dev8 primary-endpoint
endpoint-fc-1 secondary-endpoint fc-port-0
Device 'ddboost-dev8' is already in group 'Test'.
When presenting LUNs via attached FC ports on HBAs, ports can be designated as primary,
secondary, or none. A primary port for a set of LUNs is the port that is currently advertising
those LUNs to a fabric. A secondary port is a port that will broadcast a set of LUNs in the
event of a primary path failure (this requires manual intervention). A setting of none is used
when you do not want to advertise selected LUNs. The presentation of LUNs is dependent
upon the SAN topology.
10. Review the Summary and make any modifications. Click Finish to create the access group,
which is displayed in the DD Boost Access Groups list.
CLI equivalent
# scsitarget group show detailed
Note: To change settings for an existing access group, select it from the list and click
the pencil icon (Modify).
Settings
Use the Settings tab to enable or disable DD Boost, select clients and users, and specify advanced
options.
The Settings tab shows the DD Boost status (Enabled or Disabled). Use the Status button to
switch between Enabled or Disabled.
Under Allowed Clients, select the clients that are to have access to the system. Use the Add,
Modify, and Delete buttons to manage the list of clients.
Under Users with DD Boost Access, select the users that are to have DD Boost access. Use the
Add, Change Password, and Remove buttons to manage the list of users.
Expand Advanced Options to see which advanced options are enabled. Go to More Tasks > Set
Options to reset these options.
Active Connections
Use the Active Connections tab to see information about clients, interfaces, and outbound files.
IP Network
The IP Network tab lists configured interface groups. Details include whether or not a group is
enabled and any configured client interfaces. Administrators can use the Interface Group menu to
view which clients are associated with an interface group.
Fibre Channel
The Fibre Channel tab lists configured DD Boost access groups. Use the Fibre Channel tab to
create and delete access groups and to configure initiators, devices, and endpoints for DD Boost
access groups.
Storage Units
Use the Storage Units tab to view, create, modify, and delete storage units.
Item Description
Storage Units
Quota Hard Limit The hard quota set for the storage unit.
Last 24hr Pre-Comp The amount of data written to the storage unit in the last 24
hours, before compression.
Last 24hr Post-Comp The amount of data written to the storage unit in the last 24
hours, after compression.
Last 24hr Comp Ratio Compression ratio of the data written to the storage unit in
the last 24 hours.
Weekly Avg Post-Comp Average amount of data written to the storage unit each
week, after compression.
Last Week Post-Comp Amount of data written to the storage unit in the last week,
after compression.
Weekly Avg Comp Ratio Average compression ratio of data written to the storage
unit each week.
Last Week Comp Ratio Compression ratio of the data written to the storage unit in
the last week.
Select a storage unit to see detailed information about it. Detailed information is available on three
tabs:
l Storage Unit tab
Table 136 Storage unit details: Storage Unit tab
Item Description
Total Files The total number of file images on the storage unit.
Used (Post-Comp) The total size after compression of the files in the storage
unit.
Submitted Measurements The number of times the physical capacity of the storage
unit has been measured.
Pre-Comp Soft Limit Current value of soft quota set for the storage unit.
Pre-Comp Hard Limit Current value of hard quota set for the storage unit.
Assigned Snapshot Schedules The snapshot schedules assigned to the storage unit.
l Space Usage tab: Displays a graph showing pre-compression bytes used, post-compression
bytes used, and compression factor.
l Daily Written tab: Displays a graph showing pre-compression bytes written, post-compression
bytes written, and total compression factor.
Planning a DD VTL
The DD VTL (Virtual Tape Library) feature has very specific requirements, such as proper
licensing, interface cards, user permissions, etc. These requirements are listed here, complete with
details and recommendations.
l An appropriate DD VTL license.
n DD VTL is a licensed feature, and you must use NDMP (Network Data Management
Protocol) over IP (Internet Protocol) or DD VTL directly over FC (Fibre Channel).
n An additional license is required for IBM i systems – the I/OS license.
n Adding a DD VTL license through the DD System Manager automatically disables and
enables the DD VTL feature.
DD VTL limits
Before setting up or using a DD VTL, review these limits on size, slots, etc.
l I/O Size – The maximum supported I/O size for any DD system using DD VTL is 1 MB.
l Libraries – DD VTL supports a maximum of 64 libraries per DD system (that is, 64 DD VTL
instances on each DD system).
l Initiators – DD VTL supports a maximum of 1024 initiators or WWPNs (world-wide port names)
per DD system.
l Tape Drives – Information about tape drives is presented in the next section.
l Data Streams – Information about data streams is presented in the following table.
Table 137 Data streams sent to a Data Domain system
[Table 137 lists, for each Data Domain system model, the maximum numbers of concurrent write,
read, and replication streams, expressed as constraints of the form w <= 90; w+r+ReplSrc <= 90;
Total <= 90.]
Number of drives supported by a DD VTL
Number of CPU cores   RAM (in GB)   NVRAM (in GB)   Maximum number of supported drives
40 to 59              NA            NA              540
60 or more            NA            NA              1080
Tape barcodes
When you create a tape, you must assign a unique barcode (never duplicate barcodes as this can
cause unpredictable behavior). Each barcode consists of eight characters: the first six are
numbers or uppercase letters (0-9, A-Z), and the last two are the tape code for the supported
tape type, as shown in the following table.
Note: Although a DD VTL barcode consists of eight characters, either six or eight characters
may be transmitted to a backup application, depending on the changer type.
For multiple tape libraries, barcodes are automatically incremented, if the sixth character (just
before the "L") is a number. If an overflow occurs (9 to 0), numbering moves one position to the
left. If the next character to increment is a letter, incrementation stops. Here are a few sample
barcodes and how each will be incremented:
l 000000L1 creates tapes of 100 GiB capacity and can accept a count of up to 100,000 tapes
(from 000000 to 099999).
l AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to 10,000 tapes
(from 0000 to 9999).
l AAAA00LB creates tapes of 30 GiB capacity and can accept a count of up to 100 tapes (from
00 to 99).
l AAAAAALC creates one tape of 10 GiB capacity. Only one tape can be created with this name.
l AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650 tapes (from
350 to 999).
l 000AAALA creates one tape of 50 GiB capacity. Only one tape can be created with this name.
l 5M7Q3KLB creates one tape of 30 GiB capacity. Only one tape can be created with this name.
tape format LTO-5 drive LTO-4 drive LTO-3 drive LTO-2 drive LTO-1 drive
LTO-5 tape RW — — — —
LTO-4 tape RW RW — — —
LTO-3 tape R RW RW — —
LTO-2 tape — R RW RW —
LTO-1 tape — — R RW RW
Setting up a DD VTL
To set up a simple DD VTL, use the Configuration Wizard, which is described in the Getting
Started chapter.
Similar documentation is available in the Data Domain Operating System Initial Configuration Guide.
Then, continue with the following topics to enable the DD VTL, create libraries, and create and
import tapes.
Note: If the deployment environment includes an AS400 system as a DD VTL client, refer to
Configuring DD VTL default options on page 354 to configure the serial number prefix for VTL
changers and drives before configuring the DD VTL relationship between the Data Domain
system and the AS400 client system.
Managing a DD VTL
You can manage a DD VTL using the Data Domain System Manager (DD System Manager) or the
Data Domain Operating System (DD OS) Command Line Interface (CLI). After you login, you can
check the status of your DD VTL process, check your license information, and review and
configure options.
Logging In
To use a graphical user interface (GUI) to manage your DD Virtual Tape Library (DD VTL), log in to
the DD System Manager.
CLI Equivalent
You can also log in at the CLI:
login as: sysadmin
Data Domain OS
Using keyboard-interactive authentication.
Password:
Accessing DD VTL
From the menu at the left of the DD System Manager, select Protocols > VTL.
Status
In the Virtual Tape Libraries > VTL Service area, the status of your DD VTL process is displayed
at the top, for example, Enabled: Running. The first part of the status is Enabled (on) or
Disabled (off). The second part is one of the following process states.
DD VTL License
The VTL License line tells you whether your DD VTL license has been applied. If it says Unlicensed,
select Add License. Enter your license key in the Add License Key dialog. Select Next and OK.
Note: All license information should have been populated as part of the factory configuration
process; however, if DD VTL was purchased later, the DD VTL license key may not have been
available at that time.
CLI Equivalent
You can also verify that the DD VTL license has been installed at the CLI:
# elicense show
## License Key Feature
-- ------------------- -----------
1 DEFA-EFCD-FCDE-CDEF Replication
2 EFCD-FCDE-CDEF-DEFA VTL
-- ------------------- -----------
If the license is not present, each unit comes with documentation – a quick install card – which will
show the licenses that have been purchased. Enter one of the following commands to populate the
license key.
# license add <license-code>
# elicense update <license-file>
Enabling DD VTL
Enabling DD VTL broadcasts the WWN of the Data Domain HBA to the customer fabric and enables
all libraries and library drives. If your change control processes require a forwarding plan,
enable DD VTL first to facilitate zoning.
Procedure
1. Make sure that you have a DD VTL license and that the file system is enabled.
2. Select Virtual Tape Libraries > VTL Service.
3. To the right of the Status area, select Enable.
CLI Equivalent
# vtl enable
Starting VTL, please wait ...
VTL is enabled.
Disabling DD VTL
Disabling DD VTL closes all libraries and shuts down the DD VTL process.
Procedure
1. Select Virtual Tape Libraries > VTL Service.
2. To the right of the Status area, select Disable.
3. In the Disable Service dialog, select OK.
4. After DD VTL has been disabled, notice that the Status has changed to Disabled: Stopped
in red.
CLI Equivalent
# vtl disable
3. Select OK.
4. Alternatively, to disable all of these service options, select Reset to Factory; the values are
immediately reset to factory defaults.
After you finish
If the DD VTL environment contains an AS400 as a DD VTL client, configure the DD VTL option for
serial-number-prefix manually before adding the AS400 to the DD VTL environment. This is
required to avoid duplicate serial numbers when there are multiple Data Domain systems using DD
VTL. The serial-number-prefix value must:
l Be a unique six digit value such that no other DD VTL on any Data Domain system in the
environment has the same prefix number
l Not end with a zero
Configure this value only once during the deployment of the Data Domain system and the
configuration of DD VTL. It will persist with any future DD OS upgrades on the system. Setting this
value does not require a DD VTL service restart. Any DD VTL library created after setting this value
will use the new prefix for the serial number.
CLI equivalent
# vtl option set serial-number-prefix value
# vtl option show serial-number-prefix
From the More Tasks menu, you can create and delete libraries, as well as search for tapes.
Creating libraries
DD VTL supports a maximum of 64 libraries per system, that is, 64 concurrently active virtual tape
library instances on each DD system.
Before you begin
If the deployment environment includes an AS400 system as a DD VTL client, refer to Configuring
DD VTL default options on page 354 to configure the serial number prefix for VTL changers and
drives before creating the DD VTL library and configuring the DD VTL relationship between the
Data Domain system and the AS400 client system.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries.
2. Select More Tasks > Library > Create
3. In the Create Library dialog, enter the following information:
Number of Drives Enter the number of drives, from 1 to 98 (see Note). The
number of drives to be created will correspond to the number of
data streams that will write to a library.
Note: The maximum number of drives supported by a DD
VTL depends on the number of CPU cores and the amount
of memory installed (both RAM and NVRAM, if applicable)
on a DD system.
Drive Model Select the desired model from the drop-down list:
l IBM-LTO-1
l IBM-LTO-2
l IBM-LTO-3
l IBM-LTO-4
l IBM-LTO-5 (default)
l HP-LTO-3
l HP-LTO-4
Do not mix drive types, or media types, in the same library. This
can cause unexpected results and/or errors in the backup
operation.
Number of Slots Enter the number of slots in the library. Here are some things to
consider:
l The number of slots must be equal to or greater than the
number of drives.
l You can have up to 32,000 slots per individual library
l You can have up to 64,000 slots per system.
l Try to have enough slots so tapes remain in the DD VTL and
never need to be exported to a vault – to avoid reconfiguring
the DD VTL and to ease management overhead.
l Consider any applications that are licensed by the number of
slots.
As an example, for a standard 100-GB cartridge on a DD580,
you might configure 5000 slots. This would be enough to hold
up to 500 TB (assuming reasonably compressible data).
Number of CAPs (Optional) Enter the number of cartridge access ports (CAPs).
l You can have up to 100 CAPs per library.
l You can have up to 1000 CAPs per system.
Check your particular backup software application
documentation on the Online Support Site for guidance.
Changer Model Name Select the desired model from the drop-down list:
l L180 (default)
l RESTORER-L180
l TS3500
l I2000
l I6000
l DDVTL
Check your particular backup software application
documentation on the Online Support Site for guidance. Also
refer to the DD VTL support matrix to see the compatibility of
emulated libraries to supported software.
Options
4. Select OK.
After the Create Library status dialog shows Completed, select OK.
The new library appears under the Libraries icon in the VTL Service tree, and the options
you have configured appear as icons under the library. Selecting the library displays details
about the library in the Information Panel.
Note that access to VTLs and drives is managed with Access Groups.
CLI Equivalent
# vtl add NewVTL model L180 slots 50 caps 5
This adds the VTL library, NewVTL. Use 'vtl show config NewVTL' to view
it.
Deleting libraries
When a tape is in a drive within a library, and that library is deleted, the tape is moved to the vault.
However, the tape's pool does not change.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries.
2. Select More Tasks > Library > Delete.
3. In the Delete Libraries dialog, select or confirm the checkbox of the items to delete:
l The name of each library, or
l Library Names, to delete all libraries
4. Select Next.
5. Verify the libraries to delete, and select Submit in the confirmation dialogs.
6. After the Delete Libraries Status dialog shows Completed, select Close. The selected
libraries are deleted from the DD VTL.
CLI Equivalent
# vtl del OldVTL
Pool Select the name of the pool in which to search for the tape. If no pools have
been created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to return a group of tapes.
Barcode allows the wildcards ? and *, where ? matches any single character
and * matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you
leave this blank, the barcode default (*) is used.
5. Select Search.
Item Description
Device The elements in the library, such as drives, slots, and CAPs
(cartridge access ports).
Property Value
barcode-length 6 or 8
Item Description
Pool The name of the pool where the tapes are located.
Tape Count The number of tapes in that pool.
Capacity The total configured data capacity of the tapes in that pool, in
GiB (Gibibytes, the base-2 equivalent of GB, Gigabytes).
Used The amount of space used on the virtual tapes in that pool.
From the More Tasks menu, you can delete, rename, or set options for a library; create, delete,
import, export, or move tapes; and add or delete slots and CAPs.
Creating tapes
You can create tapes in either a library or a pool. If initiated from a pool, the system first creates
the tapes, then imports them to the library.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library or Vault or Pools > Pools
> pool.
2. Select More Tasks > Tapes > Create.
3. In the Create Tapes dialog, enter the following information about the tape:
Library (if initiated from a library) If a drop-down menu is enabled, select the library or leave the default selection.
Pool Name Select the name of the pool in which the tape will reside, from the drop-
down list. If no pools have been created, use the Default pool.
Number of Tapes For a library, select from 1 to 20. For a pool, select from 1 to 100,000, or
leave the default (20). [Although the number of supported tapes is
unlimited, you can create no more than 100,000 tapes at a time.]
Starting Barcode Enter the initial barcode number (using the format A99000LA).
Tape Capacity (optional) Specify the number of GiBs from 1 to 4000 for each tape (this
setting overrides the barcode capacity setting). For efficient use of disk
space, use 100 GiB or fewer.
CLI Equivalent
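A representative command for creating tapes is shown below (the barcode, capacity in GiB, count, and pool are example values):
# vtl tape add A00000LA capacity 100 count 20 pool Default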
Deleting tapes
You can delete tapes from either a library or a pool. If initiated from a library, the system first
exports the tapes, then deletes them. The tapes must be in the vault, not in a library. On a
Replication destination DD system, deleting a tape is not permitted.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library or Vault or Pools > Pools
> pool.
2. Select More Tasks > Tapes > Delete.
3. In the Delete Tapes dialog, enter search information about the tapes to delete, and select
Search:
Location If there is a drop-down list, select a library, or leave the default Vault selection.
Pool Select the name of the pool in which to search for the tape. If no pools have
been created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to search for a group of
tapes. Barcode allows the wildcards ? and *, where ? matches any single
character and * matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you
leave this blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values are 15, 30, and 45.
Select All Pages Select the Select All Pages checkbox to select all tapes returned by the search query.
Items Selected Shows the number of tapes selected across multiple pages; updated automatically for each tape selection.
4. Select the checkbox of the tape that should be deleted or the checkbox on the heading
column to delete all tapes, and select Next.
5. Select Submit in the confirmation window, and select Close.
Note: After a tape is removed, the physical disk space used for the tape is not reclaimed
until after a file system cleaning operation.
CLI Equivalent
For example:
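The following is a representative command (barcode, count, and pool are example values; verify the syntax for your release):
# vtl tape del A00000LA count 20 pool Default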
Note: You can act on ranges; however, if there is a missing tape in the range, the action
will stop.
Importing tapes
Importing a tape means that an existing tape will be moved from the vault to a library slot, drive, or
cartridge access port (CAP).
About this task
The number of tapes you can import at one time is limited by the number of empty slots in the
library, that is, you cannot import more tapes than the number of currently empty slots.
To view the available slots for a library, select the library from the stack menu. The information
panel for the library shows the count in the Empty column.
l If a tape is in a drive, and the tape origin is known to be a slot, a slot is reserved.
l If a tape is in a drive, and the tape origin is unknown (slot or CAP), a slot is reserved.
l If a tape is in a drive, and the tape origin is known to be a CAP, a slot is not reserved. (The tape
returns to the CAP when removed from the drive.)
l To move a tape to a drive, see the section on moving tapes, which follows.
Procedure
1. You can import tapes using either step a. or step b.
a. Select Virtual Tape Libraries > VTL Service > Libraries > library. Then, select More
Tasks > Tapes > Import. In the Import Tapes dialog, enter search information about the
tapes to import, and select Search:
Location If there is a drop-down list, select the location of the tape, or leave the default of
Vault.
Pool Select the name of the pool in which to search for the tape. If no pools have been
created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to return a group of tapes.
Barcode allows the wildcards ? and *, where ? matches any single character and *
matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you leave
this blank, the barcode default (*) is used.
Select Destination Device Select the destination device where the tape will be imported. Possible values are Drive, CAP, and Slot.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values are 15, 30, and 45.
Items Selected Shows the number of tapes selected across multiple pages; updated automatically for each tape selection.
Based on the previous conditions, a default set of tapes is searched to select the tapes
to import. If pool, barcode, or count is changed, select Search to update the set of tapes
available from which to choose.
b. Select Virtual Tape Libraries > VTL Service > Libraries> library > Changer > Drives >
drive > Tapes. Select tapes to import by selecting the checkbox next to:
l An individual tape, or
l The Barcode column to select all tapes on the current page, or
l The Select all pages checkbox to select all tapes returned by the search query.
Only tapes showing Vault in the Location can be imported.
Select Import from Vault. This button is disabled by default and enabled only if all of the
selected tapes are from the Vault.
2. From the Import Tapes: library view, verify the summary information and the tape list, and
select OK.
3. Select Close in the status window.
CLI Equivalent
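A command along the following lines is typically used (NewVTL, the barcode, and the pool are example values; confirm the exact syntax in the DD OS Command Reference Guide):
# vtl import NewVTL barcode A00000LA count 1 pool Default element slot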
Exporting tapes
Exporting a tape removes that tape from a slot, drive, or cartridge-access port (CAP) and sends it
to the vault.
Procedure
1. You can export tapes using either step a. or step b.
a. Select Virtual Tape Libraries > VTL Service > Libraries > library. Then, select More
Tasks > Tapes > Export. In the Export Tapes dialog, enter search information about the
tapes to export, and select Search:
Location If there is a drop-down list, select the name of the library where the tape is located,
or leave the selected library.
Pool Select the name of the pool in which to search for the tape. If no pools have been
created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to return a group of tapes. Barcode
allows the wildcards ? and *, where ? matches any single character and * matches 0
or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you leave this
blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values are 15, 30, and 45.
Select All Pages Select the Select All Pages checkbox to select all tapes returned by the search query.
Items Selected Shows the number of tapes selected across multiple pages; updated automatically for each tape selection.
b. Select Virtual Tape Libraries > VTL Service > Libraries> library > Changer > Drives >
drive > Tapes. Select tapes to export by selecting the checkbox next to:
l An individual tape, or
l The Barcode column to select all tapes on the current page, or
l The Select all pages checkbox to select all tapes returned by the search query.
Only tapes with a library name in the Location column can be exported.
Select Export from Library. This button is disabled by default and enabled only if all of
the selected tapes have a library name in the Location column.
2. From the Export Tapes: library view, verify the summary information and the tape list, and
select OK.
3. Select Close in the status window.
CLI Equivalent
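A representative command for exporting a tape is shown below (NewVTL and the slot number are example values; a barcode-based form may also be available on your release):
# vtl export NewVTL slot 1 count 1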
3. In the Move Tape dialog, enter search information about the tapes to move, and select
Search:
Count Enter the maximum number of tapes you want to be returned to you. If you leave
this blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values are 15, 30, and 45.
Items Selected Shows the number of tapes selected across multiple pages; updated automatically for each tape selection.
4. From the search results list, select the tape or tapes to move.
5. Do one of the following:
a. Select the device from the Device list (for example, a slot, drive, or CAP), and enter a
starting address using sequential numbers for the second and subsequent tapes. For
each tape to be moved, if the specified address is occupied, the next available address is
used.
b. Leave the address blank if the tape in a drive originally came from a slot and is to be
returned to that slot; or if the tape is to be moved to the next available slot.
6. Select Next.
7. In the Move Tape dialog, verify the summary information and the tape listing, and select
Submit.
8. Select Close in the status window.
Adding slots
You can add slots to a configured library to change the number of storage elements.
About this task
Note: Some backup applications do not automatically recognize that slots have been added to
a DD VTL. See your application documentation for information on how to configure the
application to recognize this type of change.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library.
2. Select More Tasks > Slots > Add.
3. In the Add Slots dialog, enter the Number of Slots to add. The total number of slots in a
library, or in all libraries on a system, cannot exceed 32,000 for a library and 64,000 for a
system.
4. Select OK and Close when the status shows Completed.
Deleting slots
You can delete slots from a configured library to change the number of storage elements.
About this task
Note: Some backup applications do not automatically recognize that slots have been deleted
from a DD VTL. See your application documentation for information on how to configure the
application to recognize this type of change.
Procedure
1. If the slot that you want to delete contains cartridges, move those cartridges to the vault.
The system will delete only empty, uncommitted slots.
2. Select Virtual Tape Libraries > VTL Service > Libraries > library.
3. Select More Tasks > Slots > Delete.
4. In the Delete Slots dialog, enter the Number of Slots to delete.
5. Select OK and Close when the status shows Completed.
Adding CAPs
You can add CAPs (cartridge access ports) to a configured library to change the number of
storage elements.
About this task
Note: CAPs are used by a limited number of backup applications. See your application
documentation to ensure that CAPs are supported.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library.
2. Select More Tasks > CAPs > Add.
3. In the Add CAPs dialog, enter the Number of CAPs to add. You can add from 1 to 100 CAPs
per library and from 1 to 1,000 CAPs per system.
4. Select OK and Close when the status shows Completed.
Deleting CAPs
You can delete CAPs (cartridge access ports) from a configured library to change the number of
storage elements.
About this task
Note: Some backup applications do not automatically recognize that CAPs have been deleted
from a DD VTL. See your application documentation for information on how to configure the
application to recognize this type of change.
Procedure
1. If the CAP that you want to delete contains cartridges, move those cartridges to the vault,
or this will be done automatically.
2. Select Virtual Tape Libraries > VTL Service > Libraries > library.
3. Select More Tasks > CAPs > Delete.
4. In the Delete CAPs dialog, enter the Number of CAPs to delete. You can delete a maximum
of 100 CAPs per library or 1000 CAPs per system.
5. Select OK and Close when the status shows Completed.
Column Description
Drive The list of drives by name, where name is “Drive #” and # is a number between 1
and n representing the address or location of the drive in the list of drives.
Status Whether the drive is Empty, Open, Locked, or Loaded. A tape must be present for
the drive to be locked or loaded.
Tape and library drivers – To work with drives, you must use the tape and library drivers supplied
by your backup software vendor that support the IBM LTO-1, IBM LTO-2, IBM LTO-3, IBM LTO-4,
IBM LTO-5 (default), HP-LTO-3, or HP-LTO-4 drives and the StorageTek L180 (default),
RESTORER-L180, IBM TS3500, I2000, I6000, or DDVTL libraries. For more information, see the
Application Compatibility Matrices and Integration Guides for your vendors. When configuring drives,
also keep in mind the limits on backup data streams, which are determined by the platform in use.
LTO drive capacities – Because the DD system treats LTO drives as virtual drives, you can set a
maximum capacity of up to 4 TiB (4000 GiB) for each drive type. The default capacities for each LTO
drive type are as follows:
l LTO-1 drive: 100 GiB
l LTO-2 drive: 200 GiB
l LTO-3 drive: 400 GiB
l LTO-4 drive: 800 GiB
l LTO-5 drive: 1.5 TiB
Migrating LTO-1 tapes – You can migrate tapes from existing LTO-1 type VTLs to VTLs that
include other supported LTO-type tapes and drives. The migration options are different for each
backup application, so follow the instructions in the LTO tape migration guide specific to your
application. To find the appropriate guide, go to the Online Support Site, and in the search text
box, type in LTO Tape Migration for VTLs.
Tape full: Early warning – You will receive a warning when the remaining tape space is almost
completely full, that is, greater than 99.9 percent but less than 100 percent. The application can continue
writing until the end of the tape to reach 100 percent capacity. The last write, however, is not
recoverable.
From the More Tasks menu, you can create or delete a drive.
Creating drives
See the Number of drives supported by a DD VTL section to determine the maximum number of
drives supported for your particular DD VTL.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library> Changer > Drives.
2. Select More Tasks > Drives > Create.
3. In the Create Drive dialog, enter the following information:
Number of See the table in the Number of Drives Supported by a DD VTL section, earlier
Drives in this chapter.
Model Name Select the model from the drop-down list. If another drive already exists, this
option is inactive, and the existing drive type must be used. You cannot mix
drive types in the same library.
l IBM-LTO-1
l IBM-LTO-2
l IBM-LTO-3
l IBM-LTO-4
l IBM-LTO-5 (default)
l HP-LTO-3
l HP-LTO-4
4. Select OK, and when the status shows Completed, select OK.
The added drive appears in the Drives list.
Deleting drives
A drive must be empty before it can be deleted.
Procedure
1. If there is a tape in the drive that you want to delete, remove the tape.
2. Select Virtual Tape Libraries > VTL Service > Libraries > library > Changer > Drives.
3. Select More Tasks > Drives > Delete.
4. In the Delete Drives dialog, select the checkboxes of the drives to delete, or select the Drive
checkbox to delete all drives.
5. Select Next, and after verifying that the correct drive(s) has been selected for deletion,
select Submit.
6. When the Delete Drive Status dialog shows Completed, select Close.
The drive will have been removed from the Drives list.
From the More Tasks menu, you can delete the drive or perform a refresh.
Item Description
Pool The name of the pool that holds the tape. The Default pool
holds all tapes unassigned to a user-created pool.
l RL – Retention-locked
l RO – Readable only
l WP – Write-protected
l RD – Replication destination
Locked Until If a DD Retention Lock deadline has been set, the time set is
shown. If no retention lock exists, this value is Not
specified.
From the information panel, you can import a tape from the vault, export a tape to the library, set a
tape's state, create a tape, or delete a tape.
From the More Tasks menu, you can move a tape.
Item Description
Cloud provider For systems with tapes in DD Cloud Tier, there is a column for
each cloud provider.
From the More Tasks menu, you can create, delete, and search for tapes in the vault.
l Recall a tape from the cloud tier. Run the vtl tape recall start barcode <barcode>
[count <count>] pool <pool> command.
After the recall, the tape resides in a local DD VTL vault and must be imported to the library for
access.
Note: Run the vtl tape show command at any time to check the current location of a
tape. The tape location updates within one hour of the tape moving to or from the cloud
tier.
6. Click Create.
Note: After creating the data movement policy, the Edit and Clear buttons can be used
to modify or delete the data movement policy.
CLI equivalent
Procedure
1. Set the data movement policy to user-managed or age-threshold
Note: VTL pool and cloud unit names are case sensitive and commands will fail if the
case is not correct.
l To set the data movement policy to user-managed, run the following command:
vtl pool modify cloud-vtl-pool data-movement-policy user-managed
to-tier cloud cloud-unit ecs-unit1
** Any tapes that are already selected will be migrated on the next data-movement run.
VTL data-movement policy is set to "user-managed" for VTL pool "cloud-vtl-pool".
l To set the data movement policy to age-threshold, run the following command:
Note: The minimum is 14 days, and the maximum is 182,250 days.
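A command of the following form is typical, reusing the pool and cloud-unit names from the user-managed example above with an example threshold of 14 days (confirm the exact syntax in the DD OS Command Reference Guide):
vtl pool modify cloud-vtl-pool data-movement-policy age-threshold 14
to-tier cloud cloud-unit ecs-unit1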
3. Verify the policy for the VTL pool MTree is app-managed.
Run the following command:
data-movement policy show all
Mtree Target(Tier/Unit Name) Policy Value
------------------------- ---------------------- ----------- -------
/data/col1/cloud-vtl-pool Cloud/ecs-unit1 app-managed enabled
------------------------- ---------------------- ----------- -------
CLI equivalent
Procedure
1. Identify the slot location of the tape volume to move.
Run the following command:
vtl tape show cloud-vtl
Processing tapes....
Barcode Pool Location State Size Used (%) Comp
Modification Time
-------- -------------- ----------------- ----- ----- ---------------- ----
-------------------
T00001L3 cloud-vtl-pool cloud-vtl slot 1 RW 5 GiB 5.0 GiB (99.07%) 205x
2017/05/05 10:43:43
T00002L3 cloud-vtl-pool cloud-vtl slot 2 RW 5 GiB 5.0 GiB (99.07%) 36x
2017/05/05 10:45:10
T00003L3 cloud-vtl-pool cloud-vtl slot 3 RW 5 GiB 5.0 GiB (99.07%) 73x
2017/05/05 10:45:26
2. Specify the numeric slot value to export the tape from the DD VTL.
Run the following command:
vtl export cloud-vtl-pool slot 1 count 1
3. Verify the tape is in the vault.
Run the following command:
vtl tape show vault
4. Select the tape for data movement.
Run the following command:
vtl tape select-for-move barcode T00001L3 count 1 pool cloud-vtl-
pool to-tier cloud
5. View the list of tapes scheduled to move to cloud storage during the next data movement
operation. The tapes selected for movement display an (S) in the location column.
Run the following command:
vtl tape show vault
Processing tapes.....
Barcode Pool Location State Size Used (%) Comp
Modification Time
-------- ----------------- --------- ------ ------ ---------------- ----
-------------------
T00003L3 cloud-vtl-pool vault (S) RW 5 GiB 5.0 GiB (99.07%) 63x
2017/05/05 10:43:43
T00006L3 cloud-vtl-pool ecs-unit1 n/a 5 GiB 5.0 GiB (99.07%) 62x
2017/05/05 10:45:49
-------- ----------------- --------- ------ ------ ---------------- ----
-------------------
* RD : Replication Destination
(S) Tape selected for migration to cloud. Selected tapes will move to cloud on the next
data-movement run.
(R) Recall operation is in progress for the tape.
CLI equivalent
Procedure
1. Identify the volume required to restore data.
2. Recall the tape volume from the vault.
Run the following command:
vtl tape recall start barcode T00001L3 count 1 pool cloud-vtl-pool
3. Verify the recall operation started.
Run the following command:
data-movement status
4. Verify the recall operation completed successfully.
Run the following command:
vtl tape show all barcode T00001L3
Processing tapes....
Barcode Pool Location State Size Used (%) Comp
Modification Time
-------- -------------- ---------------- ----- ----- ---------------- ----
-------------------
T00001L3 cloud-vtl-pool cloud-vtl slot 1 RW 5 GiB 5.0 GiB (99.07%) 239x
2017/05/05 10:41:41
-------- -------------- ---------------- ----- ----- ---------------- ----
-------------------
(S) Tape selected for migration to cloud. Selected tapes will move to cloud on the next
data-movement run.
(R) Recall operation is in progress for the tape.
If you select View All Access Groups, you are taken to the Fibre Channel view.
From the More Tasks menu, you can create or delete a group.
broadcast a set of LUNs in the event of a primary path failure (this requires manual
intervention). A setting of none is used when you do not want to advertise selected LUNs.
The presentation of LUNs depends on the SAN topology in question.
The initiators in the access group interact with the LUN devices that are added to the
group.
The maximum LUN accepted when creating an access group is 16383.
A LUN can be used only once for an individual group. The same LUN can be used with
multiple groups.
Some initiators (clients) have specific rules for target LUN numbering; for example,
requiring LUN 0 or requiring contiguous LUNs. If these rules are not followed, an initiator
may not be able to access some or all of the LUNs assigned to a DD VTL target port.
Check your initiator documentation for special rules, and if necessary, alter the device
LUNs on the DD VTL target port to follow the rules. For example, if an initiator requires
LUN 0 to be assigned on the DD VTL target port, check the LUNs for devices assigned to
ports, and if there is no device assigned to LUN 0, change the LUN of a device so it is
assigned to LUN 0.
d. In the Primary and Secondary Endpoints area, select an option to determine from which
ports the selected device will be seen. The following conditions apply for designated
ports:
l all – The checked device is seen from all ports.
l none – The checked device is not seen from any port.
l select – The checked device is to be seen from selected ports. Select the checkboxes
of the appropriate ports.
If only primary ports are selected, the checked device is visible only from primary
ports.
If only secondary ports are selected, the checked device is visible only from
secondary ports. Secondary ports can be used if the primary ports become
unavailable.
The switchover to a secondary port is not an automatic operation. You must manually
switch the DD VTL device to the secondary ports if the primary ports become
unavailable.
The port list is a list of physical port numbers. A port number denotes the PCI slot and a
letter denotes the port on a PCI card. Examples are 1a, 1b, or 2a, 2b.
A drive appears with the same LUN on all the ports that you have configured.
e. Select OK.
You are returned to the Devices dialog box where the new group is listed. To add more
devices, repeat these five substeps.
7. Select Next.
8. Select Close when the Completed status message is displayed.
CLI Equivalent
# vtl group add VTL_Group vtl NewVTL changer lun 0 primary-port all secondary-port all
# vtl group add VTL_Group vtl NewVTL drive 1 lun 1 primary-port all secondary-port all
# vtl group add SetUp_Test vtl SetUp_Test drive 3 lun 3 primary-port endpoint-fc-0
secondary-port endpoint-fc-1
Initiators:
Initiator Alias Initiator WWPN
--------------- -----------------------
tsm6_p23 21:00:00:24:ff:31:ce:f8
--------------- -----------------------
Devices:
Device Name LUN Primary Ports Secondary Ports In-use Ports
------------------ --- ------------- --------------- -------------
SetUp_Test changer 0 all all all
SetUp_Test drive 1 1 all all all
SetUp_Test drive 2 2 5a 5b 5a
SetUp_Test drive 3 3 endpoint-fc-0 endpoint-fc-1 endpoint-fc-0
------------------ --- ------------- --------------- -------------
d. In the Primary and Secondary Ports area, change the option that determines the ports
from which the selected device is seen. The following conditions apply for designated
ports:
l all – The checked device is seen from all ports.
l none – The checked device is not seen from any port.
l select – The checked device is seen from selected ports. Select the checkboxes of
the ports from which it will be seen.
If only primary ports are selected, the checked device is visible only from primary
ports.
If only secondary ports are selected, the checked device is visible only from
secondary ports. Secondary ports can be used if primary ports become unavailable.
The switchover to a secondary port is not an automatic operation. You must manually
switch the DD VTL device to the secondary ports if the primary ports become
unavailable.
The port list is a list of physical port numbers. A port number denotes the PCI slot, and a
letter denotes the port on a PCI card. Examples are 1a, 1b, or 2a, 2b.
A drive appears with the same LUN on all ports that you have configured.
e. Select OK.
CLI Equivalent
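A modification along the following lines is typical (group, library, drive, LUN, and port values are examples; this is a sketch, so confirm the exact syntax in the DD OS Command Reference Guide):
# vtl group modify SetUp_Test vtl SetUp_Test drive 3 lun 3 primary-port all secondary-port all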
Item Description
Primary Endpoints Initial (or default) endpoint used by backup application. In the
event of a failure on this endpoint, the secondary endpoints
may be used, if available.
From the More Tasks menu, with a group selected, you can configure that group, or set endpoints
in use.
Results
NDMP is now configured, and the TapeServer access group shows the device configuration. See
the ndmpd chapter of the Data Domain Operating System Command Reference Guide for the
complete command set and options.
Item Description
Online Endpoints Group name where ports are seen by initiator. Displays None
or Offline if the initiator is unavailable.
Item Description
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
Status DD VTL link status, which is either Online (capable of
handling traffic) or Offline.
Configure Resources
Selecting Configure Resources takes you to the Fibre Channel area, where you can configure
endpoints and initiators.
Selecting Configure Initiators takes you to the Fibre Channel area, where you can configure
endpoints and initiators.
CLI Equivalent
# vtl initiator show
Initiator Group Status WWNN WWPN Port
--------- --------- ------ ----------------------- ----------------------- ----
tsm6_p1 tsm3500_a Online 20:00:00:24:ff:31:ce:f8 21:00:00:24:ff:31:ce:f8 10b
--------- --------- ------ ----------------------- ----------------------- ----
Item Description
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
NPIV NPIV status of this endpoint: either Enabled or Disabled.
Item Description
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
Link Status Link status of this endpoint: either Online or Offline.
Configure Endpoints
Selecting Configure Endpoints takes you to the Fibre Channel area, where you can change any of
the above information for the endpoint.
CLI Equivalent
# scsitarget endpoint show list
Endpoint System Address Transport Enabled Status
-------- -------------- --------- ------- ------
endpoint-fc-0 5a FibreChannel Yes Online
endpoint-fc-1 5b FibreChannel Yes Online
Item Description
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
Item Description
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
Link Status Link status of this endpoint: either Online or Offline.
Item Description
Size The total configured data capacity of tapes in the pool, in GiB
(Gibibytes, the base-2 equivalent of GB, Gigabytes).
Item Description
Physical Used The amount of space used on virtual tapes in the pool.
Cloud Data Movement Policy The data movement policy that governs migration of DD VTL
data to DD Cloud Tier storage.
Item Description
Remote Source Contains an entry only if the pool is replicated from another
DD system.
From the More Tasks menu, you can create and delete pools, as well as search for tapes.
Creating pools
You can create backward-compatible pools, if necessary for your setup, for example, for
replication with a pre-5.2 DD OS system.
Procedure
1. Select Pools > Pools.
2. Select More Tasks > Pool > Create.
3. In the Create Pool dialog, enter a Pool Name, noting that a pool name:
l cannot be “all,” “vault,” or “summary.”
l cannot have a space or period at its beginning or end.
l is case-sensitive.
4. If you want to create a directory pool (which is backward compatible with the previous
version of DD System Manager), select the option “Create a directory backwards
compatibility mode pool.” However, be aware that the advantages of using an MTree pool
include the ability to:
l make individual snapshots and schedule snapshots.
l apply retention locks.
l set an individual retention policy.
l get compression information.
l get data migration policies to the Retention Tier.
l establish a storage space usage policy (quota support) by setting hard limits and soft
limits.
CLI Equivalent
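A representative command for creating an MTree-based pool is shown below (VTL_Pool is a placeholder name; verify the syntax for your DD OS release):
# vtl pool add VTL_Pool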
Deleting pools
Before a pool can be deleted, you must have deleted any tapes contained within it. If replication is
configured for the pool, the replication pair must also be deleted. Deleting a pool corresponds to
renaming the MTree and then deleting it, which occurs at the next cleaning process.
Procedure
1. Select Pools > Pools > pool.
2. Select More Tasks > Pool > Delete.
3. In the Delete Pools dialog, select the checkbox of items to delete:
l The name of each pool, or
l Pool Names, to delete all pools.
Item Description
Convert to MTree Pool Select this button to convert a Directory pool to an MTree
pool.
Capacity The total configured data capacity of tapes in the pool, in GiB
(Gibibytes, base-2 equivalent of GB, Gigabytes).
Logical Used The amount of space used on virtual tapes in the pool.
Item Description
Pool type (%) VTL Pool and Cloud (if applicable), with the current
percentage of data in parentheses.
Table 179 Pool Tab: Cloud Data Movement - Cloud Data Movement Policy
Item Description
Tape tab
Item Description
Select for Cloud Move (a) Schedule the selected tapes for migration to DD Cloud Tier.
Unselect from Cloud Move (a) Remove the selected tapes from the schedule for migration to DD Cloud Tier.
Recall Cloud Tapes Recall the selected tapes from DD Cloud Tier.
Move to Cloud Now Migrate the selected tapes to DD Cloud Tier without waiting
for the next scheduled migration.
(a) This option is only available if the data movement policy is configured for manual selection.
Replication tab
Item Description
You can also select the Replication Detail button, at the top right, to go directly to the Replication
information panel for the selected pool.
From either the Virtual Tape Libraries or Pools area, from the More Tasks menu, you can create,
delete, move, copy, or search for a tape in the pool.
From the Pools area, from the More Tasks menu, you can rename or delete a pool.
2. With the directory pool you wish to convert highlighted, choose Convert to MTree Pool.
3. Select OK in the Convert to MTree Pool dialog.
4. Be aware that conversion affects replication in the following ways:
l DD VTL is temporarily disabled on the replicated systems during conversion.
l The destination data is copied to a new pool on the destination system to preserve the
data until the new replication is initialized and synced. Afterward, you may safely delete
this temporarily copied pool, which is named CONVERTED-pool, where pool is the name
of the pool that was upgraded (or the first 18 characters for long pool names). [This
applies only to DD OS 5.4.1.0 and later.]
l The target replication directory will be converted to MTree format. [This applies only to
DD OS 5.2 and later.]
l Replication pairs are broken before pool conversion and re-established afterward if no
errors occur.
l DD Retention Lock cannot be enabled on systems involved in MTree pool conversion.
2. In the Move Tapes dialog, enter information to search for the tapes to move, and select
Search:
Pool Select the name of the pool where the tapes reside. If no pools have been
created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to import a group of tapes.
Barcode allows the wildcards ? and *, where ? matches any single character
and * matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you
leave this blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values are 15, 30, and 45.
Items Selected Shows the number of tapes selected across multiple pages; updated automatically for each tape selection.
Location Select either a library or the Vault for locating the tape. While tapes always
show up in a pool (under the Pools menu), they are technically in either a library
or the vault, but not both, and they are never in two libraries at the same time.
Use the import/export options to move tapes between the vault and a library.
Pool To copy tapes between pools, select the name of the pool where the tapes
currently reside. If no pools have been created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to import a group of tapes.
Barcode allows the wildcards ? and *, where ? matches any single character and
* matches 0 or more characters.
Count Enter the maximum number of tapes you want to be imported. If you leave this
blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values are 15, 30, and 45.
Items Selected Shows the number of tapes selected across multiple pages; updated automatically for each tape selection.
4. From the Select Destination: Pool list, select the pool where tapes are to be copied. If a tape
with a matching barcode already resides in the destination pool, an error is displayed, and
the copy aborts.
5. Select Next.
6. From the Copy Tapes Between Pools dialog, verify the summary information and the tape
list, and select Submit.
7. Select Close on the Copy Tapes Between Pools Status window.
Renaming pools
A pool can be renamed only if none of its tapes is in a library.
Procedure
1. Select Pools > Pools > pool.
2. Select More Tasks > Pool > Rename.
3. In the Rename Pool dialog, enter the new Pool Name, with the caveat that this name:
l cannot be “all,” “vault,” or “summary.”
l cannot have a space or period at its beginning or end.
l is case-sensitive.
DD Replicator overview
DD Replicator provides automated, policy-based, network-efficient, and encrypted replication for
DR (disaster recovery) and multi-site backup and archive consolidation. DD Replicator
asynchronously replicates only compressed, deduplicated data over a WAN (wide area network).
DD Replicator performs two levels of deduplication to significantly reduce bandwidth
requirements: local and cross-site deduplication. Local deduplication determines the unique
segments to be replicated over a WAN. Cross-site deduplication further reduces bandwidth
requirements when multiple sites are replicating to the same destination system. With cross-site
deduplication, any redundant segment previously transferred by any other site, or as a result of a
local backup or archive, will not be replicated again. This improves network efficiency across all
sites and reduces daily network bandwidth requirements up to 99%, making network-based
replication fast, reliable, and cost-effective.
In order to meet a broad set of DR requirements, DD Replicator provides flexible replication
topologies, such as full system mirroring, bi-directional, many-to-one, one-to-many, and cascaded.
In addition, you can choose to replicate either all or a subset of the data on your DD system. For
the highest level of security, DD Replicator can encrypt data being replicated between DD systems
using the standard SSL (Secure Socket Layer) protocol.
DD Replicator scales performance and supported fan-in ratios to support large enterprise
environments.
Before getting started with DD Replicator, note the following general requirements:
l DD Replicator is a licensed product. See your Dell EMC sales representative to purchase
licenses.
l You can usually replicate only between machines that are within two releases of each other, for
example, from 6.0 to 6.2. However, there may be exceptions to this (as a result of atypical
release numbering), so review the tables in the Replication version compatibility section, or
check with your Dell EMC representative.
l If you are unable to manage and monitor DD Replicator from the current version of the DD
System Manager, use the replication commands described in the DD OS Command
Reference Guide.
l Compatibility – If you are using DD systems running different versions of DD OS, review the
next section on Replication Version Compatibility.
l Initial Replication – If the source holds a lot of data, the initial replication operation can take
many hours. Consider putting both DD systems in the same location with a high-speed, low-
latency link. After the first replication, you can move the systems to their intended locations
because only new data will be sent.
l Bandwidth Delay Settings – Both the source and destination must have the same bandwidth
delay settings. These tuning controls benefit replication performance over higher latency links
by controlling the TCP (transmission control protocol) buffer size. The source system can then
send enough data to the destination while waiting for an acknowledgment.
l Only One Context for Directories/Subdirectories – A directory (and its subdirectories) can
be in only one context at a time, so be sure that a subdirectory under a source directory is not
used in another directory replication context.
l Adequate Storage – At a minimum, the destination must have the same amount of space as
the source.
l Destination Empty for Directory Replication – The destination directory must be empty for
directory replication, or its contents must no longer be needed, because it will be overwritten.
l Security – DD OS requires that port 3009 be open in order to configure secure replication
over an Ethernet connection.
Replication version compatibility
In these tables:
l Each DD OS release includes all releases in that family; for example, DD OS 5.7 includes 5.7.1
and all other 5.7.x releases.
l c = collection replication
l dir = directory replication
l m = MTree replication
l del = delta (low bandwidth optimization) replication
l dest = destination
l src = source
l NA = not applicable
src \ dest | 5.0 | 5.1 | 5.2 | 5.3 | 5.4 | 5.5 | 5.6 | 5.7 | 6.0 | 6.1 | 6.2
5.1 (src) | dir, del | c, dir, del, m(a) | dir, del, m(a) | dir, del, m(a) | dir, del, m(a) | NA | NA | NA | NA | NA | NA
5.2 (src) | dir, del | dir, del, m(a) | c, dir, del, m(b) | dir, del, m | dir, del, m | dir, del, m | NA | NA | NA | NA | NA
5.3 (src) | NA | dir, del, m(a) | dir, del, m | c, dir, del, m | dir, del, m | dir, del, m | NA | NA | NA | NA | NA
5.4 (src) | NA | dir, del, m(a) | dir, del, m | dir, del, m | c, dir, del, m | dir, del, m | dir, del, m | NA | NA | NA | NA
5.5 (src) | NA | NA | dir, del, m | dir, del, m | dir, del, m | c, dir, del, m | dir, del, m | dir, del, m | NA | NA | NA
5.6 (src) | NA | NA | NA | NA | dir, del, m | dir, del, m | c, dir, del, m | dir, del, m | dir, del, m | NA | NA
5.7 (src) | NA | NA | NA | NA | NA | dir, del, m | dir, del, m | c, dir, del, m | dir, del, m | dir, del, m | NA
6.0 (src) | NA | NA | NA | NA | NA | NA | dir, del, m | dir, del, m | c, dir, del, m | dir, del, m | dir, del, m
src \ dest | 5.0 | 5.1 | 5.2 | 5.3 | 5.4 | 5.5 | 5.6 | 5.7 | 6.0 | 6.1 | 6.2
5.0 (src) | c | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA
5.1 (src) | NA | c | m(a) | m(b) | m(b) | NA | NA | NA | NA | NA | NA
5.2 (src) | NA | m(a) | c, m(a) | m(a) | m(a) | m(a) | NA | NA | NA | NA | NA
5.3 (src) | NA | m(c) | m(c) | c, m | m | m | NA | NA | NA | NA | NA
5.4 (src) | NA | m(c) | m(c) | m | c, m | m | m | NA | NA | NA | NA
5.5 (src) | NA | NA | m(c) | m | m | c, m | m | m | NA | NA | NA
5.6 (src) | NA | NA | NA | NA | m | m | c, m | m | m | NA | NA
5.7 (src) | NA | NA | NA | NA | NA | m | m | c, m | m | m | NA
6.0 (src) | NA | NA | NA | NA | NA | NA | m | m | c, m | m | m
6.1 (src) | NA | NA | NA | NA | NA | NA | NA | m | m | c, m | m
6.2 (src) | NA | NA | NA | NA | NA | NA | NA | NA | m | m | c, m
a. File migration is not supported with MTree replication on either the source or destination in this configuration.
b. File migration is not supported with MTree replication on the source in this configuration.
c. File migration is not supported with MTree replication on the destination in this configuration.
Replication types
Replication typically consists of a source DD system (which receives data from a backup system)
and one or more destination DD systems. Each DD system can be the source and/or the
destination for replication contexts. During replication, each DD system can perform normal
backup and restore operations.
Each replication type establishes a context associated with an existing directory or MTree on the
source. The replicated context is created on the destination when a context is established. The
context establishes a replication pair, which is always active, and any data landing in the source will
be copied to the destination at the earliest opportunity. Paths configured in replication contexts
are absolute references and do not change based on the system in which they are configured.
A Data Domain system can be set up for directory, collection, or MTree replication.
l Directory replication provides replication at the level of individual directories.
l Collection replication duplicates the entire data store on the source and transfers that to the
destination, and the replicated volume is read-only.
l MTree replication replicates entire MTrees (that is, virtual file structures that enable
advanced management). Media pools can also be replicated; by default (as of DD OS 5.3), a
media pool is created as an MTree and is therefore replicated by MTree replication. (A media
pool can also be created in backward-compatibility mode; when replicated, it uses a directory
replication context.)
For any replication type, note the following requirements:
l A destination Data Domain system must have available storage capacity that is at least the size
of the expected maximum size of the source directory. Be sure that the destination Data
Domain system has enough network bandwidth and disk space to handle all traffic from
replication sources.
l The file system must be enabled or, based on the replication type, will be enabled as part of the
replication initialization.
l The source must exist.
l The destination must not exist.
l The destination will be created when a context is built and initialized.
l After replication is initialized, ownership and permissions of the destination are always identical
to those of the source.
l In the replication command options, a specific replication pair is always identified by the
destination.
l Both systems must have an active, visible route through the IP network so that each system
can resolve its partner's host name.
The choice of replication type depends on your specific needs. The next sections provide
descriptions and features of these three types, plus a brief introduction to Managed File
Replication, which is used by DD Boost.
Directory replication
Directory replication transfers deduplicated data within a DD file system directory configured as a
replication source to a directory configured as a replication destination on a different system.
With directory replication, a DD system can simultaneously be the source of some replication
contexts and the destination of other contexts. And that DD system can also receive data from
backup and archive applications while it is replicating data.
Directory replication has the same flexible network deployment topologies and cross-site
deduplication effects as managed file replication (the type used by DD Boost).
Here are some additional points to consider when using directory replication:
l Do not mix CIFS and NFS data within the same directory. A single destination DD system can
receive backups from both CIFS clients and NFS clients as long as separate directories are
used for CIFS and NFS.
l Any directory can be in only one context at a time. A parent directory may not be used in a
replication context if a child directory of that parent is already being replicated.
l Renaming (moving) files or tapes into or out of a directory replication source directory is not
permitted. Renaming files or tapes within a directory replication source directory is permitted.
l A destination DD system must have available storage capacity at least equal to the expected
maximum post-compressed size of the source directory.
l When replication is initialized, a destination directory is created automatically.
l After replication is initialized, ownership and permissions of the destination directory are
always identical to those of the source directory. As long as the context exists, the destination
directory is kept in a read-only state and can receive data only from the source directory.
l At any time, due to differences in global compression, the source and destination directory can
differ in size.
Folder Creation Recommendations
Directory replication replicates data at the level of individual subdirectories under /data/col1/
backup.
To provide a granular separation of data, you must create, from a host system, other directories
(DirA, DirB, and so on) within the /backup MTree. Each directory should be based on your
environment and the desire to replicate those directories to another location. You do not replicate
the entire /backup MTree; instead, you set up a replication context on each subdirectory underneath
/data/col1/backup/ (for example, /data/col1/backup/DirC), as shown in the sketch after the
following list. The purpose of this is threefold:
l It allows control of the destination locations as DirA may go to one site and DirB may go to
another.
l This level of granularity allows management, monitoring, and fault isolation. Each replication
context can be paused, stopped, destroyed, or reported on.
l Performance is limited on a single context. The creation of multiple contexts can improve
aggregate replication performance.
l As a general recommendation, approximately 5 - 10 contexts may be required to distribute
replication load across multiple replication streams. This must be validated against the site
design and the volume and composition of the data at the location.
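A hedged sketch of what this looks like at the CLI (hostnames and directory names are illustrative placeholders; confirm the exact syntax in the Data Domain Operating System Command Reference Guide):
# replication add source dir://ddsource.example.com/backup/DirA destination dir://ddsiteA.example.com/backup/DirA
# replication add source dir://ddsource.example.com/backup/DirB destination dir://ddsiteB.example.com/backup/DirB
Each subdirectory becomes its own context, so DirA and DirB can go to different sites and can be paused, resumed, or reported on independently.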
Note: Recommending a number of contexts is a design-dependent issue, and in some cases,
significant implications are attached to the choices made about segregating data for the
purposes of optimizing replication. Data is usually optimized for the manner in which it will rest
– not the manner in which it will replicate. Keep this in mind when altering a backup
environment.
MTree replication
MTree replication is used to replicate MTrees between DD systems. Periodic snapshots are
created on the source, and the differences between them are transferred to the destination by
leveraging the same cross-site deduplication mechanism used for directory replication. This
ensures that the data on the destination is always a point-in-time copy of the source, with file
consistency. This also reduces replication of churn in the data, leading to more efficient utilization
of the WAN.
While directory replication must replicate every change to the content of the source directory in
order, the use of snapshots with MTree replication enables some intermediate changes to the
source to be skipped. Skipping these changes further reduces the amount of data that is sent over
the network, and therefore reduces replication lag.
With MTree replication, a DD system can be simultaneously the source of some replication
contexts and the destination of other contexts. And that DD system can also receive data from
backup and archive applications while it is replicating data.
MTree replication has the same flexible network deployment topologies and cross-site
deduplication effects as managed file replication (the type used by DD Boost).
Here are some additional points to consider when using MTree replication:
l When replication is initialized, a destination read-only MTree is created automatically.
l Data can be logically segregated into multiple MTrees to promote greater replication
performance.
l Snapshots must be created on source contexts.
l Snapshots cannot be created on a replication destination.
l Snapshots are replicated with a fixed retention of one year; however, the retention is
adjustable on the destination and must be adjusted there.
l Snapshots are not automatically deleted after breaking a replication context, and must be
expired when they are no longer required to prevent the system from filling up. The following
KB articles provide more information:
n Data Domain - Checking for Snapshots that are No Longer Needed , available at https://
support.emc.com/kb/336461.
n Data Domain - Identifying Why a DDR is Filling Up , available at https://support.emc.com/kb/
306203.
n Data Domain - Mtree_replication_resync_Snapshot_retention , available at https://
support.emc.com/kb/446176.
l Replication contexts must be configured on both the source and the destination.
l Replicating DD VTL tape cartridges (or pools) simply means replicating MTrees or directories
that contain DD VTL tape cartridges. Media pools are replicated by MTree replication, as a
default. A media pool can be created in backward-compatibility mode and can then be
replicated via directory-based replication. You cannot use the pool:// syntax to create
replication contexts using the command line. When specifying pool-based replication in DD
System Manager, either directory or MTree replication will be created, based on the media
pool type.
l Replicating directories under an MTree is not permitted.
l A destination DD system must have available storage capacity at least equal to the expected
maximum post-compressed size of the source MTree.
l After replication is initialized, ownership and permissions of the destination MTree are always
identical to those of the source MTree. As long as the context exists, the destination MTree is
kept in a read-only state and can receive data only from the source MTree.
l At any time, due to differences in global compression, the source and destination MTree can
differ in size.
l MTree replication is supported from DD Extended Retention systems to non-DD Extended
Retention systems if both are running DD OS 5.5 or later.
l DD Retention Lock Compliance is supported with MTree replication, by default. If DD
Retention Lock is licensed on a source, the destination must also have a DD Retention Lock
license, or replication will fail. (To avoid this situation, you must disable DD Retention Lock.) If
DD Retention Lock is enabled on a replication context, a replicated destination context will
always contain data that is retention locked.
Collection replication
Collection replication performs whole-system mirroring in a one-to-one topology, continuously
transferring changes in the underlying collection, including all of the logical directories and files of
the DD file system.
Collection replication does not have the flexibility of the other types, but it can provide higher
throughput and support more objects with less overhead, which may work better for high-scale
enterprise cases.
Collection replication replicates the entire /data/col1 area from a source DD system to a
destination DD system.
Note: Collection replication is not supported for cloud-tier enabled systems.
Here are some additional points to consider when using collection replication:
l No granular replication control is possible. All data is copied from the source to the destination
producing a read-only copy.
l Collection replication requires that the storage capacity of the destination system be equal to,
or greater than, the capacity of the source system. If the destination capacity is less than the
source capacity, the available capacity on the source is reduced to the capacity of the
destination.
l The DD system to be used as the collection replication destination must be empty before
configuring replication. After replication is configured, this system is dedicated to receive data
from the source system.
l With collection replication, all user accounts and passwords are replicated from the source to
the destination. However, as of DD OS 5.5.1.0, other elements of configuration and user
settings of the DD system are not replicated to the destination; you must explicitly reconfigure
them after recovery.
l Collection replication is supported with DD Secure Multitenancy (SMT). Core SMT
information contained in the registry namespace, including the tenant and tenant-unit
definitions with matching UUIDs, is automatically transferred during replication operations.
However, the following SMT information is not automatically included for replication, and must
be configured manually on the destination system:
n Alert notification lists for each tenant-unit
n All users assigned to the DD Boost protocol for use by SMT tenants, if DD Boost is
configured on the system
n The default-tenant-unit associated with each DD Boost user, if any, if DD Boost is
configured on the system
Using collection replication for disaster recovery with SMT on page 439 describes how to
manually configure these items on the replication destination.
l DD Retention Lock Compliance supports collection replication.
l Collection replication is not supported in cloud tier-enabled systems.
l With collection replication, data in a replication context on the source system that has not
been replicated cannot be processed for file system cleaning. If file system cleaning cannot
complete because the source and destination systems are out of sync, the system reports the
cleaning operation status as partial, and only limited system statistics are available for the
cleaning operation. If collection replication is disabled, the amount of data that cannot be
processed for file system cleaning increases because the replication source and destination
systems remain out of sync. The KB article Data Domain: An overview of Data Domain File System
(DDFS) clean/garbage collection (GC) phases, available from the Online Support site at https://
support.emc.com, provides additional information.
l To enhance throughput in a high bandwidth environment, run the replication modify
<destination> crepl-gc-gw-optim command to disable collection replication bandwidth
optimization.
passphrases must also match. The parameters are checked during the replication association
phase.
During collection replication, the source transmits the data in encrypted form, and also
transmits the encryption keys to the destination. The data can be recovered at the destination
because the destination has the same passphrase and the same system encryption key.
Note: Collection replication is not supported for cloud-tier enabled systems.
l MTree or directory replication does not require encryption configuration to be the same at
both the source and destination. Instead, the source and destination securely exchange the
destination’s encryption key during the replication association phase, and the data is re-
encrypted at the source using the destination’s encryption key before transmission to the
destination.
If the destination has a different encryption configuration, the data transmitted is prepared
appropriately. For example, if the feature is turned off at the destination, the source decrypts
the data, and it is sent to the destination un-encrypted.
l In a cascaded replication topology, a replica is chained among three Data Domain systems. The
last system in the chain can be configured as a collection, MTree, or directory. If the last
system is a collection replication destination, it uses the same encryption keys and encrypted
data as its source. If the last system is an MTree or directory replication destination, it uses its
own key, and the data is encrypted at its source. The encryption key for the destination at
each link is used for encryption. Encryption for systems in the chain works as in a replication
pair.
Replication topologies
DD Replicator supports five replication topologies (one-to-one, one-to-one bidirectional, one-to-
many, many-to-one, and cascaded). The tables in this section show (1) how these topologies work
with three types of replication (MTree, directory, and collection) and two types of DD systems
[single node (SN) and DD Extended Retention] and (2) how mixed topologies are supported with
cascaded replication.
In general:
l Single node (SN) systems support all replication topologies.
l Single node-to-single node (SN -> SN) can be used for all replication types.
l DD Extended Retention systems cannot be the source for directory replication.
l Collection replication cannot be configured from either a single node (SN) system to a DD
Extended Retention-enabled system, nor from a DD Extended Retention-enabled system to an
SN system.
l Collection replication cannot be configured from either an SN system to a DD high availability-
enabled system, nor from a DD high availability-enabled system to an SN system.
l For MTree and directory replication, DD high availability systems are treated like SN systems.
l Collection replication cannot be configured if either or both systems have Cloud Tier enabled.
In this table:
l SN = single node DD system (no DD Extended Retention)
l ER = DD Extended Retention system
Cascaded replication supports mixed topologies where the second leg in a cascaded connection is
different from the first type in a connection (for example, A -> B is directory replication, and B ->
C is collection replication).
Mixed Topologies
l SN – Dir Repl -> ER – MTree Repl -> ER – MTree Repl
l SN – Dir Repl -> ER – Col Repl -> ER – Col Repl
l SN – MTree Repl -> SN – Col Repl -> SN – Col Repl
l SN – MTree Repl -> ER – Col Repl -> ER – Col Repl
One-to-one replication
The simplest type of replication is from a DD source system to a DD destination system, otherwise
known as a one-to-one replication pair. This replication topology can be configured with directory,
MTree, or collection replication types.
Figure 18 One-to-one replication pair
Bi-directional replication
In a bi-directional replication pair, data from a directory or MTree on DD system A is replicated to
DD system B, and from another directory or MTree on DD system B to DD system A.
Figure 19 Bi-directional replication
One-to-many replication
In one-to-many replication, data flows from a source directory or MTree on one DD system to
several destination DD systems. You could use this type of replication to create more than two
copies for increased data protection, or to distribute data for multi-site usage.
Figure 20 One-to-many replication
Many-to-one replication
In many-to-one replication, whether MTree or directory, replication data flows from several
source DD systems to a single destination DD system. This type of replication can be used to
provide data recovery protection for several branch offices on the corporate headquarters' IT
system.
Figure 21 Many-to-one replication
Cascaded replication
In a cascaded replication topology, a source directory or MTree is chained among three DD
systems. The last hop in the chain can be configured as collection, MTree, or directory replication,
depending on whether the source is directory or MTree.
For example, DD system A replicates one or more MTrees to DD system B, which then replicates
those MTrees to DD system C. The MTrees on DD system B are both a destination (from DD
system A) and a source (to DD system C).
Data recovery can be performed from the non-degraded replication pair context. For example:
l In the event DD system A requires recovery, data can be recovered from DD system B.
l In the event DD system B requires recovery, the simplest method is to perform a replication
resync from DD system A to (the replacement) DD system B. In this case, the replication
context from DD system B to DD system C should be broken first. After the DD system A to
DD system B replication context finishes resync, a new DD system B to DD System C context
should be configured and resynced.
Managing replication
You can manage replication using the Data Domain System Manager (DD System Manager) or the
Data Domain Operating System (DD OS) Command Line Interface (CLI).
About this task
To use a graphical user interface (GUI) to manage replication, log in to the DD System Manager.
Procedure
1. From the menu at the left of the DD System Manager, select Replication. If your license
has not been added yet, select Add License.
2. Select Automatic or On-Demand (you must have a DD Boost license for on-demand).
CLI Equivalent
You can also log in at the CLI:
login as: sysadmin
Data Domain OS 6.0.x.x-12345
Replication status
Replication Status shows the system-wide count of replication contexts exhibiting a warning
(yellow text) or error (red text) state, or if conditions are normal.
Summary view
The Summary view lists the configured replication contexts for a DD system, displaying
aggregated information about the selected DD system – that is, summary information about the
inbound and outbound replication pairs. The focus is the DD system itself, and the inputs to it and
outputs from it.
The Summary table can be filtered by entering a Source or Destination name, or by selecting a
State (Error, Warning, or Normal).
Source: System and path name of the source context, with format system.path. For example, for directory dir1 on system dd120-22, you would see dd120-22.chaos.local/data/col1/dir1.
Destination: System and path name of the destination context, with format system.path. For example, for MTree MTree1 on system dd120-44, you would see dd120-44.chaos.local/data/col1/MTree1.
Type: Type of context: MTree, directory (Dir), or Pool.
Completion Time (Est.): Value is either Completed, or the estimated amount of time required to complete the replication data transfer based on the last 24 hours' transfer rate.
Connection Port: System name and listen port used for replication connection.
Completion Time (Est.): Value is either Completed, or the estimated amount of time required to complete the replication data transfer based on the last 24 hours' transfer rate.
Files Remaining: (Directory Replication Only) Number of files that have not yet been replicated.
l DD Retention Lock
l DD Encryption at Rest
l DD Encryption over Wire
l Available Space
l Low Bandwidth Optimization
l Compression Ratio
l Low Bandwidth Optimization Ratio
Completion Predictor
The Completion Predictor is a widget for tracking a backup job's progress and for predicting when
replication will complete for a selected context.
Procedure
1. In the Create Pair dialog, select Add System.
2. For System, enter the hostname or IP address of the system to be added.
3. For User Name and Password, enter the sysadmin's user name and password.
4. Optionally, select More Options to enter a proxy IP address (or system name) of a system
that cannot be reached directly. If configured, enter a custom port instead of the default
port 3009.
Note: IPv6 addresses are supported only when adding a DD OS 5.5 or later system to a
management system using DD OS 5.5 or later.
5. Select OK.
Note: If the system is unreachable after adding it to DD System Manager, make sure
that there is a route from the managing system to the system being added. If a
hostname (either a fully qualified domain name (FQDN) or non-FQDN) is entered, make
sure it is resolvable on the managed system. Configure a domain name for the managed
system, ensure a DNS entry for the system exists, or ensure an IP address to hostname
mapping is defined.
6. If the system certificate is not verified, the Verify Certificate dialog shows details about the
certificate. Check the system credentials. Select OK if you trust the certificate, or select
Cancel.
be necessary to set up host files to ensure that contexts are defined on non-resolving (cross-
over) interfaces.
l You can “reverse” the context for an MTree replication, that is, you can switch the destination
and the source.
l Subdirectories within an MTree cannot be replicated, because the MTree, in its entirety, is
replicated.
l MTree replication is supported from DD Extended Retention-enabled systems to non-DD
Extended Retention-enabled systems, if both are running DD OS 5.5 or later.
l The destination DD system must have available storage capacity at least equal to the expected
maximum post-compressed size of the source directory or MTree.
l When replication is initialized, a destination directory is created automatically.
l A DD system can simultaneously be the source for one context and the destination for another
context.
Procedure
1. In the Create Pair dialog, select Directory, MTree (default), or Pool from the Replication
Type menu.
2. Select the source system hostname from the Source System menu.
3. Select the destination system hostname from the Destination System menu.
4. Enter the source path in the Source Path text box (the first part of the path is constant and
is determined by the type of replication chosen).
5. Enter the destination path in the Destination Path text box (the first part of the path is
constant and is determined by the type of replication chosen).
6. If you want to change any host connection settings, select the Advanced tab.
7. Select OK.
Replication from the source to the destination begins.
Test results from Data Domain returned the following guidelines for estimating the time
needed for replication initialization.
These are guidelines only and may not be accurate in specific production environments.
l Using a T3 connection, 100ms WAN, performance is about 40 MiB/sec of pre-
compressed data, which gives data transfer of:
40 MiB/sec = 25 seconds/GiB = 3.456 TiB/day
l Using the base-2 equivalent of gigabit LAN, performance is about 80 MiB/sec of pre-
compressed data, which gives data transfer of about double the rate for a T3 WAN.
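As a rough, hypothetical illustration of how these rates translate into an initialization window (the 20 TiB figure is an arbitrary example, not a sizing guideline):
20 TiB pre-compressed = 20,971,520 MiB
At 40 MiB/sec (T3 WAN): 20,971,520 / 40 = 524,288 seconds, or about 6 days
At 80 MiB/sec (gigabit LAN): 20,971,520 / 80 = 262,144 seconds, or about 3 days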
Here is an example of creating MTree replication pairs at the CLI. In this example, the
source Data Domain system is dd640 and the destination Data Domain system is dlh5.
For details about usage in other scenarios, see the Data Domain Operating System
Command Reference Guide.
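The following is a minimal sketch of the equivalent commands; the MTree name /data/col1/mtree1 and the fully qualified hostnames are placeholders, and the exact syntax should be verified in that guide. Create the context on both systems, then initialize and monitor it from the source:
# replication add source mtree://dd640.example.com/data/col1/mtree1 destination mtree://dlh5.example.com/data/col1/mtree1
# replication initialize mtree://dlh5.example.com/data/col1/mtree1
# replication status mtree://dlh5.example.com/data/col1/mtree1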
CLI Equivalent
# replication disable {destination | all}
CLI Equivalent
Before running this command, always run the filesys disable command. Afterward, run the
filesys enable command.
# replication break {destination | all}
Certain situations may arise in which you must resynchronize replication to resolve an issue.
For information about breaking and resynchronizing replication, see the KB article Break and
Resync Directory Replication, available at https://support.emc.com/kb/180668.
to use less bandwidth or to replicate and protect more of their data over existing
networks. Low bandwidth optimization must be enabled on both the source and
destination DD systems. If the source and destination have incompatible low bandwidth
optimization settings, low bandwidth optimization will be inactive for that context. After
enabling low bandwidth optimization on the source and destination, both systems must
undergo a full cleaning cycle to prepare the existing data, so run filesys clean
start on both systems. The duration of the cleaning cycle depends on the amount of
data on the DD system, but takes longer than a normal cleaning. For more information on
the filesys commands, see the Data Domain Operating System Command Reference
Guide.
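A hedged sketch of these steps at the CLI (the low-bw-optim option name is assumed from the replication show config output, and the destination path is a placeholder; verify the exact syntax in the Command Reference Guide). Run both commands on the source and on the destination:
# replication modify mtree://dlh5.example.com/data/col1/mtree1 low-bw-optim enabled
# filesys clean start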
Important: Low bandwidth optimization is not supported if the DD Extended Retention
software option is enabled on either DD system. It is also not supported for Collection
Replication.
CLI Equivalent
# replication modify <destination> connection-host <new-host-name> [port <port>]
2. Select the checkbox of one or more contexts to abort from the list.
3. Select OK.
After you finish
As soon as possible, you should restart recovery on the source.
CLI Equivalent
# replication resync destination
DD Boost view
The DD Boost view provides configuration and troubleshooting information to NetBackup
administrators who have configured DD systems to use DD Boost AIR (Automatic Image
Replication) or any DD Boost application that uses managed file replication.
See the Data Domain Boost for OpenStorage Administration Guide for DD Boost AIR configuration
instructions.
The File Replication tab displays:
l Currently Active File Replication:
n Direction (Out-Going and In-Coming) and the number of files in each.
n Remaining data to be replicated (pre-compressed value in GiB) and the amount of data
already replicated (pre-compressed value in GiB).
n Total size: The amount of data to be replicated and the already replicated data (pre-
compressed value in GiB).
l Most Recent Status: Total file replications and whether completed or failed
n during the last hour
n over the last 24 hours
l Remote Systems:
n Select a replication from the list.
n Select the time period to be covered from the menu.
n Select Show Details for more information about these remote system files.
The Storage Unit Associations tab displays the following information, which you can use for audit
purposes or to check the status of DD Boost AIR events used for the storage unit's image
replications:
l A list of all storage unit Associations known to the system. The source is on the left, and the
destination is on the right. This information shows the configuration of AIR on the Data Domain
system.
l The Event Queue is the pending event list. It shows the local storage unit, the event ID, and
the status of the event.
An attempt is made to match both ends of a DD Boost path to form a pair and present this as one
pair/record. If the match is impossible, for various reasons, the remote path will be listed as
Unresolved.
Pre-Comp Replicated: Amount of pre-compressed outbound and inbound data (in GiB).
Performance view
The Performance view displays a graph that represents the fluctuation of data during replication.
These are aggregated statistics of each replication pair for this DD system.
l Duration (x-axis) is 30 days by default.
l Replication Performance (y-axis) is in GibiBytes or MebiBytes (the binary equivalents of
GigaBytes and MegaBytes).
l Network In is the total replication network bytes entering the system (all contexts).
l Network Out is the total replication network bytes leaving the system (all contexts).
l For a reading of a specific point in time, hover the cursor over a place on the graph.
l During times of inactivity (when no data is being transferred), the shape of the graph may
display a gradually descending line, instead of an expected sharply descending line.
Network Settings
l Bandwidth – Displays the configured data stream rate if bandwidth has been configured, or
Unlimited (default) if not. The average data stream to the replication destination is at least
98,304 bits per second (12 KiB per second).
l Delay – Displays the configured network delay setting (in milliseconds) if it has been
configured, or None (default) if not.
l Listen Port – Displays the configured listen port value if it has been configured, or 2051
(default) if not.
5. Select OK to set the schedule. The new schedule is shown under Permanent Schedule.
Results
Replication runs at the given rate until the next scheduled change, or until a new throttle setting
forces a change.
l You can determine the actual bandwidth and the actual network delay values for each server
by using the ping command.
l The default network parameters in a restorer work well for replication in low latency
configurations, such as a local 100Mbps or 1000Mbps Ethernet network, where the latency
round-trip time (as measured by the ping command) is usually less than 1 millisecond. The
defaults also work well for replication over low- to moderate-bandwidth WANs, where the
latency may be as high as 50-100 milliseconds. However, for high-bandwidth high-latency
networks, some tuning of the network parameters is necessary.
The key number for tuning is the bandwidth-delay number produced by multiplying the
bandwidth and round-trip latency of the network. This number is a measure of how much data
can be transmitted over the network before any acknowledgments can return from the far end.
If the bandwidth-delay number of a replication network is more than 100,000, then replication
performance benefits from setting the network parameters in both restorers.
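As a hypothetical illustration, assuming the bandwidth-delay number is expressed in bytes:
T3 WAN: about 5.6 MB/sec x 0.1 sec (100 ms round trip) = 560,000, well above 100,000, so tuning the network parameters in both restorers helps.
Local 100 Mbps Ethernet: about 12.5 MB/sec x 0.001 sec (1 ms round trip) = 12,500, below 100,000, so the default settings are adequate.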
Procedure
1. Select Replication > Advanced Settings > Change Network Settings to display the
Network Settings dialog.
2. In the Network Settings area, select Custom Values.
3. Enter Delay and Bandwidth values in the text boxes. The network delay setting is in
milliseconds, and bandwidth is in bytes per second.
4. In the Listen Port area, enter a new value in the text box. The default IP Listen Port for a
replication destination for receiving data streams from the replication source is 2051. This is
a global setting for the DD system.
5. Select OK. The new settings appear in the Network Settings table.
Monitoring replication
The DD System Manager provides many ways to track the status of replication – from checking
replication pair status, to tracking backup jobs, to checking performance, to tracking a replication
process.
When specifying an IP version, use the following command to check its setting:
# replication show config rctx://2
CTX: 2
Source: mtree://ddbeta1.dallasrdc.com/data/col1/EDM1
Destination: mtree://ddbeta2.dallasrdc.com/data/col1/EDM_ipv6
Connection Host: ddbeta2-ipv6.dallasrdc.com
Connection Port: (default)
Ipversion: ipv6
Low-bw-optim: disabled
Encryption: disabled
Enabled: yes
Propagate-retention-lock: enabled
Replication lag
Replication lag is the amount of time by which the destination copy of the data trails the source copy.
You can measure the replication lag between two contexts with the replication status command.
For information about determining the cause of replication lag and mitigating its impact, see the
KB article Troubleshooting Replication Lag, available at https://support.emc.com/kb/180482.
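A hedged sketch of checking the lag for a single context (the destination path is a placeholder; output columns vary by release and include a synced-as-of timestamp from which the lag can be read):
# replication status mtree://dlh5.example.com/data/col1/mtree1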
Replication with HA
Floating IP addresses allow HA systems to specify a single IP address for replication configuration
that will work regardless of which node of the HA pair is active.
Over IP networks, HA systems use a floating IP address to provide data access to the Data Domain
HA pair, regardless of which physical node is the active node. The net config command provides
the [type {fixed | floating}] option to configure a floating IP address. The Data Domain
Operating System Command Reference Guide provides more information.
If a domain name is needed to access the floating IP address, specify the HA system name as the
domain name. Run the ha status command to locate the HA system name.
Note: Run the net show hostname type ha-system command to display the HA system
name, and if required, run the net set hostname ha-system command to change the HA
system name.
All file system access should be through the floating IP address. When configuring backup and
replication operations on an HA pair, always specify the floating IP address as the IP address for
the Data Domain system. Data Domain features such as DD Boost and replication will accept the
floating IP address for the HA pair the same way as they accept the system IP address for a non-
HA system.
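For example, a hedged sketch of assigning a floating IP address with net config (the interface name and addresses are placeholders; the full option list is in the Data Domain Operating System Command Reference Guide):
# net config eth0a 192.0.2.50 netmask 255.255.255.0 type floating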
Replication between HA and non-HA systems
If you want to set up a replication between a high-availability (HA) system and a system running
DD OS 5.7.0.3 or earlier, you must create and manage that replication on the HA system if you
want to use the DD System Manager graphical user interface (GUI).
However, you can perform replications from a non-HA system to an HA system using the CLI as
well as from the HA system to the non-HA system.
Collection replication between HA and non-HA systems is not supported. Directory or MTree
replication is required to replicate data between HA and non-HA systems.
Note: Although you can use the graphical user interface (GUI) for this operation, it is
recommended you use the Command Line Interface (CLI) for optimal performance.
Note: This command might take longer than expected to complete. Do not press Ctrl-C
during this process; if you do, you will cancel the D2M migration.
Phase 1 of 4 (precheck):
Marking source directory /backup/dir1 as read-only...Done.
Phase 2 of 4 (sync):
Syncing directory replication context...0 files flushed.
current=45 sync_target=47 head=47
current=45 sync_target=47 head=47
Done. (00:09)
Phase 3 of 4 (fastcopy):
Starting fastcopy from /backup/dir1 to /data/col1/mtree1...
Waiting for fastcopy to complete...(00:00)
Fastcopy status: fastcopy /backup/dir1 to /data/col1/mtree1: copied
24 files, 1 directory in 0.13 seconds
Creating snapshot 'REPL-D2M-mtree1-2015-12-07-14-54-02'...Done
Phase 4 of 4 (initialize):
Initializing MTree replication context...
(00:08) Waiting for initialize to start...
2. Begin ingesting data to the MTree on the source DD system when the migration process is
complete.
3. (Optional) Break the directory replication context on the source and target systems.
See the Data Domain Operating System Version 6.0 Command Reference Guide for more
information about the replication break command.
Troubleshooting D2M
If you encounter a problem setting directory-to-MTree (D2M) replication, there is an operation
you can perform to address several different issues.
About this task
The dir-to-mtree abort procedure can help cleanly abort the D2M process. You should run
this procedure in the following cases:
l The status of the D2M migration is listed as aborted.
l The Data Domain system rebooted during D2M migration.
l An error occurred when running the replication dir-to-mtree start command.
l Ingest was not stopped before beginning migration.
l The MTree replication context was initialized before the replication dir-to-mtree
start command was entered.
Note: Do not run replication break on the MTree replication context before the D2M
process finishes.
Always run replication dir-to-mtree abort before running the replication break
command on the mrepl ctx.
Running the replication break command prematurely will permanently render the drepl
source directory as read-only.
If this occurs, please contact Support.
Procedure
1. Enter replication dir-to-mtree abort to abort the process.
2. Break the newly created MTree replication context on both the source and destination Data
Domain systems.
In the following example, the MTree replication context is rctx://2.
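A hedged sketch of this step, assuming the rctx:// shorthand is accepted by replication break (otherwise specify the full mtree:// destination path); run it on both the source and the destination:
# replication break rctx://2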
3. Delete the corresponding MTrees on both the source and destination systems.
Note: MTrees marked for deletion remain in the file system until the filesys clean
command is run.
See the Data Domain Operating System Version 6.0 Command Reference Guide for more
information.
4. Run the filesys clean start command on both the source and destination systems.
For more information on the filesys clean commands, see the Data Domain Operating
System Version 6.0 Command Reference Guide.
5. Restart the process.
See Performing migration from directory replication to MTree replication.
Management-User:
User Role
------ ------------
tu1_ta tenant-admin
tu1_tu tenant-user
tum_ta tenant-admin
------ ------------
Management-Group:
Group Role
------ ------------
qatest tenant-admin
------ ------------
DDBoost:
Name Pre-Comp (GiB) Status User Tenant-Unit
---- -------------- ------ ----- -----------
su1 2.0 RW/Q ddbu1 tu1
---- -------------- ------ ----- -----------
Q : Quota Defined
RO : Read Only
RW : Read Write
Mtrees:
Quota:
Tenant-unit: tu1
Mtree Pre-Comp (MiB) Soft-Limit (MiB) Hard-Limit(MiB)
-------------- -------------- ---------------- ----------------
/data/col1/m1 0 71680 81920
/data/col1/su1 2048 30720 51200
-------------- -------------- ---------------- ----------------
Alerts:
Tenant-unit: "tu1"
Notification list "tu1_grp"
Members
------------------
[email protected]
------------------
5. If DD Boost is configured, assign each user listed in the DD Boost section of the smt
tenant-unit show detailed output to the default tenant-unit shown, if any, in the
output.
# ddboost user option set ddbu1 default-tenant-unit tu1
6. Create a new alert notification group with the same name as the alert notification group in
the Alerts section of the smt tenant-unit show detailed output.
# alert notify-list create tu1_grp tenant-unit tu1
7. Assign each email address in the alert notification group in the Alerts section of the smt
tenant-unit show detailed output to the new alert notification group.
# alert notify-list add tu1_grp emails [email protected]
Multi-Tenancy
Multi-Tenancy refers to the hosting of an IT infrastructure by an internal IT department, or an
external service provider, for more than one consumer/workload (business unit/department/
Tenant) simultaneously. Data Domain SMT enables Data Protection-as-a-Service.
RBAC (role-based access control)
RBAC offers multiple roles with different privilege levels, which combine to provide the
administrative isolation on a multi-tenant Data Domain system. (The next section will define these
roles.)
Storage Unit
A Storage Unit is an MTree configured for the DD Boost protocol. Data isolation is achieved by
creating a Storage Unit and assigning it to a DD Boost user. The DD Boost protocol permits access
only to Storage Units assigned to DD Boost users connected to the Data Domain system.
Tenant
A Tenant is a consumer (business unit/department/customer) who maintains a persistent
presence in a hosted environment.
Tenant Self-Service
Tenant Self-Service is a method of letting a Tenant log in to a Data Domain system to perform
some basic services (add, edit, or delete local users, NIS groups, and/or AD groups). This reduces
the bottleneck of always having to go through an administrator for these basic tasks. The Tenant
can access only their assigned Tenant Units. Tenant Users and Tenant Admins will, of course, have
different privileges.
Tenant Unit
A Tenant Unit is the partition of a Data Domain system that serves as the unit of administrative
isolation between Tenants. Tenant units that are assigned to a tenant can be on the same or
different Data Domain systems and are secured and logically isolated from each other, which
ensures security and isolation of the control path when running multiple Tenants simultaneously on
the shared infrastructure. Tenant Units can contain one or more MTrees, which hold all
configuration elements that are needed in a multi-tenancy setup. Users, management-groups,
notification-groups, and other configuration elements are part of a Tenant Unit.
Similarly, data access and data flow (into and out of Tenant Units) can be restricted to a fixed set
of local or remote data access IP address(es). The use of assigned data access IP address(es)
enhances the security of the DD Boost and NFS protocols by adding SMT-related security checks.
For example, the list of storage units returned over DD Boost RPC can be limited to those which
belong to the Tenant Unit with the assigned local data access IP address. For NFS, access and
visibility of exports can be filtered based on the local data access IP address(es) configured. For
example, using showmount -e from the local data access IP address of a Tenant Unit will only
display NFS exports belonging to that Tenant Unit.
The sysadmin must use smt tenant-unit data-ip to add and maintain data access IP
address(es) for Tenant Units.
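For example (192.0.2.31 is a hypothetical local data access IP address assigned to a Tenant Unit), an NFS client can confirm the filtering with the standard showmount utility; only exports belonging to that Tenant Unit are listed:
# showmount -e 192.0.2.31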
Note: If you attempt to mount an MTree in an SMT using a non-SMT IP address, the operation
will fail.
Multiple Tenant Units belonging to the same tenant can share a default gateway. Tenant Units that
belong to different tenants cannot use the same default gateway.
application for the Tenant and monitoring resources and statistics within the assigned Tenant Unit.
The tenant-admin can view audit logs, but RBAC ensures that only audit logs from the Tenant
Unit(s) belonging to the tenant-admin are accessible. In addition, tenant-admins ensure
administrative separation when Tenant self-service mode is enabled. In the context of SMT, the
tenant-admin is usually referred to as the backup admin.
tenant-user role
A user with a tenant-user role can monitor the performance and usage of SMT components only
on Tenant Unit(s) assigned to them and only when Tenant self-service is enabled, but a user with
this role cannot view audit logs for their assigned Tenant Units. In addition, tenant-users may run
the show and list commands.
none role
A user with a role of none is not allowed to perform any operations on a Data Domain system other
than changing their password and accessing data using DD Boost. However, after SMT is enabled,
the admin can select a user with a none role from the Data Domain system and assign them an
SMT-specific role of tenant-admin or tenant-user. Then, that user can perform operations on SMT
management objects.
management groups
BSPs (backup service providers) can use management groups defined in a single, external AD
(active directory) or NIS (network information service) to simplify managing user roles on Tenant
Units. Each BSP Tenant may be a separate, external company and may use a name-service such as
AD or NIS.
With SMT management groups, the AD and NIS servers are set up and configured by the admin in
the same way as SMT local users. The admin can ask their AD or NIS administrator to create and
populate the group. The admin then assigns an SMT role to the entire group. Any user within the
group who logs in to the Data Domain system is logged in with the role that is assigned to the
group.
When users leave or join a Tenant company, they can be removed or added to the group by the AD
or NIS administrator. It is not necessary to modify the RBAC configuration on a Data Domain
system when users who are part of the group are added or removed.
Tenant-unit Name
Enter tenant-unit name to be created
: SMT_5.7_tenant_unit
Invalid tenant-unit name.
Enter tenant-unit name to be created
: SMT_57_tenant_unit
Do you want to add a local management ip to this tenant-unit? (yes|no) [no]: yes
Choose an ip from above table or enter a new ip address. New ip addresses will need
to be created manually.
Ip Address
Enter the local management ip address to be added to this tenant-unit
: 192.168.10.57
Do you want to add another local management ip to this tenant-unit? (yes|no) [no]:
Do you want to add another remote management ip to this tenant-unit? (yes|no) [no]:
Do you want to create a mtree for this tenant-unit now? (yes|no) [no]: yes
MTree Name
Enter MTree name
: SMT_57_tenant_unit
Invalid mtree path name.
Enter MTree name
:
SMT_57_tenant_unit
MTree Soft-Quota
Enter the quota soft-limit to be set on this MTree (<n> {MiB|GiB|TiB|PiB}|none)
:
MTree Hard-Quota
Enter the quota hard-limit to be set on this MTree (<n> {MiB|GiB|TiB|PiB}|none)
:
Do you want to assign another MTree to this tenant-unit? (yes|no) [no]: yes
Do you want to create another mtree for this tenant-unit? (yes|no) [no]:
Do you want to configure a management user for this tenant-unit? (yes|no) [no]:
Do you want to configure a management group for this tenant-unit (yes|no) [no]: yes
Management-Group Name
Enter the group name to be assigned to this tenant-unit
: SMT_57_tenant_unit_group
Management-Group Type
What type do you want to assign to this group (nis|active-directory)?
: nis
Do you want to configure another management user for this tenant-unit? (yes|no) [no]:
Do you want to configure another management group for this tenant-unit? (yes|no) [no]:
Alert Configuration
Configuration complete.
Storage Unit. The backup application is granted access to the Storage Unit only if the user
credentials presented by the backup application match the user names associated with the
Storage Unit. If user credentials and user names do not match, the job fails with a permission error.
Modifying quotas
To meet QoS criteria, a system administrator uses DD OS “knobs” to adjust the settings required
by the Tenant configuration. For example, the administrator can set “soft” and “hard” quota limits
on DD Boost Storage Units. Stream “soft” and “hard” quota limits can be allocated only to DD
Boost Storage Units assigned to Tenant Units. After the administrator sets the quotas, the tenant-
admin can monitor one or all Tenant Units to ensure no single object exceeds its allocated quotas
and deprives others of system resources.
Action: This alert is expected after loss of AC (main power) event. If this
shutdown is not expected and persists, contact your contracted support provider
or visit us online at https://my.datadomain.com.
Tenant description: The system has experienced an unexpected power loss and has
restarted.
Tenant action: This alert is generated when the system restarts after a power
loss. If this alert repeats, contact your System Administrator.
Managing snapshots
A snapshot is a read-only copy of an MTree captured at a specific point in time. A snapshot can be
used for many things, for example, as a restore point in case of a system malfunction. The required
role for using snapshots is admin or tenant-admin.
To view snapshot information for an MTree or a Tenant Unit:
# snapshot list mtree mtree-path | tenant-unit tenant-unit
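For example, using the illustrative MTree and Tenant Unit names shown in the smt tenant-unit show output above:
# snapshot list mtree /data/col1/su1
# snapshot list tenant-unit tu1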
Supported platforms
Cloud Tier is supported on physical platforms that have the necessary memory, CPU, and storage
connectivity to accommodate another storage tier.
DD Cloud Tier is supported on these systems:
a. The minimum metadata size is a hard limit. Data Domain recommends users start with 1 TB for metadata storage and
expand in 1 TB increments. The Data Domain Virtual Edition Installation and Administration Guide provides more details
about using DD Cloud Tier with DD VE.
Note: DD Cloud Tier is supported with Data Domain High Availability (HA). Both nodes must be
running DD OS 6.0 (or higher), and they must be HA-enabled.
Note: DD Cloud Tier is not supported on any system that is not listed and is not supported on
any system with the Extended Retention feature enabled or configured with Collection
Replication.
Note: The Cloud Tier feature may consume all available bandwidth in a shared WAN link,
especially in a low bandwidth configuration (1 Gbps), and this may impact other applications
sharing the WAN link. If there are shared applications on the WAN, the use of QoS or other
network limiting is recommended to avoid congestion and ensure consistent performance over
time.
If bandwidth is constrained, the rate of data movement will be slow and you will not be able to
move as much data to the cloud. It is best to use a dedicated link for data going to the Cloud
Tier.
Note: Do not send traffic over onboard management network interface controllers (ethMx
interfaces).
l If the physical capacity reporting feature is enabled and scheduled, Seeding mode migration
suspends capacity reporting for the duration of the Seeding-based migration.
l Migration in Seeding mode is supported only on cloud-enabled Data Domain systems and
configurations that have more than 80 GB of RAM. Seeding-based migration is disabled by
default for DD VEs.
Large object size
DD Cloud Tier uses object sizes of 1 MB or 4 MB (depending on the cloud storage provider) to
reduce the metadata overhead, and lower the number of objects to migrate to cloud storage.
Proxy settings
If there are any existing proxy settings that cause data above a certain size to be rejected, those
settings must be changed to allow object sizes up to 4.5MB.
If customer traffic is being routed through a proxy, the self-signed/CA-signed proxy certificate
must be imported. See "Importing CA certificates" for details.
OpenSSL cipher suites
l Ciphers - ECDHE-RSA-AES256-SHA384, AES256-GCM-SHA384
l TLS Version: 1.2
Note: Default communication with all cloud providers is initiated with a strong cipher.
Supported protocols
l HTTP
l HTTPS
Note: Default communication with all public cloud providers occurs on secure HTTP (HTTPS),
but you can overwrite the default setting to use HTTP.
Importing CA certificates
Before you can add cloud units for Alibaba, Amazon Web Services S3 (AWS), Azure, Elastic Cloud
Storage (ECS), and Google Cloud Platform (GCP), you must import CA certificates.
Before you begin
For AWS and Azure public cloud providers, root CA certificates can be downloaded from https://
www.digicert.com/digicert-root-certificates.htm.
l For an AWS cloud provider, download the Baltimore CyberTrust Root certificate.
l For an Azure cloud provider, download the Baltimore CyberTrust Root certificate.
l For ECS, the root certificate authority varies by customer.
Implementing cloud storage on ECS requires a load balancer. If an HTTPS endpoint is used as
an endpoint in the configuration, be sure to import the root CA certificate. Contact your load
balancer provider for details.
l For an S3 Flexible provider, import the root CA certificate. Contact your S3 Flexible provider
for details.
If your downloaded certificate has a .crt extension, it is likely that it will need to be converted to a
PEM-encoded certificate. If so, use OpenSSL to convert the file from .crt format to .pem (for
example, openssl x509 -inform der -in BaltimoreCyberTrustRoot.crt -out
BaltimoreCyberTrustRoot.pem).
l For Alibaba:
1. Download the GlobalSign Root R1 certificate from https://support.globalsign.com/
customer/portal/articles/1426602-globalsign-root-certificates.
2. Convert the downloaded certificate to a PEM-encoded format. The OpenSSL command for
this conversion is: openssl x509 -inform der -in <root_cert.crt> -out
<root_cert.pem>.
3. Import the certificate to the system.
l For GCP:
1. Download the GlobalSign Root R2 certificate from https://support.globalsign.com/
customer/portal/articles/1426602-globalsign-root-certificates.
2. Convert the downloaded certificate to a PEM-encoded format. The OpenSSL command for
this conversion is: openssl x509 -inform der -in <root_cert.crt> -out
<root_cert.pem>.
3. Import the certificate to the system.
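If you prefer the CLI, the converted PEM certificate can also be imported with the adminaccess command used later in this guide; a minimal sketch (paste the certificate contents at the prompt):
# adminaccess certificate import ca application cloud
Enter the certificate and then press Control-D, or press Control-C to
cancel.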
Procedure
1. Select Data Management > File System > Cloud Units.
2. In the tool bar, click Manage Certificates.
The Manage Certificates for Cloud dialog is displayed.
3. Click Add.
4. Select one of these options:
l I want to upload the certificate as a .pem file.
Browse to and select the certificate file.
l I want to copy and paste the certificate text.
n Copy the contents of the .pem file to your copy buffer.
n Paste the buffer into the dialog.
5. Click Add.
By default, ECS runs the S3 protocol on port 9020 for HTTP and 9021 for HTTPS. With a
load balancer, these ports are sometimes remapped to 80 for HTTP and 443 for HTTPS,
respectively. Check with your network administrator for the proper ports.
8. If an HTTP proxy server is required to get around a firewall for this provider, click Configure
for HTTP Proxy Server.
Enter the proxy hostname, port, user, and password.
Note: There is an optional step to run the cloud provider verify tool before adding the
cloud unit. This tool performs pre-check tests to ensure that all requirements are met
before adding the actual cloud unit.
9. Click Add.
The File System main window now displays summary information for the new cloud unit as
well a control for enabling and disabling the cloud unit.
The Alibaba Cloud user credentials must have permissions to create and delete buckets and to add,
modify, and delete files within the buckets they create. AliyunOSSFullAccess is preferred, but
these are the minimum requirements:
l ListBuckets
l GetBucket
l PutBucket
l DeleteBucket
l GetObject
l PutObject
l DeleteObject
Procedure
1. Select Data Management > File System > Cloud Units.
2. Click Add.
The Add Cloud Unit dialog is displayed.
3. Enter a name for this cloud unit. Only alphanumeric characters are allowed.
The remaining fields in the Add Cloud Unit dialog pertain to the cloud provider account.
4. For Cloud provider, select Alibaba Cloud from the drop-down list.
5. Select Standard or IA from the Storage class drop-down list.
6. Select the region from the Storage region drop-down list.
7. Enter the provider Access key as password text.
8. Enter the provider Secret key as password text.
9. Ensure that port 443 (HTTPS) is not blocked in firewalls. Communication with the Alibaba
cloud provider occurs on port 443.
10. If an HTTP proxy server is required to get around a firewall for this provider, click Configure
for HTTP Proxy Server.
Enter the proxy hostname, port, user, and password.
Note: There is an optional step to run the cloud provider verify tool before adding the
cloud unit. This tool performs pre-check tests to ensure that all requirements are met
before adding the actual cloud unit.
Note: The AWS user credentials must have permissions to create and delete buckets and to
add, modify, and delete files within the buckets they create. S3FullAccess is preferred, but
these are the minimum requirements:
l CreateBucket
l ListBucket
l DeleteBucket
l ListAllMyBuckets
l GetObject
l PutObject
l DeleteObject
Procedure
1. Select Data Management > File System > Cloud Units.
2. Click Add.
The Add Cloud Unit dialog is displayed.
3. Enter a name for this cloud unit. Only alphanumeric characters are allowed.
The remaining fields in the Add Cloud Unit dialog pertain to the cloud provider account.
4. For Cloud provider, select Amazon Web Services S3 from the drop-down list.
Note: There is an optional step to run the cloud provider verify tool before adding the
cloud unit. This tool performs pre-check tests to ensure that all requirements are met
before adding the actual cloud unit.
Google Cloud Storage regions (region and location):
Region           Location
---------------  -----------
us-central1      Iowa
us-west1         Oregon
europe-west1     Belgium
europe-west2     London
europe-west3     Frankfurt
europe-west4     Netherlands
asia-northeast1  Tokyo
asia-south1      Mumbai
asia-southeast1  Singapore
The Google Cloud Provider user credentials must have permissions to create and delete buckets
and to add, modify, and delete files within the buckets they create. These are the minimum
requirements:
l ListBucket
l PutBucket
l GetBucket
l DeleteBucket
l GetObject
l PutObject
l DeleteObject
Note:
DD Cloud Tier supports only the Nearline storage class, which is selected automatically during setup.
Procedure
1. Select Data Management > File System > Cloud Units.
2. Click Add.
The Add Cloud Unit dialog is displayed.
3. Enter a name for this cloud unit. Only alphanumeric characters are allowed.
The remaining fields in the Add Cloud Unit dialog pertain to the cloud provider account.
4. For Cloud provider, select Google Cloud Storage from the drop-down list.
5. Enter the provider Access key as password text.
6. Enter the provider Secret key as password text.
7. Storage class is set as Nearline by default.
If a multi-regional location is selected (Asia, EU, or US), the storage class and the
location constraint are Nearline Multi-regional. All other regional locations have the storage
class set to Nearline Regional.
8. Select the Region.
9. Ensure that port 443 (HTTPS) is not blocked in firewalls. Communication with Google Cloud
Provider occurs on port 443.
10. If an HTTP proxy server is required to get around a firewall for this provider, click Configure
for HTTP Proxy Server.
Enter the proxy hostname, port, user, and password.
Note: There is an optional step to run the cloud provider verify tool before adding the
cloud unit. This tool performs pre-check tests to ensure that all requirements are met
before adding the actual cloud unit.
Procedure
1. Select Data Management > File System > Cloud Units.
2. Click Add.
The Add Cloud Unit dialog is displayed.
3. Enter a name for this cloud unit. Only alphanumeric characters are allowed.
The remaining fields in the Add Cloud Unit dialog pertain to the cloud provider account.
4. For Cloud provider, select Flexible Cloud Tier Provider Framework for S3 from the drop-
down list.
5. Enter the provider Access key as password text.
6. Enter the provider Secret key as password text.
7. Specify the appropriate Storage region.
8. Enter the provider Endpoint in this format: http://<ip/hostname>:<port>. If you are
using a secure endpoint, use https instead.
9. For Storage class, select the appropriate storage class from the drop-down list.
10. Ensure that port 443 (HTTPS) is not blocked in firewalls. Communication with the S3 cloud
provider occurs on port 443.
11. If an HTTP proxy server is required to get around a firewall for this provider, click Configure
for HTTP Proxy Server.
Enter the proxy hostname, port, user, and password.
Note: There is an optional step to run the cloud provider verify tool before adding the
cloud unit. This tool performs pre-check tests to ensure that all requirements are met
before adding the actual cloud unit.
5. For Secret key, enter the new provider secret key as password text.
6. For Primary key, enter the new provider primary key as password text.
Note: Modifying the primary key is only supported for Azure environments.
7. If an HTTP proxy server is required to get around a firewall for this provider, click Configure
for HTTP Proxy Server.
8. Click OK.
Wait for cleaning to complete. The cleaning may take time depending on how much data is
present in the cloud unit.
4. Disable the file system.
5. Use the following CLI command to delete the cloud unit.
# cloud unit del unit-name
Data movement
Data is moved from the active tier to the cloud tier as specified by your individual data movement
policy. The policy is set on a per-MTree basis. Data movement can be initiated manually or
automatically using a schedule.
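The Procedure below uses DD System Manager; the same policy can also be set and run from the CLI with the data-movement commands shown later in this chapter. A minimal sketch, using a hypothetical MTree path, a hypothetical cloud unit name, and a 30-day threshold:
# data-movement policy set age-threshold 30 to-tier cloud cloud-unit cloud-unit-1 mtrees /data/col1/mt11
# data-movement start mtrees /data/col1/mt11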
Procedure
1. Select Data Management > MTree.
2. In the top panel, select the MTree to which you want to add a data movement policy.
3. Click the Summary tab.
4. Under Data Movement Policy click Add.
5. For File Age in Days, set the file age threshold (Older than) and optionally, the age range
(Younger than).
Note: The minimum number of days for Older than is 14. For nonintegrated backup
applications, files moved to the cloud tier cannot be accessed directly and need to be
recalled to the active tier before you can access them. So, choose the age threshold
value as appropriate to minimize or avoid the need to access a file moved to the cloud
tier.
between two runs. If the cloud unit becomes available and you cannot wait for the next
scheduled run, you can start data movement manually.
Note: If a file resides only in a snapshot, it cannot be recalled directly. To recall a file in a
snapshot, use fastcopy to copy the file from the snapshot back to the active MTree, then
recall the file from the cloud. A file can only be recalled from the cloud to an active MTree.
Procedure
1. Select Data Management > File System > Summary.
2. Do one of the following:
l In the Cloud Tier section of the Space Usage panel, click Recall.
l Expand the File System status panel at the bottom of the screen and click Recall.
Note: The Recall link is available only if a cloud unit is created and has data.
3. In the Recall File from Cloud dialog, enter the exact file name (no wildcards) and full path of
the file to be recalled, for example: /data/col1/mt11/file1.txt. Click Recall.
4. To check the status of the recall, do one of the following:
l In the Cloud Tier section of the Space Usage panel, click Details.
l Expand the File System status panel at the bottom of the screen and click Details.
The Cloud File Recall Details dialog is displayed, showing the file path, cloud provider, recall
progress, and amount of data transferred. If there are unrecoverable errors during the recall,
an error message is displayed. Hover the cursor over the error message to display a tool tip
with more details and possible corrective actions.
Results
Once the file has been recalled to the active tier, you can restore the data.
Note: For nonintegrated applications, once a file has been recalled from the cloud tier to the
active tier, a minimum of 14 days must elapse before the file is eligible for data movement.
After 14 days, normal data movement processing resumes for the file. The file must then satisfy
the age-threshold or age-range again before it moves back to the cloud; at that point the ptime is
examined rather than the mtime. This restriction does not apply to integrated applications.
Note: For data-movement, nonintegrated applications configure an age-based data movement
policy on the Data Domain system to specify which files get migrated to the cloud tier, and this
policy applies uniformly to all files in an MTree. Integrated applications use an application-
managed data movement policy, which lets you identify specific files to be migrated to the
cloud tier.
a recall before cloud-based backups can be restored. Once a file is recalled, its aging is reset and
starts again from zero; the file then becomes eligible for data movement again according to the
configured age policy. A file can be recalled on the source MTree only. Integrated applications can recall a file directly.
About this task
Note: If a file resides only in a snapshot, it cannot be recalled directly. To recall a file in a
snapshot, use fastcopy to copy the file from the snapshot back to the active MTree, then
recall the file from the cloud. A file can only be recalled from the cloud to an active MTree.
Procedure
1. Check the location of the file using:
filesys report generate file-location [path {<path-name> | all}]
[output-file <filename>]
The pathname can be a file or directory; if it is a directory, all files in the directory are listed.
Filename Location
-------- --------
/data/col1/mt11/file1.txt Cloud Unit 1
If the status shows that the recall is not running for a given path, the recall may have
finished, or it may have failed.
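The recall itself is started with the data-movement recall command described later in this chapter; for example, using the file location reported above:
# data-movement recall path /data/col1/mt11/file1.txt
When the recall completes, re-running the filesys report generate file-location command should show the file on the active tier rather than in a cloud unit.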
Results
Once the file has been recalled to the active tier, you can restore the data.
Note: For nonintegrated applications, once a file has been recalled from the cloud tier to the
active tier, a minimum of 14 days must elapse before the file is eligible for data movement.
After 14 days, normal data movement processing will occur for the file. This restriction does
not apply to integrated applications.
Note: For data-movement, nonintegrated applications configure an age-based data movement
policy on the Data Domain system to specify which files get migrated to the cloud tier, and this
policy applies uniformly to all files in an MTree. Integrated applications use an application-
managed data movement policy, which lets you identify specific files to be migrated to the
cloud tier.
If the license is not installed, use the elicense update command to install the license.
Enter the command and paste the contents of the license file after this prompt. After
pasting, ensure there is a carriage return, then press Control-D to save. You are
prompted to replace licenses, and after answering yes, the licenses are applied and
displayed.
# elicense update
Enter the content of license file and then press Control-D, or press
Control-C to cancel.
2. Install certificates.
Before you can create a cloud profile, you must install the associated certificates. See
Importing the certificates on page 553 for more information.
For AWS, Virtustream, and Azure public cloud providers, root CA certificates can be
downloaded from https://www.digicert.com/digicert-root-certificates.htm.
l For an AWS or Azure cloud provider, download the Baltimore CyberTrust Root
certificate.
l For Alibaba, download the GlobalSign Root R1 certificate from https://support.globalsign.com/customer/portal/articles/1426602-globalsign-root-certificates.
l For a Virtustream cloud provider, download the DigiCert High Assurance EV Root CA
certificate.
l For ECS, the root certificate authority will vary by customer. Contact your load balancer
provider for details.
Downloaded certificate files have a .crt extension. Use openssl on any Linux or Unix system
where it is installed to convert the file from .crt format to .pem.
$ openssl x509 -inform der -in DigiCertHighAssuranceEVRootCA.crt -out
DigiCertHighAssuranceEVRootCA.pem
$ openssl x509 -inform der -in BaltimoreCyberTrustRoot.crt -out
BaltimoreCyberTrustRoot.pem
# adminaccess certificate import ca application cloud
Enter the certificate and then press Control-D, or press Control-C to
cancel.
3. To configure the Data Domain system for data-movement to the cloud, you must first
enable the “cloud” feature and set the system passphrase if it has not already been set.
# cloud enable
Cloud feature requires that passphrase be set on the system.
Enter new passphrase:
Re-enter new passphrase:
Passphrases matched.
The passphrase is set.
Encryption is recommended on the cloud tier.
Do you want to enable encryption? (yes|no) [yes]:
Encryption feature is enabled on the cloud tier.
Cloud feature is enabled.
4. Configure the cloud profile using the cloud provider credentials. The prompts and variables
vary by provider.
# cloud profile add <profilename>
Note: For security reasons, this command does not display the access/secret keys you
enter.
Select the provider:
Enter provider name (alibabacloud|aws|azure|ecs|google|s3_flexible|
virtustream)
l Alibaba Cloud requires access key, secret key, storage class and region.
l AWS S3 requires access key, secret key, storage class, and region.
l Azure requires account name, whether or not the account is an Azure Government
account, primary key, secondary key, and storage class.
l ECS requires entry of access key, secret key and endpoint.
l Google Cloud Platform requires access key, secret key, and region. (Storage class is
Nearline.)
l S3 Flexible providers require the provider name, access key, secret key, region, endpoint,
and storage class.
l Virtustream requires access key, secret key, storage class, and region.
At the end of each profile addition you are asked if you want to set up a proxy. If you do,
these values are required: proxy hostname, proxy port, proxy username, and proxy
password.
5. Verify the cloud profile configuration:
# cloud profile show
Use the cloud unit list command to list the cloud units.
Connectivity Check:
Checking firewall access: PASSED
Validating certificate PASSED
Account Validation:
Creating temporary profile: PASSED
Creating temporary bucket: PASSED
S3 API Validation:
Validating Put Bucket: PASSED
Validating List Bucket: PASSED
Validating Put Object: PASSED
Validating Get Object: PASSED
Validating List Object: PASSED
Validating Delete Object: PASSED
Validating Bulk Delete: PASSED
Cleaning Up:
Deleting temporary bucket: PASSED
Deleting temporary profile: PASSED
12. Configure the file migration policy for this MTree. You can specify multiple MTrees in this
command. The policy can be based on the age threshold or the range.
a. To configure the age-threshold (migrating files older than the specified age to cloud):
# data-movement policy set age-threshold age_in_days to-tier cloud
cloud-unit unitname mtrees mtreename
b. To configure the age-range (migrating only those files that are in the specified age-
range):
# data-movement policy set age-range min-age age_in_days max-age
age_in_days to-tier cloud cloud-unit unitname mtrees mtreename
13. Export the file system, and from the client, mount the file system and ingest data into the
active tier. Change the modification date on the ingested files such that they now qualify for
data migration. (Set the date to older than the age-threshold value specified when
configuring the data-movement policy.)
14. Initiate file migration of the aged files. Again, you can specify multiple MTrees with this
command.
# data-movement start mtrees mtreename
15. Verify that file migration worked and the files are now in the cloud tier:
# filesys report generate file-location path all
16. Once you have migrated a file to the cloud tier, you cannot directly read from the file
(attempting to do so results in an error). The file can only be recalled back to the active tier.
To recall a file to the active tier:
# data-movement recall path pathname
3. Enter the security officer Username and Password. Optionally, check Restart file system
now.
4. Click Enable or Disable, as appropriate.
5. In the File System Lock panel, lock or unlock the file system.
6. In the Key Management panel, click Configure.
7. In the Change Key Manager dialog, configure security officer credentials and the key
manager.
Note: Cloud encryption is allowed only through the Data Domain Embedded Key
Manager. External key managers are not supported.
8. Click OK.
9. Use the DD Encryption Keys panel to configure encryption keys.
Directory replication only works on the /backup MTree, and this MTree cannot be assigned to the
Cloud Tier. So, directory replication is not affected by Cloud Tier.
Managed file replication and MTree replication are supported on Cloud Tier enabled Data Domain
systems. One or both systems can have Cloud Tier enabled. If the source system is Cloud Tier
enabled, data may need to be read from the cloud if the file was already migrated to the Cloud
Tier. A replicated file is always placed first in the Active Tier on the destination system even when
Cloud Tier is enabled. A file can be recalled from the Cloud Tier back to the Active Tier on the
source MTree only. Recall of a file on the destination MTree is not allowed.
Note: If the source system is running DD OS 5.6 or 5.7 and replicating into a Cloud Tier
enabled system using MTree replication, the source system must be upgraded to a release that
can replicate to a Cloud Tier enabled system. See the DD OS Release Notes for system
requirements.
Note: Files in the Cloud Tier cannot be used as base files for virtual synthetic operations. The
incremental forever or synthetic full backups need to ensure that the files remain in the Active
Tier if they will be used in virtual synthesis of new backups.
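As an illustration only (the hostnames and MTree path are hypothetical; the same command syntax appears in the migration procedure later in this guide), MTree replication between two Cloud Tier enabled systems can be configured from the CLI as follows:
# replication add source mtree://ddsource.example.com/data/col1/mt11 destination mtree://ddtarget.example.com/data/col1/mt11 encryption enabled
Run the equivalent command on the destination system as well, then initiate replication from the source system as described in the migration procedure later in this guide.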
Procedure
1. Disable the file system.
# filesys disable
ok, proceeding.
Please wait..............
The filesystem is now disabled.
6. Run the cloud unit list command to verify that neither cloud unit appears.
Contact Support if one or both cloud units still display with the status Delete-Pending.
7. Identify the disk enclosures that are assigned to DD Cloud Tier.
# storage show tier cloud
Domain shelf types cannot be mixed in the same shelf set, and the shelf sets must be balanced
according to the configuration rules specified in the ES30 Expansion Shelf Hardware Guide
or DS60 Expansion Shelf Hardware Guide. With DD Extended Retention, you can attach
significantly more storage to the same controller. For example, you can attach up to a
maximum of 56 ES30 shelves on a DD990 with DD Extended Retention. The active tier must
include storage consisting of at least one shelf. For the minimum and maximum shelf
configuration for the Data Domain controller models, refer to the expansion shelf hardware
guides for ES30 and DS60.
Data Protection
On a DD Extended Retention-enabled DD system, data is protected with built-in fault isolation
features, disaster recovery capability, and DIA (Data Invulnerability Architecture). DIA checks files
when they are moved from the active to the retention tier. After data is copied into the retention
tier, the container and file system structures are read back and verified. The location of the file is
updated, and the space on the active tier is reclaimed after the file is verified to have been
correctly written to the retention tier.
When a retention unit is filled up, namespace information and system files are copied into it, so the
data in the retention unit may be recovered even when other parts of the system are lost.
Note: Sanitization and some forms of Replication are not supported for DD Extended
Retention-enabled DD systems.
Space Reclamation
To reclaim space that has been freed up by data moved to the retention tier, you can use Space
Reclamation (as of DD OS 5.3), which runs in the background as a low-priority activity. It suspends
itself when there are higher priority activities, such as data movement and cleaning.
Encryption of Data at Rest
As of DD OS 5.5.1, you can use the Encryption of Data at Rest feature on DD Extended Retention-
enabled DD systems, if you have an encryption license. Encryption is not enabled by default.
This is an extension of the encryption capability already available, prior to DD OS 5.5.1, for systems
not using DD Extended Retention.
Refer to the Managing Encryption of Data at Rest chapter in this guide for complete instructions
on setting up and using the encryption feature.
Policy on the destination system then determines when the replicated data is moved to the
retention tier.
About this task
Note that MTree replication restrictions and policies vary by DD OS release, as follows:
l As of DD OS 5.1, data can be replicated from a non-DD Extended Retention-enabled system to
a DD Extended Retention-enabled system with MTree replication.
l As of DD OS 5.2, data can be protected within an active tier by replicating it to the active tier
of a DD Extended Retention-enabled system.
l As of DD OS 5.5, MTree replication is supported from a DD Extended Retention-enabled
system to a non-DD Extended Retention-enabled system if both are running DD OS 5.5 or
later.
l For DD OS 5.3 and 5.4, if you plan to enable DD Extended Retention, do not set up replication
for the /backup MTree on the source machine. (DD OS 5.5 and later do not have this
restriction.)
DD990
l 256 GB of RAM
l 1 - NVRAM IO module (2 GB)
l 4 - Quad-port SAS IO modules
l 2 - 1 GbE ports on the motherboard
l 0 to 4 - 1 GbE NIC IO cards for external connectivity
l 0 to 3 - 10 GbE NIC cards for external connectivity
l 0 to 3 - Dual-Port FC HBA cards for external connectivity
l 0 to 3 - Combined NIC and FC cards, not to exceed three of any one specific IO module
l 1 to 56 - ES20 or ES30 shelves (1, 2, or 3 TB disks), not to exceed the system maximum usable
capacity of 570 TB
If DD Extended Retention is enabled on a DD990, the maximum usable storage capacity of the
active tier is 570 TB. The retention tier can have a maximum usable capacity of 570 TB. The active
and retention tiers have a total usable storage capacity of 1140 TB.
DD4200
l 128 GB of RAM
l 1 - NVRAM IO module (4 GB)
l 4 - Quad-port SAS IO modules
l 1 - 1 GbE port on the motherboard
l 0 to 6 - 1/10 GbE NIC cards for external connectivity
l 0 to 6 - Dual-Port FC HBA cards for external connectivity
l 0 to 6 - Combined NIC and FC cards, not to exceed four of any one specific IO module
l 1 to 16 - ES30 SAS shelves (2 or 3 TB disks), not to exceed the system maximum usable
capacity of 192 TB. ES30 SATA shelves (1, 2, or 3 TB disks) are supported for system
controller upgrades.
If DD Extended Retention is enabled on a DD4200, the maximum usable storage capacity of the
active tier is 192 TB. The retention tier can have a maximum usable capacity of 192 TB. The active
and retention tiers have a total usable storage capacity of 384 TB. External connectivity is
supported for DD Extended Retention configurations up to 16 shelves.
DD4500
l 192 GB of RAM
l 1 - NVRAM IO module (4 GB)
l 4 - Quad-port SAS IO modules
l 1 - 1 GbE port on the motherboard
l 0 to 6 - 1/10 GbE NIC IO cards for external connectivity
l 0 to 6 - Dual-Port FC HBA cards for external connectivity
l 0 to 5 - Combined NIC and FC cards, not to exceed four of any one specific IO module
l 1 to 20 - ES30 SAS shelves (2 or 3 TB disks), not to exceed the system maximum usable
capacity of 285 TB. ES30 SATA shelves (1 TB, 2 TB, or 3 TB) are supported for system
controller upgrades.
If DD Extended Retention is enabled on a DD4500, the maximum usable storage capacity of the
active tier is 285 TB. The retention tier can have a maximum usable capacity of 285 TB. The active
and retention tiers have a total usable storage capacity of 570 TB. External connectivity is
supported for DD Extended Retention configurations up to 24 shelves.
DD6800
l 192 GB of RAM
l 1 - NVRAM IO module (8 GB)
l 3 - Quad-port SAS IO modules
l 1 - 1 GbE port on the motherboard
l 0 to 4 - 1/10 GbE NIC cards for external connectivity
l 0 to 4 - Dual-Port FC HBA cards for external connectivity
l 0 to 4 - Combined NIC and FC cards
l Shelf combinations are documented in the installation and setup guide for your DD system, and
the expansion shelf hardware guides for your expansion shelves.
If DD Extended Retention is enabled on a DD6800, the maximum usable storage capacity of the
active tier is 288 TB. The retention tier can have a maximum usable capacity of 288 TB. The active
and retention tiers have a total usable storage capacity of 0.6 PB. External connectivity is
supported for DD Extended Retention configurations up to 28 shelves.
DD7200
l 256 GB of RAM
l 1 - NVRAM IO module (4 GB)
l 4 - Quad-port SAS IO modules
l 1 - 1 GbE port on the motherboard
l 0 to 6 - 1/10 GbE NIC cards for external connectivity
l 0 to 6 - Dual-Port FC HBA cards for external connectivity
l 0 to 5 - Combined NIC and FC cards, not to exceed four of any one specific IO module
l 1 to 20 - ES30 SAS shelves (2 or 3 TB disks), not to exceed the system maximum usable
capacity of 432 TB. ES30 SATA shelves (1 TB, 2 TB, or 3 TB) are supported for system
controller upgrades.
If DD Extended Retention is enabled on a DD7200, the maximum usable storage capacity of the
active tier is 432 TB. The retention tier can have a maximum usable capacity of 432 TB. The active
and retention tiers have a total usable storage capacity of 864 TB. External connectivity is
supported for DD Extended Retention configurations up to 32 shelves.
DD9300
l 384 GB of RAM
l 1 - NVRAM IO module (8 GB)
l 3 - Quad-port SAS IO modules
l 1 - 1 GbE port on the motherboard
l 0 to 4 - 1/10 GbE NIC cards for external connectivity
l 0 to 4 - Dual-Port FC HBA cards for external connectivity
l 0 to 4 - Combined NIC and FC cards
l Shelf combinations are documented in the installation and setup guide for your DD system, and
the expansion shelf hardware guides for your expansion shelves.
If DD Extended Retention is enabled on a DD9300, the maximum usable storage capacity of the
active tier is 720 TB. The retention tier can have a maximum usable capacity of 720 TB. The active
and retention tiers have a total usable storage capacity of 1.4 PB. External connectivity is
supported for DD Extended Retention configurations up to 28 shelves.
DD9500
l 512 GB of RAM
l 1 - NVRAM IO module (8 GB)
l 4 - Quad-port SAS IO modules
l 1 - Quad 1 GbE ports on the motherboard
l 0 to 4 - 10 GbE NIC cards for external connectivity
l 0 to 4 - Dual-Port 16 Gb FC HBA cards for external connectivity
l Shelf combinations are documented in the installation and setup guide for your DD system, and
the expansion shelf hardware guides for your expansion shelves.
If DD Extended Retention is enabled on a DD9500, the maximum usable storage capacity of the
active tier is 864 TB. The retention tier can have a maximum usable capacity of 864 TB. The active
and retention tiers have a total usable storage capacity of 1.7 PB. External connectivity is
supported for DD Extended Retention configurations up to 56 shelves.
DD9800
l 768 GB of RAM
l 1 - NVRAM IO module (8 GB)
l 4 - Quad-port SAS IO modules
l 1 - Quad 1 GbE ports on the motherboard
l 0 to 4 - 10 GbE NIC cards for external connectivity
l 0 to 4 - Dual-Port 16 Gb FC HBA cards for external connectivity
l Shelf combinations are documented in the installation and setup guide for your DD system, and
the expansion shelf hardware guides for your expansion shelves.
If DD Extended Retention is enabled on a DD9800, the maximum usable storage capacity of the
active tier is 1008 TB. The retention tier can have a maximum usable capacity of 1008 TB. The
active and retention tiers have a total usable storage capacity of 2.0 PB. External connectivity is
supported for DD Extended Retention configurations up to 56 shelves.
3. Enter one or more licenses, one per line, pressing the Enter key after each one. Click Add
when you have finished. If there are any errors, a summary of the added licenses, and those
not added because of the error, are listed. Select the erroneous License Key to fix it.
Results
The licenses for the DD system are displayed in two groups:
l Software option licenses, which are required for options such as DD Extended Retention and
DD Boost.
l Shelf Capacity Licenses, which display shelf capacity (in TiB), the shelf model (such as ES30),
and the shelf’s storage tier (active or retention).
To delete a license, select the license in the Licenses list, and click Delete Selected Licenses. If
prompted to confirm, read the warning, and click OK to continue.
n See the Data Domain Expansion Shelf Hardware Guide for your shelf model (ES20, ES30, or
DS60).
CLI Equivalent
You can also verify that the Extended Retention license has been installed at the CLI.
To use the legacy licensing method:
# license show
## License Key Feature
-- ------------------- -----------
1 AAAA-BBBB-CCCC-DDDD Replication
2 EEEE-FFFF-GGGG-HHHH VTL
-- ------------------- -----------
If the license is not present, each unit includes documentation – a quick install card –
which shows the licenses that have been purchased. Enter the following command to
populate the license key.
# license add license-code
If the license is not present, update the license file with the new feature license.
# elicense update mylicense.lic
New licenses: Storage Migration
Feature licenses:
## Feature Count Mode Expiration Date
-- ----------------- ----- --------------- ---------------
1 REPLICATION 1 permanent (int) n/a
2 VTL 1 permanent (int) n/a
3 EXTENDED RETENTION 1 permanent (int) n/a
-- ------------------ ----- --------------- ---------------
** This will replace all existing Data Domain licenses on the system with the above EMC
ELMS licenses.
Do you want to proceed? (yes|no) [yes]: yes
eLicense(s) updated.
Create an archive unit, and add it to the file system. You are asked to specify the number of
enclosures in the archive unit:
# filesys archive unit add
Verify that the archive unit is created and added to the file system:
# filesys archive unit list all
3. Click Configure.
4. In the Configure Storage dialog, make sure that Active Tier is displayed as the Configure
selection, and click OK.
5. After the configuration completes, you are returned to the Expand File System Capacity
dialog. Select Finish to complete the active tier expansion.
4. Select the size to expand the retention unit, then click Configure.
5. After configuration completes, you are returned to the Expand File System Capacity dialog.
Click Finish to complete the retention tier expansion.
CLI Equivalent
To enable space reclamation:
# archive space-reclamation start
Previous Cycle:
---------------
Start time : Feb 21 2014 14:17
End time : Feb 21 2014 14:49
Effective run time : 0 days, 00:32.
Percent completed : 00 % (was stopped by user)
Units reclaimed : None
Space freed on target unit : None
Total space freed : None
schedule is every two weeks. File System Cleaning lets you elect not to have a system cleaning
after data movement; however, it is strongly recommended that you leave this option selected.
File Age Threshold per MTree Link
Selecting the File Age Threshold per MTree link will take you from the File System to the MTree
area (also accessible by selecting Data Management > MTree), where you can set a customized
File Age Threshold for each of your MTrees.
Select the MTree, and then select Edit next to Data Movement Policy. In the Modify Age
Threshold dialog, enter a new value for File Age Threshold, and select OK. As of DD OS 5.5.1, the
minimum value is 14 days.
Encryption Tab
The Encryption tab lets you enable or disable Encryption of Data at Rest, which is supported only
for systems with a single retention unit. As of 5.5.1, DD Extended Retention supports only a single
retention unit, so systems set up during, or after, 5.5.1 will have no problem complying with this
restriction. However, systems set up prior to 5.5.1 may have more than one retention unit, but
they will not work with Encryption of Data at Rest until all but one retention unit has been
removed, or data has been moved or migrated to one retention unit.
Space Usage Tab
The Space Usage Tab lets you select one of three chart types [(entire) File System; Active (tier);
Archive (tier)] to view space usage over time in MiB. You can also select a duration value (7, 30,
60, or 120 days) at the upper right. The data is presented (color-coded) as pre-compression
written (blue), post-compression used (red), and the compression factor (black).
Consumption Tab
The Consumption Tab lets you select one of three chart types [(entire) File System; Active (tier);
Archive (tier)] to view the amount of post-compression storage used and the compression ratio
over time, which enables you to view consumption trends. You can also select a duration value (7,
30, 60, or 120 days) at the upper right. The Capacity checkbox lets you choose whether to display
the post-compression storage against total system capacity.
Daily Written Tab
The Daily Written Tab lets you select a duration (7, 30, 60, or 120 days) to see the amount of data
written per day. The data is presented (color-coded) in both graph and table format as pre-
compression written (blue), post-compression used (red), and the compression factor (black).
The system displays a warning telling you that you cannot revert the file system to its
original size after this operation.
5. Click Expand to expand the file system.
You can specify different File Age Thresholds for each defined MTree. An MTree is a subtree
within the namespace that is a logical set of data for management purposes. For example, you
might place financial data, emails, and engineering data in separate MTrees.
To take advantage of the space reclamation feature, introduced in DD OS 5.3, it is recommended
that you schedule data movement and file system cleaning on a bi-weekly (every 14 days) basis. By
default, cleaning is always run after data movement completes. It is highly recommended that you
do not change this default.
Avoid these common sizing errors:
l Setting a Data Movement Policy that is overly aggressive; data will be moved too soon.
l Setting a Data Movement Policy that is too conservative: after the active tier fills up, you will
not be able to write data to the system.
l Having an undersized active tier and then setting an overly aggressive Data Movement Policy
to compensate.
Be aware of the following caveats related to snapshots and file system cleaning:
l Files in snapshots are not cleaned, even after they have been moved to the retention tier.
Space cannot be reclaimed until the snapshots have been deleted.
l It is recommended that you set the File Age Threshold for snapshots to the minimum of 14
days.
Here are two examples of how to set up a Data Movement Policy.
l You could segregate data with different degrees of change into two different MTrees and set
the File Age Threshold to move data soon after the data stabilizes. Create MTree A for daily
incremental backups and MTree B for weekly fulls. Set the File Age Threshold for MTree A so
that its data is never moved, but set the File Age Threshold for MTree B to 14 days (the
minimum threshold).
l For data that cannot be separated into different MTrees, you could do the following. Suppose
the retention period of daily incremental backups is eight weeks, and the retention period of
weekly fulls is three years. In this case, it would be best to set the File Age Threshold to nine
weeks. If it were set lower, you would be moving daily incremental data that was actually soon
to be deleted.
CLI Equivalent
To set the age threshold:
# archive data-movement policy set age-threshold {days|none} mtrees mtree-
list
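For example, to move files older than 14 days (the minimum threshold) for a hypothetical MTree named /data/col1/mtree-b:
# archive data-movement policy set age-threshold 14 mtrees /data/col1/mtree-b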
The current value for Packing data during Retention Tier data movement can be either Enabled
or Disabled. Consult with a system engineer to change this setting.
l The target system must have enough active tier capacity to hold the data from both the Active
and Archive Tiers on the source system, as data will not be moved to DD Cloud Tier storage on
the target system for at least 14 days.
l Data Domain recommends that any capacity planning include enough Active Tier capacity for a
minimum of 14 days of replicated data.
l All backup jobs and other write activities on the source system must be redirected to the
target system.
l The target system must meet all the same compliance requirements that were met by the
source system.
l The customer must provide all appropriate accounts and credentials for the target and source
Data Domain systems.
Additional considerations:
l Contact Dell EMC Support if immediate data migration to DD Cloud Tier storage is required.
l Customer backup applications may not track this data migration.
l This procedure does not cover Managed File Replication (MFR).
l Licensing - Data Domain systems can use:
n Legacy licensing - Use the license show command
n ELMS licensing - Use the elicense show command
Data Domain systems using legacy licensing can add licenses incrementally. Be aware that not all
newer features are supported with legacy licensing.
Data Domain systems installed with DD OS 6.0 or later, or converted to or upgraded with features
requiring ELMS licensing, use the elicense commands to apply and display licenses. When a new
license key file is applied, the new set of keys entirely replaces all of the old keys.
CAUTION When updating an ELMS license, be sure that you do not remove existing capacity
or features.
About this task
This procedure covers the following uses:
l Customer wants to move data from Archive Tier storage to DD Cloud Tier storage on the
target system.
l Customer wants to move data from Active and Archive Tier storage on the source system to
Active Tier storage on the target system.
l Customer wants to move data from Archive Tier storage on multiple source systems to Active
or DD Cloud Tier storage on the target system.
l Customer wants to re-purpose the source system or its disk enclosures after the migration
operation is complete.
Capacity planning
Before you begin
The target system must have sufficient Active Tier capacity to store the combined Active and
Archive Tiers of the source system.
In addition, the Active Tier of the source system must have enough space to retain all the data
from scheduled backups from the time when data movement to the archive tier is stopped until the
migration from the source system to the target system is complete.
Active Tier:
Pre-Comp Post-Comp Global-Comp Local-Comp Total-Comp
(GiB) (GiB) Factor Factor Factor
(Reduction %)
------------- -------- --------- ----------- ---------- -------------
Written:
Last 7 days 80730.2 37440.7 1.0x 2.2x 2.2x
(53.6)
Last 24 hrs 80730.2 37440.7 1.0x 2.2x 2.2x
(53.6)
------------- -------- --------- ----------- ---------- -------------
Archive Tier:
Pre-Comp Post-Comp Global-Comp Local-Comp Total-Comp
(GiB) (GiB) Factor Factor Factor
…
…
Currently Used:*
Pre-Comp Post-Comp Global-Comp Local-Comp Total-Comp
(GiB) (GiB) Factor Factor Factor
…
…
Reduction % = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100
In this example, the weekly ingest is approximately 37 TB per week, which equates to 5.28
TB per day.
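As a quick check of that conversion (arithmetic only, based on the figures above): 37 TB ÷ 7 days ≈ 5.28 TB per day, so the recommended minimum of 14 days of replicated data noted earlier corresponds to roughly 5.28 × 14 ≈ 74 TB of Active Tier capacity.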
2. On the source system, run the filesys show space command to determine the amount
of free space in the Active Tier.
# filesys show space
Active Tier:
Resource Size GiB Used GiB Avail GiB Use% Cleanable GiB*
---------------- -------- -------- --------- ---- --------------
/data: pre-comp - 69480.4 - - -
/data: post-comp 30352.2 35.5 30316.7 0% 0.0
/ddvar 47.2 9.2 35.6 21% -
/ddvar/core 984.3 2.0 932.3 0% -
---------------- -------- -------- --------- ---- --------------
Cloud Tier
Resource Size GiB Used GiB Avail GiB Use% Cleanable GiB
---------------- -------- -------- --------- ---- -------------
/data: pre-comp - 0.0 - - -
/data: post-comp 0.0 0.0 0.0 0% 0.0
---------------- -------- -------- --------- ---- -------------
Total:
Resource Size GiB Used GiB Avail GiB Use% Cleanable GiB
---------------- -------- -------- --------- ---- -------------
5. Proceed with the rest of the migration steps after sufficient capacity is available in the
Active Tier of the source system.
Licensing scheme: EMC Electronic License Management System (ELMS) node-locked mode
Capacity licenses:
## Feature Shelf Model Capacity Mode Expiration Date
-- ------------------ ----------- ---------- --------- ---------------
1 CAPACITY-ACTIVE ES30 32.74 TiB permanent n/a
2 SSD-CAPACITY n/a 1.45 TiB permanent n/a
3 CLOUDTIER-CAPACITY n/a 218.27 TiB permanent n/a
-- ------------------ ----------- ---------- --------- ---------------
Licensed Active Tier capacity: 32.74 TiB*
* Depending on the hardware platform, usable filesystem capacities may vary.
Feature licenses:
## Feature Count Mode Expiration Date
-- --------------------------- ----- --------------- ---------------
1 DDBOOST 1 permanent n/a
-- --------------------------- ----- --------------- ---------------
License file last modified at : 2018/06/28 06:29:03.
5. Add the replication license by updating the license key obtained from the licensing portal.
Open the license file in a text editor, then copy and paste it into the update prompt followed
by Ctrl + D.
# elicense update
Enter the content of license file and then press Control-D, or press
Control-C to cancel.
6. Verify the replication license is added on the source system.
# elicense show
System locking-id: APM00000000001
Licensing scheme: EMC Electronic License Management System (ELMS) node-locked mode
Capacity licenses:
## Feature Shelf Model Capacity Mode Expiration Date
-- ------------------ ----------- ---------- --------- ---------------
1 CAPACITY-ACTIVE ES30 32.74 TiB permanent n/a
2 SSD-CAPACITY n/a 1.45 TiB permanent n/a
3 CLOUDTIER-CAPACITY n/a 218.27 TiB permanent n/a
-- ------------------ ----------- ---------- --------- ---------------
Licensed Active Tier capacity: 32.74 TiB*
* Depending on the hardware platform, usable filesystem capacities may vary.
Feature licenses:
## Feature Count Mode Expiration Date
-- --------------------------- ----- --------------- ---------------
1 REPLICATION 1 permanent n/a
2 DDBOOST 1 permanent n/a
-- --------------------------- ----- --------------- ---------------
License file last modified at : 2018/06/28 06:29:03.
Procedure
1. Determine the hostname of the source system.
# hostname
The Hostname is: Source.ER.FQDN
2. Determine the hostname of the target system.
# hostname
The Hostname is: Target.DD.FQDN
3. On the source system, set the MTree replication context to the target system.
# replication add source mtree://Source.ER.FQDN/data/col1/large_files_100gb destination
mtree://Target.DD.FQDN/data/col1/large_files_100gb encryption enabled
Encryption enabled for replication context mtree://Target.DD.FQDN/data/col1/
large_files_100gb
Please verify that replication encryption is also enabled for this context on the remote
host.
4. On the target system, set the MTree replication context to the source system.
# replication add source mtree://Source.ER.FQDN/data/col1/large_files_100gb destination
mtree://Target.DD.FQDN/data/col1/large_files_100gb encryption enabled
Encryption enabled for replication context mtree://Target.DD.FQDN/data/col1/
large_files_100gb
Please verify that replication encryption is also enabled for this context on the remote
host.
5. On the source system, initiate the replication operation. This command does not need to be
run on the target system.
Note: The time required for the replication context to initialize depends on the amount
of data present in the source MTree that is being replicated for the first time.
When a replication operation is complete, the output shows a value of zero in the Post-
comp Bytes Remaining column. The value in the Sync'ed-as-of column displays the
most recent time at which the source and target systems were in sync.
2. If replication is still in progress, wait for the operations to complete.
3. Verify the MTree sizes on both the source and target systems match. Run the following
command on both systems.
# mtree list
Name Pre-Comp (GiB) Status
---------------------------- -------------- ------
/data/col1/large_files_100gb 2500.0 RW
---------------------------- -------------- ------
Note: The Archive Tier cannot be disabled. The only way to remove it is to destroy the
file system.
2. Identify the disk enclosures that were attached to the Archive Tier.
# storage show tier archive
Archive tier details:
Disk Disks Count Disk Additional
Group Size Information
------- -------- ----- -------- -----------
dg2 4.1-4.15 15 1.8 TiB
dg3 3.1-3.15 15 1.8 TiB
3. Remove the Archive Tier storage enclosures from the system.
# storage remove enclosures 3
Removing enclosure 3...Enclosure 3 successfully removed.
If the license is not installed, use the elicense update command to install the license.
Enter the command and paste the contents of the license file after this prompt. After
pasting, ensure there is a carriage return, then press Control-D to save. You are
prompted to replace licenses, and after answering yes, the licenses are applied and
displayed.
# elicense update
Enter the content of license file and then press Control-D, or press
Control-C to cancel.
2. Install certificates.
Before you can create a cloud profile, you must install the associated certificates. See
Importing the certificates on page 553 for more information.
For AWS, Virtustream, and Azure public cloud providers, root CA certificates can be
downloaded from https://www.digicert.com/digicert-root-certificates.htm.
l For an AWS or Azure cloud provider, download the Baltimore CyberTrust Root
certificate.
l For Alibaba, download the GlobalSign Root R1 certificate from https://support.globalsign.com/customer/portal/articles/1426602-globalsign-root-certificates.
l For a Virtustream cloud provider, download the DigiCert High Assurance EV Root CA
certificate.
l For ECS, the root certificate authority will vary by customer. Contact your load balancer
provider for details.
Downloaded certificate files have a .crt extension. Use openssl on any Linux or Unix system
where it is installed to convert the file from .crt format to .pem.
$ openssl x509 -inform der -in DigiCertHighAssuranceEVRootCA.crt -out
DigiCertHighAssuranceEVRootCA.pem
$ openssl x509 -inform der -in BaltimoreCyberTrustRoot.crt -out
BaltimoreCyberTrustRoot.pem
# adminaccess certificate import ca application cloud
Enter the certificate and then press Control-D, or press Control-C to
cancel.
3. To configure the Data Domain system for data-movement to the cloud, you must first
enable the “cloud” feature and set the system passphrase if it has not already been set.
# cloud enable
Cloud feature requires that passphrase be set on the system.
Enter new passphrase:
Re-enter new passphrase:
Passphrases matched.
The passphrase is set.
Encryption is recommended on the cloud tier.
Do you want to enable encryption? (yes|no) [yes]:
Encryption feature is enabled on the cloud tier.
Cloud feature is enabled.
4. Configure the cloud profile using the cloud provider credentials. The prompts and variables
vary by provider.
# cloud profile add <profilename>
Note: For security reasons, this command does not display the access/secret keys you
enter.
Select the provider:
Enter provider name (alibabacloud|aws|azure|ecs|google|s3_flexible|
virtustream)
l Alibaba Cloud requires access key, secret key, storage class and region.
l AWS S3 requires access key, secret key, storage class, and region.
l Azure requires account name, whether or not the account is an Azure Government
account, primary key, secondary key, and storage class.
l ECS requires entry of access key, secret key and endpoint.
l Google Cloud Platform requires access key, secret key, and region. (Storage class is
Nearline.)
l S3 Flexible providers require the provider name, access key, secret key, region, endpoint,
and storage class.
l Virtustream requires access key, secret key, storage class, and region.
At the end of each profile addition you are asked if you want to set up a proxy. If you do,
these values are required: proxy hostname, proxy port, proxy username, and proxy
password.
5. Verify the cloud profile configuration:
# cloud profile show
Use the cloud unit list command to list the cloud units.
Connectivity Check:
Checking firewall access: PASSED
Validating certificate PASSED
Account Validation:
Creating temporary profile: PASSED
Creating temporary bucket: PASSED
S3 API Validation:
Cleaning Up:
Deleting temporary bucket: PASSED
Deleting temporary profile: PASSED
12. Configure the file migration policy for this MTree. You can specify multiple MTrees in this
command. The policy can be based on the age threshold or the range.
a. To configure the age-threshold (migrating files older than the specified age to cloud):
# data-movement policy set age-threshold age_in_days to-tier cloud
cloud-unit unitname mtrees mtreename
b. To configure the age-range (migrating only those files that are in the specified age-
range):
# data-movement policy set age-range min-age age_in_days max-age
age_in_days to-tier cloud cloud-unit unitname mtrees mtreename
13. Export the file system, and from the client, mount the file system and ingest data into the
active tier. Change the modification date on the ingested files such that they now qualify for
data migration. (Set the date to older than the age-threshold value specified when
configuring the data-movement policy.)
14. Initiate file migration of the aged files. Again, you can specify multiple MTrees with this
command.
# data-movement start mtrees mtreename
15. Verify that file migration worked and the files are now in the cloud tier:
# filesys report generate file-location path all
16. Once you have migrated a file to the cloud tier, you cannot directly read from the file
(attempting to do so results in an error). The file can only be recalled back to the active tier.
To recall a file to the active tier:
# data-movement recall path pathname
DD Retention Lock Governance Edition is supported for on-premises, cloud-based, and DD3300
DD VE instances. DD Retention Lock Compliance Edition is not supported for on-premises, cloud-
based, or DD3300 DD VE instances.
The topics that follow provide additional information on DD Retention Lock.
Files that are written to shares or exports that are not committed to be retained (even if DD
Retention Lock Governance or Compliance is enabled on the MTree containing the files) can be
modified or deleted at any time.
Retention locking prevents any modification or deletion of files under retention from occurring
directly from CIFS shares or NFS exports during the retention period specified by a client-side
atime update command. Some archive applications and backup applications can issue this
command when appropriately configured. Applications or utilities that do not issue this command
cannot lock files using DD Retention Lock.
Retention-locked files are always protected from modification and premature deletion, even if
retention locking is subsequently disabled or if the retention-lock license is no longer valid.
You cannot rename or delete non-empty folders or directories within an MTree that is retention-
lock enabled. However, you can rename or delete empty folders or directories and create new
ones.
The retention period of a retention-locked file can be extended (but not reduced) by updating the
file’s atime.
For both DD Retention Lock Governance and Compliance, once the retention period for a file
expires, the file can be deleted using a client-side command, script, or application. However, the
file cannot be modified even after the retention period for the file expires. The Data Domain
system never automatically deletes a file when its retention period expires.
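Because retention periods are controlled through the file's atime, a retention period can be extended from a client. The following is a sketch only; the NFS mount point, file path, and date are hypothetical, and the new atime must fall within the MTree's minimum and maximum retention periods:
$ touch -a -t 202612312359 /mnt/dd/data/col1/mtree1/file1.txt
This sets the file's atime, and therefore its retention expiration, to 23:59 on December 31, 2026.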
automatic retention lock settings apply to new files created on the MTree after the retention lock
settings are configured. Existing files are not impacted.
Set the automatic retention period to ensure that every new file created on the MTree will be
automatically locked and retained for the specified amount of time.
Set the automatic lock delay on the MTree to allow a period of time where a new file can be
modified before it gets locked.
Automatic retention lock is subject to the following limitations:
l Retention lock must be re-applied manually to any files reverted when automatic retention lock
is in use.
l MTree replication of an MTree with automatic retention lock enabled to a system with an
earlier version of DD OS that does not support automatic retention lock, results in the locked
files replicating to the target system as regular files.
l While files are being ingested with automatic retention lock enabled, the mtree retention-lock
report generate command may incorrectly report those files as locked, and may also
report an incorrect cooling-off period.
If client-side scripts are used to retention-lock backup files or backup images, and if a backup
application (Veritas NetBackup, for example) is also used on the system via DD Boost, be
aware that the backup application may not share the context of the client-side scripts. Thus,
when a backup application attempts to expire or delete files that were retention locked via the
client-side scripts, space is not released on the Data Domain system.
Data Domain recommends that administrators change their retention period policy to align with
the retention lock time. This applies to many of the backup applications that are integrated
with DD Boost, including Veritas NetBackup, Veritas Backup Exec, and NetWorker.
Setting a retention lock during data ingest to a DD Boost file in DSP mode is not allowed; the client setting the retention lock receives an error. Set the retention lock after the data ingest is complete.
Setting a retention lock during data ingest to a DD Boost file in OST mode, or to an NFS file, is also not allowed; the client writing the data receives an error as soon as the retention lock is set. The partial file written before the retention lock is set is committed to disk as a WORM file.
d. Click Add.
2. Select an MTree for retention locking.
a. Select Data Management > MTree.
b. Select the MTree you want to use for retention locking. You can also create an empty
MTree and add files to it later.
3. Click the MTree Summary tab to display information for the selected MTree.
4. Scroll down to the Retention Lock area and click Edit to the right of Retention Lock.
5. Enable DD Retention Lock Governance on the MTree and change the default minimum and
maximum retention lock periods for the MTree, if required.
Perform the following actions in the Modify Retention Lock dialog box:
Note: To check retention lock configuration settings for any MTree, select the MTree in
the Navigation Panel, then click the Summary tab.
2. Set up one or more security officer user accounts according to Role-Based Access Control (RBAC) rules.
a. In the system administrator role, add a security officer account.
user add user role security
b. Enable the security officer authorization.
authorization policy set security-officer enabled
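For example, assuming a hypothetical security officer account named sec1, the two commands might look like this:
# user add sec1 role security
# authorization policy set security-officer enabled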
3. Configure and enable the system to use DD Retention Lock Compliance.
Note: Enabling DD Retention Lock Compliance enforces many restrictions on low-level
access to system functions used during troubleshooting. Once enabled, the only way to
disable DD Retention Lock Compliance is to initialize and reload the system, which
results in destroying all data on the system.
a. Configure the system to use DD Retention Lock Compliance.
system retention-lock compliance configure
b. After the restart process is complete, enable DD Retention Lock Compliance on the
system.
system retention-lock compliance enable
4. Enable compliance on an MTree that will contain retention-locked files.
mtree retention-lock enable mode compliance mtree mtree-path
Note: Compliance cannot be enabled on /backup or pool MTrees.
5. To change the default minimum and maximum retention lock periods for a compliance-
enabled MTree, type the following commands with security officer authorization.
l mtree retention-lock set min-retention-period period mtree mtree-path
l mtree retention-lock set max-retention-period period mtree mtree-path
Note: The retention period is specified in the format [number] [unit]. For example: 1 min,
1 hr, 1 day, 1 mo, or 1 year. Specifying a minimum retention period of less than 12 hours,
or a maximum retention period longer than 70 years, results in an error.
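For example, for a hypothetical compliance-enabled MTree named /data/col1/compmt1, the following commands (run with security officer authorization) might set a one-day minimum and a five-year maximum. The MTree name and periods are placeholders; the periods follow the [number] [unit] format described in the note above.
# mtree retention-lock set min-retention-period 1 day mtree /data/col1/compmt1
# mtree retention-lock set max-retention-period 5 year mtree /data/col1/compmt1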
6. To change the automatic retention period and automatic lock delay for a compliance-
enabled MTree, type the following commands with security officer authorization.
l mtree retention-lock set automatic-retention-period period mtree
mtree-path
Note: The automatic retention period is specified in the format [number] [unit]. For
example: 1 min, 1 hr, 1 day, 1 mo, or 1 year. The value must be between the minimum
and maximum retention periods.
l mtree retention-lock set automatic-lock-delay time mtree mtree-path
Note: The automatic lock delay time is specified in the format [number] [unit]. For
example: 5 min, 2 hr, or 1 day. The value must be between five minutes and seven
days. The default is 120 minutes. If a file is modified before the automatic lock delay
has elapsed, the lock delay time starts over when the file modification is complete.
For example, if the lock delay is 120 minutes and the file is modified after 60 minutes,
the lock delay will start again at 120 minutes after the file is modified.
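For example, on the same hypothetical MTree /data/col1/compmt1, the following commands might retain new files for one year and allow a two-hour window for modification before locking. The values are placeholders and must fall within the MTree's minimum and maximum retention periods.
# mtree retention-lock set automatic-retention-period 1 year mtree /data/col1/compmt1
# mtree retention-lock set automatic-lock-delay 2 hr mtree /data/col1/compmt1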
Repeat steps 4 through 6 to enable additional MTrees.
http://sourceforge.net/projects/unxutils/files/latest
l For Windows Server 2008, Windows Vista Enterprise, Windows Vista Enterprise 64-bit edition,
Windows Vista SP1, Windows Vista Ultimate, and Windows Vista Ultimate 64-bit edition:
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23754
l For Windows Server 2003 SP1 and Windows Server 2003 R2:
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=20983
Note: The touch command for Windows may have a different format than the Linux examples
in this chapter.
Follow the installation instructions provided and set the search path as needed on the client
machine.
Client Access to Data Domain System Files
After an MTree is enabled for DD Retention Lock Governance or Compliance, you can:
l Create a CIFS share based on the MTree. This CIFS share can be used on a client machine.
l Create an NFS mount for the MTree and access its files from the NFS mount point on a client
machine.
Note: The commands listed in this section are to be used only on the client. They cannot be
issued through the DD System Manager or CLI. Command syntax may vary slightly, depending
on the utility you are using.
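As a minimal sketch, a Linux NFS client might mount a retention-lock-enabled MTree as follows; the Data Domain host name, MTree path, and mount point are placeholders:
ClientOS# mount -t nfs ddsystem1:/data/col1/mtree1 /mnt/ddr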
The topics that follow describe how to manage client-side retention lock file control.
Note: Some client machines using NFS, but running a legacy OS, cannot set a retention time later than 2038. The NFS protocol does not impose the 2038 limit and allows specifying times until 2106. Further, DD OS does not impose the 2038 limit.
Errors are permission-denied errors (referred to as EACCES, a standard POSIX error). These are returned to the script or archive application setting the atime.
Note: A file must be completely written to the Data Domain system before it is committed to
be a retention-locked file.
The following command can be used on clients to set the atime:
touch -a -t [atime] [filename]
The format of atime is:
[[CC]YY]MMDDhhmm[.ss]
For example, suppose the current date and time is 1 p.m. on January 18, 2012 (that is,
201201181300), and the minimum retention period is 12 hours. Adding the minimum retention
period of 12 hours to that date and time results in a value of 201201190100. Therefore, if the atime
for a file is set to a value greater than 201201190100, that file becomes retention locked.
The following command:
ClientOS# touch -a -t 201412312230 SavedData.dat
will lock file SavedData.dat until 10:30 p.m. December 31, 2014.
For example, changing the atime from 201412312230 to 202012121230 using the following
command:
ClientOS# touch -a -t 202012121230 SavedData.dat
will cause the file to be locked until 12:30 p.m. December 12, 2020.
Note: Some client machines using NFS, but running a very old OS, cannot set a retention time later than 2038. The NFS protocol does not impose the 2038 limit and allows specifying times until 2106. Further, DD OS does not impose the 2038 limit.
Errors are permission-denied errors (referred to as EACCES, a standard POSIX error). These are returned to the script or archive application setting the atime.
If the atime of SavedData.dat is 202012121230 (12:30 p.m. December 12, 2020) and the touch
command specifies an earlier atime, 202012111230 (12:30 p.m. December 11, 2020), the touch
command fails, indicating that SavedData.dat is retention-locked.
Note: The --time=atime option is not supported in all versions of Unix.
Note: If the retention period of the retention-locked file has not expired, the delete operation
results in a permission-denied error.
Privileged delete
For DD Retention Lock Governance (only), you can delete retention-locked files using the following two-step process.
Procedure
1. Use the mtree retention-lock revert path command to revert the retention locked
file.
2. Delete the file on the client system using the rm filename command.
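For example, assuming a hypothetical governance retention-locked file SavedData.dat in the MTree /data/col1/mtree1, the sequence might look like this (revert the lock on the Data Domain system, then delete the file from the client directory where the share or export is mounted):
# mtree retention-lock revert path /data/col1/mtree1/SavedData.dat
ClientOS# rm SavedData.dat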
mtime
mtime is the last-modified time of a file. It changes only when the contents of the file change. So,
the mtime of a retention-locked file cannot change.
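On a Linux client, the stat command is one way to compare a file's atime (which reflects the retention expiration date) with its mtime; the file name is a placeholder:
ClientOS# stat SavedData.dat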
Replication
Collection replication, MTree replication, and directory replication replicate the locked or unlocked
state of files.
Files that are governance retention locked on the source are governance retention locked on the
destination and have the same level of protection. For replication, the source system must have a
DD Retention Lock Governance license installed—a license is not required on the destination
system.
Replication is supported between systems that are:
l Running the same major DD OS version (for example, both systems are running DD OS
5.5.x.x).
l Running DD OS versions within the next two consecutive higher or lower major releases (for
example, 5.3.x.x to 5.5.x.x or 5.5.x.x to 5.3.x.x). Cross-release replication is supported only for
directory and MTree replication.
Note: MTree replication is not supported for DD OS 5.0 and earlier.
Be aware that:
l Collection replication and MTree replication replicate the minimum and maximum retention
periods configured on MTrees to the destination system.
l Directory replication does not replicate the minimum and maximum retention periods to the
destination system.
The procedure for configuring and using collection, MTree, and directory replication is the same as
for Data Domain systems that do not have a DD Retention Lock Governance license.
Replication Resync
The replication resync destination command tries to bring the destination into sync with the
source when the MTree or directory replication context is broken between destination and source
systems. This command cannot be used with collection replication. Note that:
l If files are migrated to the cloud tier before the context is broken, the MTree replication resync
overwrites all the data on the destination, so you will need to migrate the files to the cloud tier
again.
l If the destination directory has DD Retention Lock enabled, but the source directory does not
have DD Retention Lock enabled, then a resync of a directory replication will fail.
l With MTree replication, resync will fail if the source MTree does not have retention lock enabled and the destination MTree has retention lock enabled.
l With MTree replication, resync will fail if the source and destination MTrees are retention lock enabled but the propagate retention lock option is set to FALSE.
Fastcopy
When the filesys fastcopy [retention-lock] source src destination dest
command is run on a system with a DD Retention Lock Governance enabled MTree, the command
preserves the retention lock attribute during the fastcopy operation.
Note: If the destination MTree is not retention lock enabled, the retention-lock file attribute is
not preserved.
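For example, to copy data from a hypothetical governance-enabled MTree to another retention-lock-enabled MTree while preserving the retention lock attribute (both paths are placeholders):
# filesys fastcopy retention-lock source /data/col1/mtree1 destination /data/col1/mtree2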
Filesys destroy
Effects of the filesys destroy command when it is run on a system with a DD Retention Lock
Governance enabled MTree.
l All data is destroyed, including retention-locked data.
l All filesys options are returned to their defaults. This means that retention locking is
disabled and the minimum and maximum retention periods are set back to their default values
on the newly created file system.
Note: This command is not allowed if DD Retention Lock Compliance is enabled on the system.
MTree delete
When the mtree delete mtree-path command attempts to delete a DD Retention Lock
Governance enabled (or previously enabled) MTree that currently contains data, the command
returns an error.
Note: The behavior of mtree delete is similar to deleting a directory: an MTree with retention lock enabled (or previously enabled) can be deleted only if the MTree is empty.
Replication
An MTree enabled with DD Retention Lock Compliance can be replicated via MTree and collection
replication only. Directory replication is not supported.
MTree and collection replication replicate the locked or unlocked state of files. Files that are
compliance retention locked on the source are compliance retention locked on the destination and
have the same level of protection. Minimum and maximum retention periods configured on MTrees
are replicated to the destination system.
To perform collection replication, the same security officer user must be present on both the
source and destination systems before starting replication to the destination system and afterward
for the lifetime of the source/replica pair.
Replication Resync
The replication resync destination command can be used with MTree replication, but not
with collection replication.
l If the destination MTree contains retention-locked files that do not exist on the source, then
resync will fail.
l Both source and destination MTrees must be enabled for DD Retention Lock Compliance, or
resync will fail.
Replication procedures
The topics in this section describe MTree and collection replication procedures supported for DD
Retention Lock Compliance.
Note: For full descriptions of the commands referenced in the following topics, see the Data
Domain Operating System Command Reference Guide.
2. Add the DD Retention Lock Compliance license on the system, if it is not present.
a. First, check whether the license is already installed.
license show
b. If the RETENTION-LOCK-COMPLIANCE feature is not displayed, install the license.
license add license-key
Note: License keys are case-insensitive. Include the hyphens when typing keys.
3. Set up one or more security officer user accounts according to Role-Based Access Control (RBAC) rules.
a. In the system administrator role, add a security officer account.
user add user role security
b. Enable the security officer authorization.
authorization policy set security-officer enabled
4. Configure and enable the system to use DD Retention Lock Compliance.
Note: Enabling DD Retention Lock Compliance enforces many restrictions on low-level
access to system functions used during troubleshooting. Once enabled, the only way to
disable DD Retention Lock Compliance is to initialize and reload the system, which
results in destroying all data on the system.
a. Configure the system to use DD Retention Lock Compliance.
system retention-lock compliance configure
b. After the restart process is complete, enable DD Retention Lock Compliance on the
system.
system retention-lock compliance enable
5. Create a replication context.
replication add source mtree://source-system-name/data/col1/mtree-
name destination mtree://destination-system-name/data/col1/mtree-
name
6. Perform the following steps on the source system only.
7. Create a replication context.
replication add source mtree://source-system-name/data/col1/mtree-
name destination mtree://destination-system-name/data/col1/mtree-
name
8. Initialize the replication context.
replication initialize mtree://destination-system-name/data/col1/
mtree-name
9. Confirm that replication is complete.
replication status mtree://destination-system-name/data/col1/mtree-
name detailed
This command reports 0 pre-compressed bytes remaining when replication is finished.
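For example, assuming hypothetical systems named dd-src and dd-dst and an MTree named mtree1, the add, initialize, and status commands from this procedure might look like this:
# replication add source mtree://dd-src/data/col1/mtree1 destination mtree://dd-dst/data/col1/mtree1
# replication initialize mtree://dd-dst/data/col1/mtree1
# replication status mtree://dd-dst/data/col1/mtree1 detailed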
3. Set up one or more security officer user accounts according to Role-Based Access Control (RBAC) rules.
a. In the system administrator role, add a security officer account.
user add user role security
b. Enable the security officer authorization.
authorization policy set security-officer enabled
4. Configure and enable the system to use DD Retention Lock Compliance.
Note: Enabling DD Retention Lock Compliance enforces many restrictions on low-level
access to system functions used during troubleshooting. Once enabled, the only way to
disable DD Retention Lock Compliance is to initialize and reload the system, which
results in destroying all data on the system.
a. Configure the system to use DD Retention Lock Compliance.
system retention-lock compliance configure
b. After the restart process is complete, enable DD Retention Lock Compliance on the
system.
system retention-lock compliance enable
5. Create a replication context.
replication add source mtree://source-system-name/data/col1/mtree-
name destination mtree://destination-system-name/data/col1/mtree-
name
6. Perform the following steps on the source system only.
7. Create a replication context for each destination system.
replication add source mtree://source-system-name/data/col1/mtree-
name destination mtree://destination-system-name/data/col1/mtree-
name
d. Click Add.
5. Break the current MTree context on the replication pair.
replication break mtree://destination-system-name/data/col1/mtree-
name
6. Create the new replication context.
replication add source mtree://source-system-name/data/col1/mtree-
name destination mtree://destination-system-name/data/col1/mtree-
name
7. Perform the following steps on the source system only.
8. Select an MTree for retention locking.
Click the Data Management > MTree tab, then the checkbox for the MTree you want to
use for retention locking. (You can also create an empty MTree and add files to it later.)
9. Click the MTree Summary tab to display information for the selected MTree.
10. Lock files in the compliance-enabled MTree.
11. Ensure that both source and destination (replica) MTrees are the same.
replication resync mtree://destination-system-name/data/col1/mtree-
name
d. Add a replication context for each DD Retention Lock Compliance enabled MTree.
replication add source mtree://source-system-name/data/col1/
mtree-name destination mtree://destination-system-name/data/col1/
mtree-name
Note: Source and destination MTree names must be the same.
d. Click Add.
5. Create the replication context.
replication add source col://source-system-name destination col://
destination-system-name
6. Until instructed otherwise, perform the following steps on the destination system only.
7. Destroy the file system.
filesys destroy
8. Until instructed otherwise, perform the following steps on the destination system.
9. Configure and enable the system to use DD Retention Lock Compliance.
system retention-lock compliance configure
(The system automatically reboots and executes the system retention-lock
compliance enable command.)
10. Enable the replication context.
replication enable col://destination-system-name
Fastcopy
When the filesys fastcopy [retention-lock] source src destination dest
command is run on a system with a DD Retention Lock Compliance enabled MTree, the command
preserves the retention lock attribute during the fastcopy operation.
Note: If the destination MTree is not retention lock enabled, the retention-lock file attribute is
not preserved.
CLI usage
Considerations for a Data Domain system with DD Retention Lock Compliance.
l Commands that break compliance cannot be run. The following commands are disallowed:
n filesys archive unit del archive-unit
n filesys destroy
n mtree delete mtree-path
n mtree retention-lock reset {min-retention-period period | max-
retention-period period} mtree mtree-path
n mtree retention-lock disable mtree mtree-path
n mtree retention-lock revert
n user reset
l The following command requires security officer authorization if the license being deleted is for
DD Retention Lock Compliance:
n license del license-feature [license-feature ...] | license-code
[license-code ...]
l The following commands require security officer authorization if DD Retention Lock
Compliance is enabled on an MTree specified in the command:
n mtree retention-lock set {min-retention-period period | max-
retention-period period} mtree mtree-path
n mtree rename mtree-path new-mtree-path
l The following commands require security officer authorization if DD Retention Lock
Compliance is enabled on the system:
Note: These commands must be run in interactive mode.
System clock
DD Retention Lock Compliance implements an internal security clock to prevent malicious
tampering with the system clock.
The security clock closely monitors and records the system clock. If there is an accumulated two-
week skew within a year between the security clock and the system clock, the file system is
disabled and can be resumed only by a security officer.
Finding the System Clock Skew
You can run the DD OS command system retention-lock compliance status (security
officer authorization required) to get system and security clock information, including the last
recorded security clock value, and the accumulated system clock variance. This value is updated
every 10 minutes.
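For example, a security officer can check the accumulated skew at any time by running:
# system retention-lock compliance status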
filesys enable
6. At the prompt, continue to the enabling procedure.
7. A security officer prompt appears. Complete the security officer authorization to start the
file system. The security clock will automatically be updated to the current system date.
DD encryption overview
Data encryption protects user data if the Data Domain system is stolen or if the physical storage
media is lost during transit, and it eliminates accidental exposure of a failed drive if it is replaced.
When data enters the Data Domain system using any of the supported protocols (NFS, CIFS, DD
VTL, DD Boost, and NDMP Tape Server), the stream is segmented, fingerprinted, and de-
duplicated (global compression). It is then grouped into multi-segment compression regions, locally
compressed, and encrypted before being stored to disk.
Once enabled, the Encryption at Rest feature encrypts all data entering the Data Domain system.
You cannot enable encryption at a more granular level.
CAUTION Data that has been stored before the DD Encryption feature is enabled does not
automatically get encrypted. To protect all of the data on the system, be sure to enable the
option to encrypt existing data when you configure encryption.
Additional Notes:
As of DD OS 5.5.1.0, Encryption of Data at Rest is supported for DD Extended Retention-enabled
systems with a single retention unit. As of 5.5.1.0, DD Extended Retention supports only a single
retention unit, so systems set up during, or after, 5.5.1.0 will have no problem complying with this
restriction. However, systems set up prior to 5.5.1.0 may have more than one retention unit, but
they will not work with Encryption of Data at Rest until all but one retention unit has been
removed, or data has been moved or migrated to one retention unit.
The filesys encryption apply-changes command applies any encryption configuration
changes to all data present in the file system during the next cleaning cycle. For more information
about this command, see the Data Domain Operating System Command Reference Guide.
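For example, after changing the encryption configuration, an administrator might apply the changes to existing data during the next cleaning cycle by running:
# filesys encryption apply-changes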
Encryption of Data at Rest supports all of the currently supported backup applications described in
the Backup Compatibility Guides available through Online Support at http://support.emc.com.
Data Domain Replicator can be used with encryption, enabling encrypted data to be replicated
using collection, directory, MTree, or application-specific managed file replication with the various
topologies. Each replication form works uniquely with encryption and offers the same level of
security. For more information, see the section on using encryption of data at rest with replication.
Files locked using Data Domain Retention Lock can be stored, encrypted, and replicated.
The autosupport feature includes information about the state of encryption on the Data Domain
system:
l Whether or not encryption is enabled
l The Key Manager in effect and which keys are used
l The encryption algorithm that is configured
l The state of the file system
Configuring encryption
This procedure includes configuring a key manager.
If the Encryption Status on the Data Management > File System > Encryption tab shows Not
Configured, click Configure to set up encryption on the Data Domain system.
Note: The system passphrase must be set in order to enable encryption.
l Algorithm
n Select an encryption algorithm from the drop-down list or accept the default AES 256-bit
(CBC).
The AES 256-bit Galois/Counter Mode (GCM) is the most secure algorithm but it is
significantly slower than the Cipher Block Chaining (CBC) mode.
n Determine what data is to be encrypted: existing and new or only new. Existing data will be
encrypted during the first cleaning cycle after the file system is restarted. Encryption of
existing data can take longer than a standard file system cleaning operation.
l Key Manager (select one of the three)
n Embedded Key Manager
By default, the Data Domain Embedded Key Manager is in effect after you restart the file
system unless you configure the RSA DPM Key Manager.
You can enable or disable key rotation. If enabled, type a rotation interval between 1-12
months.
n RSA DPM Key Manager
n SafeNet KeySecure Key Manager
Note: See the section about key management for an explanation about how the Embedded
Key Manager, the RSA DPM Key Manager, and SafeNet KeySecure Key Manager work.
The Summary shows the selected configuration values. Review them for correctness. To change a
value, click Back to browse to the page where it was entered and modify it.
A system restart is necessary to enable encryption. To apply the new configuration, select the
option to restart the file system.
Note: Applications may experience an interruption while the file system is restarted.
key class. The Embedded Key Manager key rotation is managed on the Data Domain system. The
Key Manager key rotation is managed on the external Key Manager server.
KeySecure
KeySecure 8.5 and 8.9 are supported. KeySecure is a KMIP-compliant key manager product from SafeNet Inc./Gemalto. To use the KMIP key manager, configure the key manager and the Data Domain system or DD VE to trust each other, and pre-create keys on the key manager. A Data Domain system retrieves these keys and their states from
KeySecure after establishing a secure TLS connection. See the Data Domain Operating System and
Gemalto KeySecure Integration Guide for more information on how to create keys and use them on a
Data Domain system.
Expired keys become read only for the existing data on the Data Domain system, and a new active
key is applied to all new data that is ingested. When a key is compromised, the existing data is re-
encrypted using the new encryption key after a file system cleaning is run. If the maximum number
of keys is reached, unused keys must be deleted to make room for new keys.
To view information about the encryption keys that are on Data Domain system, open the DD
System Manager and go to the Data Management > File System > Encryption tab. Keys are
listed by ID number in the Encryption Keys section of the Encryption tab. The following
information is given for each key: when a key was created, how long it is valid, its type (RSA DPM
or Data Domain), its state (see DPM Encryption Key States Supported by Data Domain), and its
post-compression size. If the system is licensed for Extended Retention, the following fields are
also displayed:
Active Size (post comp)
The amount of physical space on the active tier encrypted with the key.
Click on a Key MUID and the system displays the following information for the key in the Key
Details dialog: Tier/Unit (example: Active, Retention-unit-2), creation date, valid until date, state
(see DPM Encryption Key States Supported by Data Domain), and post compression size. Click
Close to close the dialog.
State: Compromised
Definition: The key can only decrypt. After all of the data encrypted with the compromised key is re-encrypted, the state changes to Destroyed Compromised. The keys are re-encrypted when a file system cleaning is run. You can delete a Destroyed Compromised key, if necessary.
Procedure
1. Using the DD System Manager, select the Data Domain system you are working with in the
Navigation panel.
Note: Always perform DD System Manager functions on the system you have selected
in the Navigation panel.
Deleting a key
You can delete Key Manager keys that are in the Destroyed or Compromised-Destroyed states. However, you only need to delete a key when the number of keys has reached the maximum limit of 254 keys. This procedure requires security officer credentials.
About this task
Note: To reach the Destroyed state, the Destroying a Key procedure (for either the Embedded
Key Manager or the RSA DPM Key Manager) must be performed on the key and a system
cleaning must be run.
Procedure
1. Select Data Management > File System > Encryption.
2. In the Encryption Keys section, select the key or keys in the list to be deleted.
3. Click Delete....
The system displays the key to be deleted, and the tier and state for the key.
4. Type your security officer user name and password.
5. Confirm that you want to delete the key or keys by clicking Delete.
Using DD System Manager to set up and manage the KeySecure Key Manager
This section describes how to use Data Domain System Manager (DD SM) to manage the
KeySecure Key Manager.
Procedure
1. Select Data Management > File System > DD Encryption.
2. In the Key Management section, click Configure. The Change Key Manager dialog box
opens.
3. Enter your security officer user name and password.
4. Select KeySecure Key Manager from the Key Manager Type drop down menu. The
Change Key Manager information appears.
5. Set the key rotation policy:
Note: The rotation policy is specified in weeks and months. The minimum key rotation
policy increment is one week, and the maximum key rotation policy increment is 52
weeks (or 12 months).
a. Enable the Key Rotation policy. Set the Enable Key rotation policy button to enable.
b. Enter the appropriate dates in the Key rotation schedule field.
c. Select the appropriate number of weeks or months from the Weeks or Months drop
down menu.
d. Click OK.
e. Click Restart the filesystem now if you want to restart the file system so that the changes take effect immediately.
Results
The key rotation policy is set or changed.
Using the Data Domain CLI to manage the KeySecure Key Manager
This section describes how to use the CLI to manage the KeySecure Key Manager.
Results
A new active key is created.
For example:
Results
The state of an existing key is modified.
2. Set a key rotation policy for the first time. In our example, we will set the rotation policy to
three weeks:
For example:
3. Subsequently, run this command if you choose to change the existing key rotation policy. In
our example, we will change the rotation policy from three weeks to four months:
Note: Log in to the Data Domain system using the security role (where the username is sec and the password is the <security officer password>).
For example:
4. Display the current key rotation policy, or verify that the policy is set correctly:
Status: Online
Key-class: <key-class>
KMIP-user: <KMIP username>
Key rotation period: 2 months
Last key rotation date: 03:14:17 03/19 2018
Next key rotation date: 01:01:00 05/17 2018
Results
The key rotation policy is set or changed.
Note: Multiple Data Domain systems can share the same key class. For more information
about key classes, see the section about RSA DPM key classes.
3. Create an identity using the Data Domain system’s host certificate as its identity certificate.
The identity and the key class have to be in the same identity group.
4. Import the certificates. See the section about importing certificates for more information.
3. Import the CA certificate, for example, ca.pem, from your desktop to DD1 via SSH by
entering:
# ssh sysadmin@DD1 adminaccess certificate import ca < C:\ca.pem
Note: By default, fips-mode is enabled. If the PKCS #12 client credential is encrypted with an algorithm that is not FIPS 140-2 approved, such as RC2, then you must disable fips-mode. See the Data Domain Operating System Command Reference Guide for information about disabling fips-mode.
3. Log into the DD System Manager and select the Data Domain system you are working with
in the Navigation panel.
Note: Always perform DD System Manager functions on the system you have selected
in the Navigation panel.
4. Click the Data Management > File System > Encryption tab.
5. Follow the instructions in the section regarding configuring encryption and select the DPM
Key Manager. If encryption has already been set up, follow the instructions in the section
regarding changing key managers after setup.
13. Keys should be automatically retrieved from the KeySecure key manager and appear in the local key table.
Sample output of the local key table from filesys encryption keys show:
The current active key is used to encrypt any data being ingested.
14. Sync the key states.
a. On the KeySecure web interface, create a new active key as previously described.
b. On the KeySecure web interface, deactivate the old key by clicking the key and going to the Life Cycle tab. Click Edit State. Set the Cryptographic State to Deactivated. Click Save.
15. On the Data Domain system, sync the local key table by running the filesys
encryption keys sync command.
Sample output of the local key table from filesys encryption keys show:
Note: Keys can be marked as versioned keys. When second and third versions of a specific key are generated, KMIP queries currently do not pick up these keys, which may be an issue if that key is being used by a Data Domain system or DD VE.
Deleting certificates
Select a certificate with the correct fingerprint.
Procedure
1. Select a certificate to delete.
2. Click Delete.
The system displays a Delete Certificate dialog with the fingerprint of the certificate to be
deleted.
3. Click OK.
3. In the Security Officer Credentials area, enter the user name and password of a security
officer.
4. Select one of the following:
l Select Apply to existing data and click OK. Decryption of existing data will occur during
the first cleaning cycle after the file system is restarted.
l Select Restart the file system now and click OK. DD Encryption will be disabled after
the file system is restarted.
2. Disable the file system by clicking Disabled in the File System status area.
3. Use the procedure to lock or unlock the file system.
3. Click OK.
This procedure re-encrypts the encryption keys with the new passphrase. This process
destroys the cached copy of the current passphrase (both in-memory and on-disk).
Note: Changing the passphrase requires two-user authentication to protect against the possibility of a rogue employee shredding the data.
CAUTION Be sure to safeguard the passphrase. If the passphrase is lost, you will never be able to unlock the file system and access the data. The data will be irrevocably lost.
3. Select an encryption algorithm from the drop-down list or accept the default AES 256-bit
(CBC).
The AES 256-bit Galois/Counter Mode (GCM) is the most secure algorithm but it is
significantly slower than the Cipher Block Chaining (CBC) mode.
Note: To reset the algorithm to the default AES 256-bit (CBC), click Reset to default.
Note: Encryption of existing data can take longer than a standard file system clean
operation.
l To encrypt only new data, select Restart file system now and click OK.