NetBackup™ for Hadoop Administrator's Guide
Release 10.0
Last updated: 2022-02-27
Legal Notice
Copyright © 2022 Veritas Technologies LLC. All rights reserved.
Veritas, the Veritas Logo, and NetBackup are trademarks or registered trademarks of Veritas
Technologies LLC or its affiliates in the U.S. and other countries. Other names may be
trademarks of their respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Worldwide (except Japan) [email protected]
Japan [email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Veritas
website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
[email protected]
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
Veritas Services and Operations Readiness Tools (SORT):
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Chapter 1
Introduction
This chapter includes the following topics:
■ Limitations
[Figure: Hadoop backup architecture. The Hadoop plug-in is deployed on all the backup hosts. The master server runs a BigData policy with Application_Type=hadoop; the backup hosts connect to the NameNode and the DataNodes (DataNode 1 through DataNode n) of the Hadoop cluster, and the media server moves data to storage in parallel streams.]
Note: All the directories specified in the Hadoop backup selection must be
snapshot-enabled before the backup.
[Figure: Hadoop backup workflow on a snapshot-enabled Hadoop cluster. (1) The backup job is triggered. (2) A discovery job starts. (3) Discovery of the workload for backup. (4) A workload discovery file is created. (5) Workload distribution files (n) are created for the backup hosts. (6) Child jobs run on each backup host. (7) Data is backed up in parallel streams to storage.]
The backup workflow consists of the following steps:
1. The backup job is triggered from the master server.
2. A discovery job starts on the first backup host.
3. During discovery, the first backup host connects with the NameNode and
performs a discovery to get details of data that needs to be backed up.
4. A workload discovery file is created on the backup host. The workload discovery
file contains the details of the data that needs to be backed up from the different
DataNodes.
5. The backup host uses the workload discovery file and decides how the workload
is distributed amongst the backup hosts. Workload distribution files are created
for each backup host.
6. Individual child jobs are executed for each backup host. As specified in the
workload distribution files, data is backed up.
7. Data blocks are streamed simultaneously from different DataNodes to multiple
backup hosts.
The compound backup job is not complete until all the child jobs are complete.
After the child jobs are complete, NetBackup cleans up all the snapshots from the
NameNode. Only after the cleanup activity is complete is the compound backup
job marked complete.
See “About backing up a Hadoop cluster” on page 45.
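For example, you can watch the compound job and its child jobs from the command line with the bpdbjobs command on the master server (a minimal sketch; output columns vary by release):
/usr/openv/netbackup/bin/admincmd/bpdbjobs -summary
/usr/openv/netbackup/bin/admincmd/bpdbjobs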
[Figure: Hadoop restore workflow on a snapshot-enabled Hadoop cluster. (1) The restore job is triggered from the master server. (2) The backup host connects with the NameNode. (3) The restore starts. (4) Objects are restored on the associated DataNodes from storage.]
Terminology Definition

Compound job
A backup job for Hadoop data is a compound job.
■ The backup job runs a discovery job for getting information of the data to be backed up.
■ Child jobs are created for each backup host that performs the actual data transfer.
■ After the backup is complete, the job cleans up the snapshots on the NameNode and is then marked complete.

Discovery job
When a backup job is executed, first a discovery job is created. The discovery job communicates with the NameNode and gathers information about the data blocks that need to be backed up and the associated DataNodes. At the end of the discovery, the job populates a workload discovery file that NetBackup then uses to distribute the workload amongst the backup hosts.

Child job
For backup, a separate child job is created for each backup host to transfer data to the storage media. A child job can transfer data blocks from multiple DataNodes.

Workload discovery file
During discovery, when the backup host communicates with the NameNode, a workload discovery file is created. The file contains information about the data blocks to be backed up and the associated DataNodes.

Parallel streams
The NetBackup parallel streaming framework allows data blocks from multiple DataNodes to be backed up using multiple backup hosts simultaneously.

Backup host
The backup host acts as a proxy client. All the backup and restore operations are executed through the backup host.

Fail-over NameNode
In a high-availability scenario, the NameNodes other than the primary NameNode that are updated in the hadoop.conf file are referred to as fail-over NameNodes.

DataNode
A DataNode is responsible for storing the actual data in Hadoop.
Limitations
Review the following limitations before you deploy the Hadoop plug-in:
■ Only RHEL and SUSE platforms are supported for Hadoop clusters and
backup hosts.
Task Reference

Pre-requisites and requirements
See “Pre-requisites for the Hadoop plug-in” on page 16.

Preparing the Hadoop cluster
See “Preparing the Hadoop cluster” on page 16.

Best practices
See “Best practices for deploying the Hadoop plug-in” on page 17.

Verifying the deployment

Configuring
See “About configuring NetBackup for Hadoop” on page 19.

■ For a Hadoop cluster that uses CRL, ensure that the CRL is valid and not
expired.
Chapter 3
Configuring NetBackup for
Hadoop
This chapter includes the following topics:
■ Configuring the Hadoop plug-in using the Hadoop configuration file
Task Reference

Adding Hadoop credentials in NetBackup
See “Adding Hadoop credentials in NetBackup” on page 24.

Configuring the Hadoop plug-in using the Hadoop configuration file
See “Configuring the Hadoop plug-in using the Hadoop configuration file” on page 25.
See “Configuring NetBackup for a highly-available Hadoop cluster” on page 26.
See “Configuring number of threads for backup hosts” on page 30.

Configuring the backup hosts for Hadoop clusters that use Kerberos
See “Configuration for a Hadoop cluster that uses Kerberos” on page 37.

Configuring NetBackup policies for Hadoop plug-in
See “Configuring NetBackup policies for Hadoop plug-in” on page 38.
You can add a backup host while configuring BigData policy using either the
NetBackup Administration Console or Command Line Interface.
For more information on how to create a policy, see “Creating a BigData backup
policy” on page 39.
Alternatively, you can also add a backup host using the following command:
For Windows:
<Install_Path>\NetBackup\bin\admincmd\bpplinclude PolicyName -add
"Backup_Host=IP_address or hostname"
For UNIX:
/usr/openv/netbackup/bin/admincmd/bpplinclude PolicyName -add
"Backup_Host=IP_address or hostname"
For more information, see “Using NetBackup Command Line Interface (CLI)
to create a BigData policy for Hadoop clusters” on page 41.
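For example, assuming a hypothetical policy named hadoop_policy and a backup host backuphost1.example.com:
/usr/openv/netbackup/bin/admincmd/bpplinclude hadoop_policy -add "Backup_Host=backuphost1.example.com"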
2 As a best practice, add the entries of all the NameNodes and DataNodes to
the /etc/hosts file on all the backup hosts. You must add the host name in
FQDN format.
OR
Add the appropriate DNS entries in the /etc/resolv.conf file.
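For example, the /etc/hosts entries on a backup host may look like the following (the addresses and FQDNs are hypothetical):
10.10.10.11 namenode1.example.com
10.10.10.21 datanode1.example.com
10.10.10.22 datanode2.example.com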
For UNIX:
/usr/openv/netbackup/bin/admincmd/bpplinclude PolicyName -delete
'Backup_Host=IP_address or hostname'
■ For Windows
The directory path to the command:
<Install_Path>\NetBackup\bin\admincmd\bpsetconfig
bpsetconfig -h masterserver
bpsetconfig> APP_PROXY_SERVER = clientname1.domain.org
bpsetconfig> APP_PROXY_SERVER = clientname2.domain.org
bpsetconfig>
On Windows systems, end the input with <ctl-Z>.
Adding Hadoop credentials in NetBackup
■ Hostname and port of the NameNode must be the same as what you have
specified with the http address parameter in the core-site.xml file of the Hadoop cluster.
■ For password, provide any random value. For example, Hadoop.
To add Hadoop credentials in NetBackup
1 Run the tpconfig command from one of the following directory paths:
On UNIX systems, /usr/openv/volmgr/bin/
On Windows systems, install_path\Volmgr\bin\
2 Run the tpconfig --help command. A list of the options that are required to
add, update, and delete Hadoop credentials is displayed.
3 Run the tpconfig -add -application_server application_server_name
-application_server_user_id user_ID -application_type
application_type -requiredport IP_port_number [-password password
[-key encryption_key]] command by providing appropriate values for each
parameter to add Hadoop credentials.
For example, if you want to add credentials for a Hadoop server that has
application_server_name as hadoop1, run the following command with the
appropriate <user_ID> and <password> details.
tpconfig -add -application_server hadoop1 -application_type hadoop
-application_server_user_id Hadoop -requiredport 50070 -password
Hadoop
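To verify that the credentials were added, you can display the configured application servers, for example:
/usr/openv/volmgr/bin/tpconfig -dappservers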
Note: You must not provide a blank value for any of the parameters, or the backup
job fails.
Ensure that you configure all the required parameters to run the backup and restore
operations successfully.
Note: For a non-HA environment, the fail-over parameters are not required.
{
"application_servers":
{
"hostname_of_the_primary_namenode":
{
"failover_namenodes":
[
{
"hostname":"hostname_of_failover_namenode",
"port":port_of_the_failover_namenode
}
],
"port":port_of_the_primary_namenode
}
},
"number_of_threads":number_of_threads
}
■ Specify one of the NameNodes (primary) as the client in the BigData policy.
■ Specify the NameNodes (primary and fail-over) as application servers when
you execute the tpconfig command.
■ Create a hadoop.conf file, update it with the details of the NameNodes (primary
and fail-over), and copy it to all the backup hosts. The hadoop.conf file is in
JSON format.
■ Hostname and port of the NameNode must be the same as what you have
specified with the http address parameter in the core-site.xml file of the Hadoop cluster.
■ User name of the primary and fail-over NameNode must be the same.
■ Do not provide a blank value for any of the parameters, or the backup job fails.
{
"application_servers":
{
"hostname_of_primary_namenode1":
{
"failover_namenodes":
[
{
"hostname": "hostname_of_failover_namenode1",
"port": port_of_failover_namenode1
}
],
"port":port_of_primary_namenode1
}
}
}
2 If you have multiple Nutanix AHV clusters, use the same hadoop.conf file to
update the details. For example,
{
"application_servers":
{
"hostname_of_primary_namenode1":
{
"failover_namenodes":
[
{
"hostname": "hostname_of_failover_namenode1",
"port": port_of_failover_namenode1
}
],
"port"::port_of_primary_namenode1
},
"hostname_of_primary_namenode2":
{
"failover_namenodes":
[
{
"hostname": "hostname_of_failover_namenode2",
"port": port_of_failover_namenode2
}
],
"port":port_of_primary_namenode2
}
}
}
3 Copy this file to the following location on all the backup hosts:
/usr/openv/var/global/
{
"application_servers": {
"hostname_of_namenode1":{
"port":port_of_namenode1
}
}
}
2 Copy this file to the following location on all the backup hosts:
/usr/openv/var/global/
{
"number_of_threads": number_of_threads
}
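For example, a complete hadoop.conf for a single non-HA NameNode with four threads may look like the following (the hostname and values are hypothetical):
{
"application_servers":
{
"namenode1.example.com":
{
"port":50070
}
},
"number_of_threads":4
}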
{
"application_servers":
{
"hostname_of_namenode1":
{
"use_ssl":true
}
}
}
{
"application_servers":
{
"primary.host.com":
{
"use_ssl":true,
"failover_namenodes":
[
{
"hostname":"secondary.host.com",
"use_ssl":true,
"port":11111
}
]
}
}
}
ECA_TRUST_STORE_PATH
Specifies the file path to the certificate bundle file that contains all trusted root
CA certificates. If you have not configured the option, add all the required
Hadoop server CA certificates to the trust store and set the option.

ECA_CRL_PATH
Specifies the path to the directory that contains the CRLs. If you have not
configured the option, add all the required CRLs to the CRL cache and then
set the option.

HADOOP_SECURE_CONNECT_ENABLED
Set this value to YES when you have set use_ssl to true in the hadoop.conf
file. The single value is applicable to all Hadoop clusters when use_ssl is set
to true.

HADOOP_CRL_CHECK
Lets you validate the revocation status of the Hadoop server certificate against
the CRLs.
Note: For validating the revocation status of a virtualization server certificate, the
VIRTUALIZATION_CRL_CHECK option is used.

How to use
Use the nbgetconfig and the nbsetconfig commands to view, add, or change
the options. For example:
ECA_TRUST_STORE_PATH=/tmp/cacert.pem
ECA_CRL_PATH=/tmp/backuphostdirectory
HADOOP_SECURE_CONNECT_ENABLED=YES/NO
HADOOP_CRL_CHECK=DISABLE / LEAF / CHAIN
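For example, to set these options on a backup host, you can pipe the entries to the nbsetconfig command (a minimal sketch; the values shown are illustrative):
echo "HADOOP_SECURE_CONNECT_ENABLED = YES" | /usr/openv/netbackup/bin/nbsetconfig
echo "HADOOP_CRL_CHECK = LEAF" | /usr/openv/netbackup/bin/nbsetconfig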
■ Acquire the keytab file and copy it to a secure location on the backup host.
■ Ensure that the keytab has the required principal.
■ Manually update the krb5.conf file with the appropriate KDC server and realm
details.
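To confirm that the keytab contains the required principal, you can list its entries, for example (the keytab path is illustrative):
klist -k -t /usr/openv/var/global/nbusers/hdfs_mykeytabfile.keytab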
Note: Hostname and port of the NameNode must be the same as what you have
specified with the http address parameter in the core-site.xml file of the Hadoop cluster.
For more information on how to create a BigData policy, see “Creating a BigData
backup policy” on page 39.
Note: The directory or folder that is specified for backup selection while defining
a BigData policy with Application_Type=hadoop must not contain spaces or
commas in its name.
4 View the details about the new policy using the -L option.
bpplinfo policyname -L
6 Specify hadoop as the application type.
For Windows:
bpplinclude PolicyName -add "Application_Type=hadoop"
For UNIX:
bpplinclude PolicyName -add 'Application_Type=hadoop'
7 Specify the backup host on which you want the backup operations to be
performed for Hadoop.
For Windows:
bpplinclude PolicyName -add "Backup_Host=IP_address or hostname"
For UNIX:
bpplinclude PolicyName -add 'Backup_Host=IP_address or hostname'
Note: The backup host must be a Linux computer. The backup host can be a
NetBackup client, a media server, or a master server.
8 Specify the Hadoop directory or folder name that you want to back up.
For Windows:
bpplinclude PolicyName -add "/hdfsfoldername"
For UNIX:
bpplinclude PolicyName -add '/hdfsfoldername'
Note: The directory or folder that is used for backup selection while defining
a BigData policy with Application_Type=hadoop must not contain spaces or
commas in its name.
9 Modify and update the policy storage type for the BigData policy.
bpplinfo PolicyName -residence STUName -modify
10 Specify the IP address or the host name of the NameNode for adding the client
details.
For Windows:
bpplclients PolicyName -M "MasterServerName" -add
"HadoopServerNameNode" "Linux" "RedHat"
For UNIX:
bpplclients PolicyName -M 'MasterServerName' -add
'HadoopServerNameNode' 'Linux' 'RedHat'
11 Assign a schedule for the created BigData policy as per your requirements.
bpplsched PolicyName -add Schedule_Name -cal 0 -rl 0 -st
sched_type -window 0 0
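For example, to add a full backup schedule named full_sched to a hypothetical policy hadoop_policy:
bpplsched hadoop_policy -add full_sched -cal 0 -rl 0 -st FULL -window 0 0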
Disaster recovery of a Hadoop cluster

Task Description

After the Hadoop cluster and nodes are up, prepare the cluster for operations with NetBackup.
Perform the following tasks:
Update firewall settings so that the backup hosts can communicate with the Hadoop cluster.

Create the hadoop.conf file for the backup hosts.
The backup hosts use the hadoop.conf file to save the configuration settings of the Hadoop plug-in. You need to create a separate file for each backup host and copy it to /usr/openv/var/global/. You need to create the hadoop.conf file in JSON format. With this release, the following plug-in settings can be configured:
■ See “Configuring NetBackup for a highly-available Hadoop cluster” on page 26.
■ See “Configuring number of threads for backup hosts” on page 30.

Update the BigData policy with the original NameNode name.
See “Configuring NetBackup policies for Hadoop plug-in” on page 38.
Chapter 4
Performing backups and
restores of Hadoop
This chapter includes the following topics:
Task Reference

(Optional) Complete the pre-requisite for Kerberos
See “Pre-requisite for running backup and restore operations for a Hadoop cluster with Kerberos authentication” on page 46.

Best practices
See “Best practices for backing up a Hadoop cluster” on page 46.

Troubleshooting tips
For discovery and cleanup related logs, review the following log file on the first backup host that triggered the discovery:
/usr/openv/var/global/logs/nbaapidiscv
For data transfer related logs, search for the corresponding backup host (using the hostname) in the log files on the master server.
See “Troubleshooting backup issues for Hadoop data” on page 57.
Note: During the backup and restore operations, the TGT must be valid. Therefore,
specify the TGT validity accordingly or renew it when required during the operation.
For example,
kinit -k -t /usr/openv/var/global/nbusers/hdfs_mykeytabfile.keytab
[email protected]
■ Ensure that the local time on the HDFS nodes and the backup hosts is
synchronized with the NTP server.
■ Ensure that you have valid certificates for a Hadoop cluster that is enabled with
SSL (HTTPS).
Task Reference

Complete the pre-requisites for Kerberos
See “Pre-requisite for running backup and restore operations for a Hadoop cluster with Kerberos authentication” on page 46.

Restoring Hadoop data on the same NameNode or Hadoop cluster
■ See “Using the Restore Wizard to restore Hadoop data on the same Hadoop cluster” on page 49.
■ See “Using the bprestore command to restore Hadoop data on the same Hadoop cluster” on page 50.

Restoring Hadoop data to an alternate NameNode or Hadoop cluster
See “Restoring Hadoop data on an alternate Hadoop cluster” on page 52.

Best practices
See “Best practices for restoring a Hadoop cluster” on page 48.

Troubleshooting tips
See “Troubleshooting restore issues for Hadoop data” on page 62.
■ Ensure that correct parameters are added in the hadoop.conf file for HTTP or
HTTPS based clusters.
■ Ensure that the backup host contains a valid CRL that is not expired.
■ Application-level or file system-level encryption is not supported for Hadoop.
You must be a Hadoop superuser to ensure that restore works correctly.
■ From the Destination client for restores list, select the required backup
host.
■ On the Specify NetBackup Machines and Policy Type wizard, enter the
policy type details for restore.
From the Policy type for restores list, choose BigData as the policy type
for restore.
Click OK.
6 Go to the Backup History and select the backup images that you want to
restore.
7 In the Directory Structure pane, expand the Directory.
All the subsequent files and folders under the directory are displayed in the
Contents of Selected Directory pane.
8 In the Contents of Selected Directory pane, select the check box for the
Hadoop files that you want to restore.
9 Click Restore.
10 In the Restore Marked Files dialog box, select the destination for restore as
per your requirement.
■ Select Restore everything to its original location if you want to restore
your files to the same location where you performed your backup.
■ Select Restore everything to a different location if you want to restore
your files to a location which is not the same as your backup location.
Where,
-S master_server
Specifies the name of the NetBackup master server.
-f listfile
Specifies a file (listfile) that contains a list of files to be restored and can be
used instead of the file names option. In listfile, each file path must be on a
separate line.
-L progress_log
Specifies the name of an allowlisted file path in which to write progress information.
-t 44
Specifies BigData as the policy type.
Where,
-S master_server
Specifies the name of the NetBackup master server.
-f listfile
Specifies a file (listfile) that contains a list of files to be restored and can be
used instead of the file names option. In listfile, each file path must be on a
separate line.
-L progress_log
Specifies the name of an allowlisted file path in which to write progress information.
-t 44
Specifies BigData as the policy type.
-R rename_file
Specifies the name of a file with name changes for alternate-path restores.
Change the /<source_folder_path> to /<destination_folder_path>
Note: NetBackup supports redirected restores only using the Command Line
Interface (CLI).
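For example, a restore to the same Hadoop cluster may look like the following (all host names and paths are hypothetical, and the progress log path must be allowlisted):
/usr/openv/netbackup/bin/bprestore -S master.example.com -D backuphost1.example.com -C namenode1.example.com -t 44 -L /tmp/logs/restore_progress.log -f /tmp/restore_list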
Note: Make sure that you have added the credentials for the alternate NameNode
or Hadoop cluster in the NetBackup master server and also completed the
allowlisting tasks on the NetBackup master server. For more information about
how to add Hadoop credentials in NetBackup and the allowlisting procedures, see
“Adding Hadoop credentials in NetBackup” on page 24 and “Including a NetBackup
client on NetBackup master server allowed list” on page 23.
Parameter Value

-f listfile
Specifies a file (listfile) that contains a list of files to be restored and can be
used instead of the file names option. In listfile, each file path must be on a
separate line.

-L progress_log
Specifies the name of an allowlisted file path in which to write progress information.

-t 44
Specifies BigData as the policy type.

-R rename_file
Specifies the name of a file with name changes for alternate-path restores.
Use the following form for entries in the rename file:
change backup_filepath to restore_filepath
ALT_APPLICATION_SERVER=<Application Server Name>
Note: Ensure that you have allowlisted all the file paths such as
<rename_file_path> and <progress_log_path> that are not already included as
a part of the NetBackup install path.
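For example, a rename file for an alternate restore may contain the following entries (the paths and server name are hypothetical):
change /data/source to /data/restored
ALT_APPLICATION_SERVER=alternate-namenode.example.com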
Chapter 5
Troubleshooting
This chapter includes the following topics:
Area References

General logging and debugging
See “About NetBackup for Hadoop debug logging” on page 57.

Backup issues
See “Troubleshooting backup issues for Hadoop data” on page 57.

Restore issues
See “Troubleshooting restore issues for Hadoop data” on page 62.

To avoid issues, also review the best practices
See “Best practices for deploying the Hadoop plug-in” on page 17.
See “Best practices for backing up a Hadoop cluster” on page 46.
See “Best practices for restoring a Hadoop cluster” on page 48.
Extended attributes (xattrs) and Access Control Lists (ACLs) are not
backed up or restored for Hadoop
Extended attributes allow user applications to associate additional metadata with
a file or directory in Hadoop. By default, this is enabled on Hadoop Distributed File
System (HDFS).
Access Control Lists provide a way to set different permissions for specific named
users or named groups, in addition to the standard permissions. By default, this is
disabled on HDFS.
The Hadoop plug-in does not capture extended attributes or Access Control Lists
(ACLs) of an object during backup, and hence these are not set on the restored
files or folders.
Workaround:
If the extended attributes are set on any of the files or directories that are backed
up using the BigData policy with Application_Type = hadoop, then you have to
explicitly set the extended attributes on the restored data.
Extended attributes can be set using the Hadoop shell commands hadoop fs
-getfattr and hadoop fs -setfattr.
If the Access Control Lists (ACLs) are enabled and set on any of the files or
directories that are backed up using the BigData policy with Application_Type =
hadoop, then you have to explicitly set the ACLs on the restored data.
ACLs can be set using the Hadoop shell commands hadoop fs -getfacl
and hadoop fs -setfacl.
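For example, you can capture the attributes from the backed-up source and reapply them to the restored data (a sketch; the paths, attribute name, and user are hypothetical):
hadoop fs -getfattr -d /data/source/file1
hadoop fs -setfattr -n user.owner -v analytics /data/restored/file1
hadoop fs -getfacl /data/source/dir1
hadoop fs -setfacl -m user:hdfsuser:rwx /data/restored/dir1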
Verify that the backup host has a valid Ticket Granting Ticket (TGT) in case of a
Kerberos-enabled Hadoop cluster.
Workaround:
Renew the TGT.
Workaround:
Verify the hadoop.conf file to ensure that blank values or incorrect syntax are not
used with the parameter values.
Workaround:
To view the available data that can be restored from an incremental backup image,
select the related full backup images along with the incremental backup images.
Extended attributes (xattrs) and Access Control Lists (ACLs) are not
backed up or restored for Hadoop
For more information about this issue, see “Extended attributes (xattrs) and Access
Control Lists (ACLs) are not backed up or restored for Hadoop” on page 59.
Restore operation fails when Hadoop plug-in files are missing on the
backup host
When a restore job is triggered on a backup host which does not have Hadoop
plug-in files installed, the restore operation fails with the following error:
{
"application_servers":
{
"primary.host.com":
{
"use_ssl":true,
"failover_namenodes":
[
{
"hostname":"secondary.host.com",
"use_ssl":true,
"port":11111
}
],
"port":11111
}
},
"number_of_threads":5
}
Index

A
Adding
backup host 20
allowlisting
backup host 23

B
Backup 47
Hadoop 45
backup 9
BigData policy
Command Line Interface 41
NetBackup Administration Console 39
Policies utility 40
Policy Configuration Wizard 39

C
compatibility
supported operating system 16
Creating
BigData backup policy 39

D
disaster recovery 43

H
Hadoop credentials
adding 24

K
Kerberos
post installation 37
kerberos
backup 46
restore 46

L
License
Hadoop 16
Limitations 13

N
NetBackup
debug logging 57
server and client requirements 16
NetBackup Appliance
backup host 24

O
overview
backup 7
configuration 7
deployment 7
installation 7
restore 7

P
parallel streaming framework 7
policies
configuring 38
Preparing
Hadoop 16

R
Removing
backup host 20
Restore
bprestore command 50
Hadoop 47
restore 10
Restoring
alternate NameNode 52
Hadoop cluster 49

T
terms 11
Troubleshoot
backup 57
troubleshooting
restore 63