Informatica Big Data Management Security Guide
This product includes software licensed under the terms at http://www.tcl.tk/software/tcltk/license.html, http://www.bosrup.com/web/overlib/?License, http://www.stlport.org/doc/license.html, http://asm.ow2.org/license.html, http://www.cryptix.org/LICENSE.TXT, http://hsqldb.org/web/hsqlLicense.html, http://httpunit.sourceforge.net/doc/license.html, http://jung.sourceforge.net/license.txt, http://www.gzip.org/zlib/zlib_license.html, http://www.openldap.org/software/release/license.html, http://www.libssh2.org, http://slf4j.org/license.html, http://www.sente.ch/software/OpenSourceLicense.html, http://fusesource.com/downloads/license-agreements/fuse-message-broker-v-5-3-license-agreement; http://antlr.org/license.html; http://aopalliance.sourceforge.net/; http://www.bouncycastle.org/licence.html; http://www.jgraph.com/jgraphdownload.html; http://www.jcraft.com/jsch/LICENSE.txt; http://jotm.objectweb.org/bsd_license.html; http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231; http://www.slf4j.org/license.html; http://nanoxml.sourceforge.net/orig/copyright.html; http://www.json.org/license.html; http://forge.ow2.org/projects/javaservice/, http://www.postgresql.org/about/licence.html, http://www.sqlite.org/copyright.html, http://www.tcl.tk/software/tcltk/license.html, http://www.jaxen.org/faq.html, http://www.jdom.org/docs/faq.html, http://www.slf4j.org/license.html; http://www.iodbc.org/dataspace/iodbc/wiki/iODBC/License; http://www.keplerproject.org/md5/license.html; http://www.toedter.com/en/jcalendar/license.html; http://www.edankert.com/bounce/index.html; http://www.net-snmp.org/about/license.html; http://www.openmdx.org/#FAQ; http://www.php.net/license/3_01.txt; http://srp.stanford.edu/license.txt; http://www.schneier.com/blowfish.html; http://www.jmock.org/license.html; http://xsom.java.net; http://benalman.com/about/license/; https://github.com/CreateJS/EaselJS/blob/master/src/easeljs/display/Bitmap.js; http://www.h2database.com/html/license.html#summary; http://jsoncpp.sourceforge.net/LICENSE; http://jdbc.postgresql.org/license.html; http://protobuf.googlecode.com/svn/trunk/src/google/protobuf/descriptor.proto; https://github.com/rantav/hector/blob/master/LICENSE; http://web.mit.edu/Kerberos/krb5-current/doc/mitK5license.html; http://jibx.sourceforge.net/jibx-license.html; https://github.com/lyokato/libgeohash/blob/master/LICENSE; https://github.com/hjiang/jsonxx/blob/master/LICENSE; https://code.google.com/p/lz4/; https://github.com/jedisct1/libsodium/blob/master/LICENSE; http://one-jar.sourceforge.net/index.php?page=documents&file=license; https://github.com/EsotericSoftware/kryo/blob/master/license.txt; http://www.scala-lang.org/license.html; https://github.com/tinkerpop/blueprints/blob/master/LICENSE.txt; http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/intro.html; https://aws.amazon.com/asl/; https://github.com/twbs/bootstrap/blob/master/LICENSE; and https://sourceforge.net/p/xmlunit/code/HEAD/tree/trunk/LICENSE.txt.
This product includes software licensed under the Academic Free License (http://www.opensource.org/licenses/afl-3.0.php), the Common Development and Distribution License (http://www.opensource.org/licenses/cddl1.php), the Common Public License (http://www.opensource.org/licenses/cpl1.0.php), the Sun Binary Code License Agreement Supplemental License Terms, the BSD License (http://www.opensource.org/licenses/bsd-license.php), the new BSD License (http://opensource.org/licenses/BSD-3-Clause), the MIT License (http://www.opensource.org/licenses/mit-license.php), the Artistic License (http://www.opensource.org/licenses/artistic-license-1.0) and the Initial Developer's Public License Version 1.0 (http://www.firebirdsql.org/en/initial-developer-s-public-license-version-1-0/).
This product includes software copyright 2003-2006 Joe Walnes, 2006-2007 XStream Committers. All rights reserved. Permissions and limitations regarding this
software are subject to terms available at http://xstream.codehaus.org/license.html. This product includes software developed by the Indiana University Extreme! Lab.
For further information please visit http://www.extreme.indiana.edu/.
This product includes software Copyright (c) 2013 Frank Balluffi and Markus Moeller. All rights reserved. Permissions and limitations regarding this software are subject
to terms of the MIT license.
See patents at https://www.informatica.com/legal/patents.html.
DISCLAIMER: Informatica LLC provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied
warranties of noninfringement, merchantability, or use for a particular purpose. Informatica LLC does not warrant that this software or documentation is error free. The
information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation is
subject to change at any time without notice.
NOTICES
This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software
Corporation ("DataDirect") which are subject to the following terms and conditions:
1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT
INFORMED OF THE POSSIBILITIES OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT
LIMITATION, BREACH OF CONTRACT, BREACH OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.
Part Number: IN-BDE-1000-000-0001
Table of Contents
Preface
Informatica Resources
Informatica My Support Portal
Informatica Documentation
Informatica Product Availability Matrixes
Informatica Web Site
Informatica How-To Library
Informatica Knowledge Base
Informatica Support YouTube Channel
Informatica Marketplace
Informatica Velocity
Informatica Global Customer Support
Step 3. Create an SPN and Keytab File in the Active Directory Server
Step 4. Specify the Kerberos Authentication Properties for the Data Integration Service
Running Mappings in the Native Environment when Informatica Uses Kerberos Authentication
Running Mappings in the Native Environment When Informatica Does not Use Kerberos Authentication
Kerberos Authentication for a Hive Connection
Kerberos Authentication for an HBase Connection
Kerberos Authentication for an HDFS Connection
Metadata Import in the Developer Tool
Create and Configure the Analyst Service
Index
Preface
The Big Data Management Security Guide is written for Informatica administrators. The guide contains
information that you need to manage security for Big Data Management and the connection between Big
Data Management and the Hadoop cluster. This book assumes that you are familiar with the Informatica
domain, security for the Informatica domain, and security for Hadoop clusters.
Informatica Resources
Informatica My Support Portal
As an Informatica customer, you can access the Informatica My Support Portal at
http://mysupport.informatica.com.
The site contains product information, user group information, newsletters, access to the Informatica
customer support case management system, the Informatica How-To Library, the Informatica Knowledge
Base, Informatica Product Documentation, and access to the Informatica user community.
Informatica Documentation
The Informatica Documentation team makes every effort to create accurate, usable documentation. If you
have questions, comments, or ideas about this documentation, contact the Informatica Documentation team
through email at [email protected]. We will use your feedback to improve our
documentation. Let us know if we can contact you regarding your comments.
The Documentation team updates documentation as needed. To get the latest documentation for your
product, navigate to Product Documentation from http://mysupport.informatica.com.
Informatica Marketplace
The Informatica Marketplace is a forum where developers and partners can share solutions that augment,
extend, or enhance data integration implementations. By leveraging any of the hundreds of solutions
available on the Marketplace, you can improve your productivity and speed up time to implementation on
your projects. You can access Informatica Marketplace at http://www.informaticamarketplace.com.
Informatica Velocity
You can access Informatica Velocity at http://mysupport.informatica.com. Developed from the real-world
experience of hundreds of data management projects, Informatica Velocity represents the collective
knowledge of our consultants who have worked with organizations from around the world to plan, develop,
deploy, and maintain successful data management solutions. If you have questions, comments, or ideas
about Informatica Velocity, contact Informatica Professional Services at [email protected].
Informatica Global Customer Support
The telephone numbers for Informatica Global Customer Support are available from the Informatica web site
at http://www.informatica.com/us/services-and-training/support-services/global-support-centers/.
CHAPTER 1
This chapter includes the following topics:
Overview
Authentication
Authorization
Data Security
Overview
You can configure security for Big Data Management and the Hadoop cluster to protect from threats inside
and outside the network. Security for Big Data Management includes security for the Informatica domain and
security for the Hadoop cluster.
Security for the Hadoop cluster includes the following areas:
Authentication
By default, Hadoop does not verify the identity of users. To use authentication, configure Kerberos for
the cluster.
Big Data Management supports Hadoop clusters that use a Microsoft Active Directory (AD) Key
Distribution Center (KDC) or an MIT KDC.
Authorization
After a user is authenticated, the user must be authorized to perform actions. Hadoop uses HDFS
permissions to determine what a user can do to a file or directory on HDFS. For example, a user must
have the correct permissions to access the directories where specific data is stored to use that data in a
mapping.
Data and metadata management
Data and metadata management involves managing data to track and audit data access, update
metadata, and perform data lineage. Big Data Management supports Cloudera Navigator and Metadata
Manager to manage metadata and perform data lineage.
Data security
Data security involves protecting sensitive data from unauthorized access. Big Data Management
supports data masking with the Data Masking transformation in the Developer tool, Dynamic Data
Masking, and Persistent Data Masking.
Security for the Informatica domain is separate from security for the Hadoop cluster. For a higher level of
security, secure the Informatica domain and the Hadoop cluster. For more information about security for the
Informatica domain, see the Informatica Security Guide.
Authentication
When the Informatica domain includes Big Data Management, users must be authenticated in the Informatica
domain and the Hadoop cluster. Authentication for the Informatica domain is separate from authentication for
the Hadoop cluster.
The authentication process verifies the identity of a user account.
The Informatica domain uses one of the following authentication protocols:
Native authentication
The Informatica domain stores user credentials and privileges in the domain configuration repository and
performs all user authentication within the Informatica domain.
Lightweight Directory Access Protocol (LDAP)
The LDAP directory service stores user accounts and credentials that are accessed over the network.
Kerberos authentication
Kerberos is a network authentication protocol which uses tickets to authenticate users and services in a
network. Users are stored in the Kerberos principal database, and tickets are issued by a KDC.
By default, Hadoop does not authenticate users. Any user can be used in the Hadoop connection. Informatica
recommends that you enable authentication for the cluster. If authentication is enabled for the cluster, the
cluster authenticates the user account used for the Hadoop connection between Big Data Management and
the cluster. For a higher level of security, you can set up Kerberos authentication for the cluster.
For more information about how to configure authentication for the Informatica domain, see the Informatica
Security Guide.
For more information about how to enable authentication for the Hadoop cluster, see the documentation for
your Hadoop distribution.
Kerberos Authentication
Big Data Management and the Hadoop cluster can use Kerberos authentication to verify user accounts. You
can use Kerberos authentication with the Informatica domain, with the Hadoop cluster, or with both.
Kerberos is a network authentication protocol which uses tickets to authenticate access to services and
nodes in a network. Kerberos uses a Key Distribution Center (KDC) to validate the identities of users and
services and to grant tickets to authenticated user and service accounts. Users and services are known as
principals. The KDC has a database of principals and their associated secret keys that are used as proof of
identity. Kerberos can use an LDAP directory service as a principal database.
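For example, on a cluster node you can typically request and inspect a Kerberos ticket with the standard MIT Kerberos client utilities. The principal and realm shown here are illustrative:
kinit joe@HADOOP-MIT-REALM
klist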
The Informatica domain and the Hadoop cluster each have their own requirements for Kerberos authentication.
Authorization
Authorization controls what a user can do on a Hadoop cluster. For example, a user must be authorized to
submit jobs to the Hadoop cluster.
Authorization for Big Data Management consists of HDFS permissions and user impersonation.
HDFS Permissions
HDFS permissions determine what a user can do to files and directories stored in HDFS. To access a file or
directory, a user must have permission or belong to a group that has permission.
HDFS permissions are similar to permissions for UNIX or Linux systems. For example, a user requires the r
permission to read a file and the w permission to write a file. When a user or application attempts to perform
an action, HDFS checks if the user has permission or belongs to a group with permission to perform that
action on a specific file or directory.
For more information about HDFS permissions, see the Apache Hadoop documentation or the documentation
for your Hadoop distribution.
Big Data Management supports HDFS permissions without additional configuration.
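For example, you can review and adjust HDFS permissions with the standard HDFS shell commands. The directory path here is illustrative:
hdfs dfs -ls /user/dev/staging
hdfs dfs -chmod -R 750 /user/dev/staging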
User Impersonation
User impersonation allows different users to run mappings in a Hadoop cluster that uses Kerberos
authentication or connect to big data sources and targets that use Kerberos authentication.
The Data Integration Service uses its credentials to impersonate the user accounts designated in the Hadoop
connection to connect to the Hadoop cluster or to start the Blaze engine.
When the Data Integration Service impersonates a user account to submit a mapping, the mapping can only
access Hadoop resources that the impersonated user has permissions on. Without user impersonation, the
Data Integration Service uses its credentials to submit a mapping to the Hadoop cluster. Restricted Hadoop
resources might be accessible.
When the Data Integration Service impersonates a user account to start the Blaze engine, the Blaze engine
has the privileges and permissions of the user account used to start it.
Data Security
Data security protects sensitive data on the Hadoop cluster from unauthorized access. Big Data Management
supports different methods of data masking to secure data. Data masking obscures data based on
configurable rules.
For example, an analyst in the marketing department might need to use production data to conduct analysis,
but a mapping developer can test a mapping with masked data. You can set data masking rules to allow the
analyst to access production data and rules to allow the mapping developer to access test data that is
realistic. Alternatively, an analyst may only need access to some production data and the rest of the data can
be masked. You can configure data masking rules that fit your data security requirements.
You can use the following Informatica components and products to secure data on the Hadoop cluster:
Data Masking transformation
The Data Masking transformation changes sensitive production data to realistic test data for nonproduction environments. The Data Masking transformation modifies source data based on masking
techniques that you configure for each column.
For more information about how to use the Data Masking transformation in the Hadoop environment, see
the Informatica Big Data Management User Guide and the Informatica Developer Transformation Guide.
Dynamic Data Masking
When a mapping uses data from a Hadoop source, Dynamic Data Masking acts as a proxy that
intercepts requests and data between the Data Integration Service and the cluster. Based on the data
masking rules, Dynamic Data Masking might return the original values, masked values, or scrambled
values for a mapping to use. The actual data in the cluster is not changed.
For more information about Dynamic Data Masking, see the Informatica Dynamic Data Masking
Administrator Guide.
Persistent Data Masking
Persistent Data Masking allows you to mask sensitive and confidential data in non-production systems
such as development, test, and training systems.
You can perform data masking on data that is stored in a Hadoop cluster. Additionally, you can mask
data during data ingestion in the native or Hadoop environment. Masking rules can replace, scramble, or
initialize data. When you create a project, you select masking rules for each table field that you want to
mask. When you run the project, Persistent Data Masking uses the masking rule technique to change the data in the Hadoop cluster. The result is realistic but unidentifiable data that you can use for development or testing.
For more information about Persistent Data Masking, see the Informatica Test Data Management User
Guide and the Informatica Test Data Management Administrator Guide.
CHAPTER 2
This chapter includes the following topics:
Prerequisite Tasks for Running Mappings on a Hadoop Cluster with Kerberos Authentication
Running Mappings in the Hadoop Environment when Informatica Does not Use Kerberos Authentication
Running Mappings in the Native Environment when Informatica Uses Kerberos Authentication
Running Mappings in the Native Environment When Informatica Does not Use Kerberos Authentication
When the Informatica domain uses Kerberos authentication, the domain uses Kerberos authentication on an AD service and the Hadoop cluster uses Kerberos authentication on an MIT service. A one-way cross-realm trust enables the MIT service to communicate with the AD service.
Based on whether the Informatica domain uses Kerberos authentication or not, you might need to perform
the following tasks to run mappings on a Hadoop cluster that uses Kerberos authentication:
If you run mappings in a Hadoop environment, you can choose to configure user impersonation to enable other users to run mappings on the Hadoop cluster. Otherwise, the Data Integration Service user runs the mappings on the Hadoop cluster.
If you run mappings in the native environment, you must configure the mappings to read and process data
from Hive sources that use Kerberos authentication.
If you run a mapping that has Hive sources or targets, you must enable user authentication for the
mapping on the Hadoop cluster.
If you import metadata from Hive, complex file sources, and HBase sources, you must configure the
Developer tool to use Kerberos credentials to access the Hive, complex file, and HBase metadata.
Install the JCE Policy File
1. Download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy .zip file from the Oracle Technology Network website.
2. Extract the contents of the JCE Policy File to the following location: <Informatica Big Data Server Installation Directory>/java/jre/lib/security
For JCE Policy File installation instructions, see the README.txt file included in the JCE Policy .zip file.
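For example, on a Linux machine you might extract the policy files with a command like the following. The .zip file name depends on the JDK version that you download, so treat it as illustrative:
unzip -o -j jce_policy-8.zip -d "<Informatica Big Data Server Installation Directory>/java/jre/lib/security"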
Configure the following Kerberos security properties in the hive-site.xml file:
hadoop.security.authentication
Authentication type. Valid values are simple and kerberos. Simple uses no authentication.
Set this property to the value of the same property in the following file:
/<$Hadoop_Home>/conf/core-site.xml
hadoop.security.authorization
Enable authorization for different protocols. Set the value to true.
hive.server2.enable.doAs
The authentication that the server is set to use. Set the value to true. When the value is set to true, the
user who makes calls to the server can also perform Hive operations on the server.
hive.metastore.sasl.enabled
If true, the Metastore Thrift interface is secured with Simple Authentication and Security Layer (SASL)
and clients must authenticate with Kerberos. Set the value to true.
hive.metastore.kerberos.principal
The SPN for the metastore thrift server. Replaces the string _HOST with the correct host name.
Set this property to the value of the same property in the following file:
/<$Hadoop_Home>/conf/hive-site.xml
yarn.resourcemanager.principal
The SPN for the Yarn resource manager.
Set this property to the value of the same property in the following file:
/<$Hadoop_Home>/conf/yarn-site.xml
The following sample code shows the values for the security properties in hive-site.xml:
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@YOUR-REALM</value>
<description>The SPN for the NameNode.
</description>
</property>
<property>
<name>mapreduce.jobtracker.kerberos.principal</name>
<value>mapred/_HOST@YOUR-REALM</value>
<description>
The SPN for the JobTracker or Yarn resource manager.
</description>
</property>
<property>
<name>mapreduce.jobhistory.principal</name>
<value>mapred/_HOST@YOUR-REALM</value>
<description>The SPN for the MapReduce JobHistory server.
</description>
</property>
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value> <!-- A value of "simple" would disable security. -->
<description>
Authentication type.
</description>
</property>
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
<description>
Enable authorization for different protocols.
</description>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>
<description>The authentication that the server is set to use. When the value is set
to true, the user who makes calls to the server can also perform Hive operations on the
server.
</description>
</property>
<property>
<name>hive.metastore.sasl.enabled</name>
<value>true</value>
<description>If true, the Metastore Thrift interface will be secured with SASL.
Clients must authenticate with Kerberos.
</description>
</property>
<property>
<name>hive.metastore.kerberos.principal</name>
<value>hive/_HOST@YOUR-REALM</value>
<description>The SPN for the metastore thrift server. Replaces the string _HOST with
the correct hostname.
</description>
</property>
<property>
<name>yarn.resourcemanager.principal</name>
<value>yarn/_HOST@YOUR-REALM</value>
<description>The SPN for the Yarn resource manager.
</description>
</property>
IBM BigInsights
If the Hadoop cluster uses IBM BigInsights, you must also configure the following property in the hive-site.xml file:
mapreduce.jobtracker.kerberos.principal
The Kerberos principal name of the JobTracker.
Use the following syntax for the value: mapred/_HOST@YOUR-REALM.
The following sample code shows the property you can configure in hive-site.xml:
<property>
<name>mapreduce.jobtracker.kerberos.principal</name>
<value>mapred/_HOST@YOUR-REALM</value>
<description>The Kerberos principal name of the JobTracker.</description>
</property>
krb5.conf is located in the /etc directory on any node on the Hadoop cluster. Copy krb5.conf to the <Informatica Installation Directory>/java/jre/lib/security directory.
Note: If you copy krb5.conf from the Hadoop cluster, you do not need to edit it.
Edit krb5.conf. In the realms section, set the kdc and admin_server parameters for each realm.
The following example shows the parameters for the Hadoop realm if the Informatica domain does not
use Kerberos authentication:
[realms]
HADOOP-AD-REALM = {
kdc = 123abcd134.hadoop-AD-realm.com
admin_server = 123abcd124.hadoop-AD-realm.com
}
The following example shows the parameters for the Hadoop realm if the Informatica domain uses
Kerberos authentication:
[realms]
INFA-AD-REALM = {
kdc = abc123.infa-ad-realm.com
admin_server = abc123.infa-ad-realm.com
}
HADOOP-MIT-REALM = {
kdc = def456.hadoop-mit-realm.com
admin_server = def456.hadoop-mit-realm.com
}
In the domain_realm section, map the domain name or host name to a Kerberos realm name. The domain name is prefixed by a period (.).
The following example shows the parameters for the Hadoop domain_realm if the Informatica domain
does not use Kerberos authentication:
[domain_realm]
.hadoop_ad_realm.com = HADOOP-AD-REALM
hadoop_ad_realm.com = HADOOP-AD-REALM
The following example shows the parameters for the Hadoop domain_realm if the Informatica domain
uses Kerberos authentication:
[domain_realm]
.infa_ad_realm.com = INFA-AD-REALM
infa_ad_realm.com = INFA-AD-REALM
.hadoop_mit_realm.com = HADOOP-MIT-REALM
hadoop_mit_realm.com = HADOOP-MIT-REALM
The following example shows the content of krb5.conf with the required properties for an Informatica domain
that does not use Kerberos authentication:
[libdefaults]
default_realm = HADOOP-AD-REALM
[realms]
HADOOP-AD-REALM = {
kdc = 123abcd134.hadoop-ad-realm.com
admin_server = 123abcd124.hadoop-ad-realm.com
}
[domain_realm]
.hadoop_ad_realm.com = HADOOP-AD-REALM
hadoop_ad_realm.com = HADOOP-AD-REALM
The following example shows the content of krb5.conf with the required properties for an Informatica domain
that uses Kerberos authentication:
[libdefaults]
default_realm = INFA-AD-REALM
[realms]
INFA-AD-REALM = {
kdc = abc123.infa-ad-realm.com
admin_server = abc123.infa-ad-realm.com
}
HADOOP-MIT-REALM = {
kdc = def456.hadoop-mit-realm.com
admin_server = def456.hadoop-mit-realm.com
}
[domain_realm]
.infa_ad_realm.com = INFA-AD-REALM
infa_ad_realm.com = INFA-AD-REALM
.hadoop_mit_realm.com = HADOOP-MIT-REALM
hadoop_mit_realm.com = HADOOP-MIT-REALM
Running Mappings in the Hadoop Environment when Informatica Does not Use Kerberos Authentication
To run mappings in the Hadoop environment when Informatica does not use Kerberos authentication, complete the following tasks:
1. Create matching operating system profile user names on each Hadoop cluster node.
2. Create the principal name for the Data Integration Service in the KDC and keytab file.
3. Specify the Kerberos authentication properties for the Data Integration Service.
Step 2. Create the Principal Names and Keytab File in the AD KDC
Create an SPN in the KDC database for Microsoft Active Directory service that matches the user name of the
user that runs the Data Integration Service. Create a keytab file for the SPN on the machine where the KDC
runs. Then, copy the keytab file to the machine where the Data Integration Service runs.
To create an SPN and Keytab file in the Active Directory server, complete the following steps:
1. Create a user in the Microsoft Active Directory Service. Log in to the machine on which the Microsoft Active Directory Service runs and create a user with the same name as the user you created in Step 1. Create Matching Operating System Profile Names.
2. Create an SPN associated with the user.
Use the following guidelines when you create the SPN and keytab files:
The user principal name (UPN) must be the same as the SPN.
Use the ktpass utility to create an SPN associated with the user and generate the keytab file.
For example, enter the following command:
ktpass -out infa_hadoop.keytab -mapuser joe -pass tempBG@2008 -princ joe/domain12345@INFA-AD-REALM -crypto all
The -out parameter specifies the name and path of the keytab file. The -mapuser parameter is the
user to which the SPN is associated. The -pass parameter is the password for the SPN in the
generated keytab. The -princ parameter is the SPN.
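Optionally, on a machine that has the MIT Kerberos client utilities installed, you can verify the contents of the generated keytab before you copy it to the machine where the Data Integration Service runs. The file name matches the ktpass example above:
klist -k -t infa_hadoop.keytab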
Running Mappings in the Hadoop Environment when Informatica Uses Kerberos Authentication
To run mappings in the Hadoop environment when Informatica uses Kerberos authentication, complete the following tasks:
1. Set up the one-way cross-realm trust between the Informatica domain realm and the Hadoop cluster realm.
2. Create matching operating system profile user names on each Hadoop cluster node.
3. Create the Service Principal Name and Keytab File in the Active Directory Server.
4. Specify the Kerberos authentication properties for the Data Integration Service.
To set up the cross-realm trust, you must complete the following steps:
1. Configure the Active Directory server to add the local MIT realm trust.
2. Configure the MIT server to add the cross-realm krbtgt principal.
3. Translate principal names from the Active Directory realm to the MIT realm.
Configure the Microsoft Active Directory Server
1. Enter the following command to add the MIT KDC host name:
ksetup /addkdc <mit_realm_name> <kdc_hostname>
For example, enter the command to add the following values:
ksetup /addkdc HADOOP-MIT-REALM def456.hadoop-mit-realm.com
2. Enter the following command to add the local realm trust to Active Directory:
netdom trust <mit_realm_name> /Domain:<ad_realm_name> /add /realm /passwordt:<TrustPassword>
For example, enter the command to add the following values:
netdom trust HADOOP-MIT-REALM /Domain:INFA-AD-REALM /add /realm /passwordt:trust1234
3. Enter the following commands based on your Microsoft Windows environment to set the proper encryption type:
For Microsoft Windows 2008, enter the following command:
ksetup /SetEncTypeAttr <mit_realm_name> <enc_type>
For Microsoft Windows 2003, enter the following command:
ktpass /MITRealmName <mit_realm_name> /TrustEncryp <enc_type>
Note: The enc_type parameter specifies AES, DES, or RC4 encryption. To find the value for enc_type,
see the documentation for your version of Windows Active Directory. The encryption type you specify
must be supported on both versions of Windows that use Active Directory and the MIT server.
Configure the MIT Server
On the MIT KDC, use the kadmin utility to add a cross-realm krbtgt principal. The enc_type_list parameter specifies the types of encryption that this cross-realm krbtgt principal will support. The krbtgt principal can support AES, DES, or RC4 encryption. You can specify multiple encryption types. However, at least one of the encryption types must correspond to the encryption type found in the tickets granted by the KDC in the remote realm.
For example, enter the following value:
kadmin: addprinc -e "rc4-hmac:normal des3-hmac-sha1:normal" krbtgt/HADOOP-MIT-REALM@INFA-AD-REALM
Translate Principal Names from the Active Directory Realm to the MIT Realm
To translate the principal names from the Active Directory realm into local names within the Hadoop cluster,
you must configure the hadoop.security.auth_to_local property in the core-site.xml file on all the machines in
the Hadoop cluster.
For example, set the following property in core-site.xml on all the machines in the Hadoop cluster:
<property>
<name>hadoop.security.auth_to_local</name>
<value>
RULE:[1:$1@$0](^.*@INFA-AD-REALM$)s/^(.*)@INFA-AD-REALM$/$1/g
RULE:[2:$1@$0](^.*@INFA-AD-REALM$)s/^(.*)@INFA-AD-REALM$/$1/g
DEFAULT
</value>
</property>
Step 3. Create an SPN and Keytab File in the Active Directory Server
Use the following guidelines when you create the SPN and keytab files:
The user principal name (UPN) must be the same as the SPN.
Use the ktpass utility to create an SPN associated with the user and generate the keytab file.
For example, enter the following command:
ktpass -out infa_hadoop.keytab -mapuser joe -pass tempBG@2008 -princ joe/domain12345@INFA-AD-REALM -crypto all
Note: The -out parameter specifies the name and path of the keytab file. The -mapuser parameter is
the user to which the SPN is associated. The -pass parameter is the password for the SPN in the
generated keytab. The -princ parameter is the SPN.
Running Mappings in the Native Environment when Informatica Uses Kerberos Authentication
To run mappings in the native environment when Informatica uses Kerberos authentication, complete the following tasks:
1. Complete the prerequisite tasks for running mappings on a Hadoop cluster with Kerberos authentication.
2. Complete the tasks for running mappings in the Hadoop environment when Informatica uses Kerberos authentication.
3. Create matching operating system profile user names on the machine that runs the Data Integration Service and each Hadoop cluster node used to run Informatica mapping jobs.
4. Create an AD user that matches the operating system profile user you created in step 3.
5. Use the ktpass utility to create an SPN associated with the user and generate the keytab file.
Running Mappings in the Native Environment When Informatica Does not Use Kerberos Authentication
To run mappings in the native environment when Informatica does not use Kerberos authentication, complete the following tasks:
1. Complete the prerequisite tasks for running mappings on a Hadoop cluster with Kerberos authentication.
2. Create matching operating system profile user names on the machine that runs the Data Integration Service and each Hadoop cluster node used to run Informatica mapping jobs.
3. Create an AD user that matches the operating system profile user you created in step 2.
4. Use the ktpass utility to create an SPN associated with the user and generate the keytab file.
For example, enter the following command:
ktpass -out infa_hadoop.keytab -mapuser joe -pass tempBG@2008 -princ joe/domain12345@HADOOP-AD-REALM -crypto all
The -out parameter specifies the name and path of the keytab file. The -mapuser parameter is the
user to which the SPN is associated. The -pass parameter is the password for the SPN in the
generated keytab. The -princ parameter is the SPN.
Kerberos Authentication for a Hive Connection
To enable a user account to run mappings with Hive sources in the native environment, configure the
following properties in the Hive connection:
Bypass Hive JDBC Server
JDBC driver mode. Select the check box to use JDBC embedded mode.
Data Access Connection String
The connection string used to access data from the Hadoop data store.
The connection string must be in the following format:
jdbc:hive2://<host name>:<port>/<db>;principal=<hive_princ_name>
Where
host name is the name or IP address of the machine that hosts the Hive server.
hive_princ_name is the SPN of the HiveServer2 service that runs on the NameNode. Use the value
set in this file:
/etc/hive/conf/hive-site.xml
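For example, a complete connection string for a hypothetical HiveServer2 host might look like the following. The host, port, database, and principal values are illustrative:
jdbc:hive2://hiveserver01.example.com:10000/default;principal=hive/hiveserver01.example.com@HADOOP-MIT-REALM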
Metadata Import in the Developer Tool
Copy hive-site.xml from the machine on which the Data Integration Service runs to a Developer Tool client installation directory. hive-site.xml is located in the following directory:
<Informatica Installation Directory>/services/shared/hadoop/<Hadoop distribution version>/conf/
Copy hive-site.xml to the following location:
<Informatica Installation Directory>\clients\hadoop\<Hadoop_distribution_version>\conf
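For example, if the Data Integration Service runs on Linux and the client machine accepts SSH file transfers, you might stage the file with a command like the following. The host name and transfer mechanism are illustrative and depend on your environment:
scp "<Informatica Installation Directory>/services/shared/hadoop/<Hadoop distribution version>/conf/hive-site.xml" user@developer-client:/tmp/hive-site.xml
Then move the file into the <Informatica Installation Directory>\clients\hadoop\<Hadoop_distribution_version>\conf directory on the client machine.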
In krb5.ini, verify the value of the forwardable option to determine how to use the kinit command. If forwardable=true, use the kinit command with the -f option. If forwardable=false, or if the option is not specified, use the kinit command without the -f option.
Run the command from the command prompt of the machine on which the Developer tool runs to generate the Kerberos credentials file. For example, run the following command: kinit joe/domain12345@MY-REALM
Note: You can run the kinit utility from the following location: <Informatica Installation Directory>\clients\java\bin\kinit.exe
Launch the Developer tool and import the Hive, HBase, and complex file sources.
Create and Configure the Analyst Service
Add the following value to the JVM Command Line Options field:
-DINFA_HADOOP_DIST_DIR=<Informatica installation directory>/services/shared/hadoop/<hadoop_distribution>
CHAPTER 3
This chapter includes the following topics:
User Impersonation
User Impersonation
You can enable different users to run mappings in a Hadoop cluster that uses Kerberos authentication or
connect to big data sources and targets that use Kerberos authentication. To enable different users to run
mappings or connect to big data sources and targets, you must configure user impersonation.
You can configure user impersonation for the native or Hadoop environment.
Before you configure user impersonation, you must complete the following prerequisites:
Complete the prerequisite tasks for running mappings on a Hadoop cluster with Kerberos authentication.
If the Hadoop cluster uses MapR, create a proxy directory for the user who will impersonate other users.
If the Hadoop cluster does not use Kerberos authentication, you can specify a user name in the Hadoop
connection to enable the Data Integration Service to impersonate that user.
If the Hadoop cluster uses Kerberos authentication, you must specify a user name in the Hadoop connection.
Go to the following directory on the machine on which the Data Integration Service runs:
<Informatica installation directory>/services/shared/hadoop/mapr_<version>/conf
Verify the following details for the user that you want to impersonate with the Data Integration Service user:
The user has the same user ID and group ID on the machine on which the Data Integration Service runs and on the Hadoop cluster.
Create a file for the Data Integration Service user that impersonates other users. Run the following command:
touch <Informatica installation directory>/services/shared/hadoop/mapr_<version>/conf/proxy/<username>
For example, to create a file for the Data Integration Service user named user1 that is used to impersonate other users, run the following command:
touch $INFA_HOME/services/shared/hadoop/mapr_<version>/conf/proxy/user1
1. Enable the SPN of the Data Integration Service to impersonate another user named Bob to run Hadoop jobs.
2. Specify Bob as the user name for the Data Integration Service to impersonate in the Hadoop connection or Hive connection.
Note: If you create a Hadoop connection, you must use user impersonation.
The following sample code shows the proxy user properties that you configure in core-site.xml on the cluster:
<property>
<name>hadoop.proxyuser.bob.groups</name>
<value>group1,group2</value>
<description>Allow the superuser <DIS_user> to impersonate any members
of the group group1 and group2</description>
</property>
<property>
<name>hadoop.proxyuser.bob.hosts</name>
<value>host1,host2</value>
<description>The superuser can connect only from host1 and host2 to
impersonate a user</description>
</property>
4. Specify the URL for the Hadoop cluster in the Hive, HBase, or HDFS connection.
5. Configure the mapping impersonation property that enables user Bob to run the mapping in the native environment.
1. Click Edit to edit the Launch Jobs as Separate Processes property in the execution options for the Data Integration Service properties.
2. If you enable the property, specify the location of krb5.conf in the Java Virtual Machine (JVM) Options as a custom property in the Data Integration Service process. krb5.conf is located in the following directory: <Informatica Installation Directory>/services/shared/security.
If you disable the property, specify the location of krb5.conf in the Java Command Line Options property in the Advanced Properties of the Data Integration Service process.
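For example, the custom JVM option that points to krb5.conf typically uses the standard Java Kerberos system property. Confirm the exact property name against your Data Integration Service documentation, because this value is an assumption:
-Djava.security.krb5.conf=<Informatica Installation Directory>/services/shared/security/krb5.conf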
Step 4. Specify the URL for the Hadoop Cluster in the Connection
Properties
In the Administrator or Developer tool, specify the URL for the Hadoop cluster on which the Hive, HBase, or
HDFS source or target resides. Configure the Hive, HBase, or HDFS connection properties to specify the
URL for the Hadoop cluster.
In the Hive connection, configure the properties to access Hive as a source or a target.
In the HBase connection, configure the Kerberos authentication properties.
In the HDFS connection, configure the NameNode URI property.
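For example, a NameNode URI for a hypothetical cluster might look like the following. The host and port are illustrative:
hdfs://nameservice01.example.com:8020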
Launch the Developer tool and open the mapping that you want to run.
The mapping opens in the editor.
To enable another user to run the mapping, click Mapping Impersonation User Name and enter the
value in the following format:
<Hadoop service name>/<Hostname>@<YOUR-REALM>.
Where
Hadoop service name is the name of the Hadoop service on which the Hive, HBase, or HDFS source
or target resides.
Hostname is the name or IP address of the machine on which the Hadoop service runs. The
hostname is optional.
The following special characters can only be used as delimiters: '/' and '@'.
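For example, a mapping impersonation user name for a hypothetical Hive service might look like the following. The service name, host name, and realm are illustrative:
hive/hiveserver01.example.com@HADOOP-MIT-REALM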
CHAPTER 4
1. On every node in the Hadoop cluster, create an operating system user account.
For example, to create a user account named "blaze", run the following command on every node in the
cluster:
useradd blaze
2. Grant the user account the permissions that the Blaze engine requires on the Hadoop cluster. For example, the user account for the Blaze engine must be able to read from HDFS.
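For example, if the Blaze engine needs its own working directory in HDFS, you might create the directory and assign ownership with commands like the following. The path and group are illustrative:
hdfs dfs -mkdir -p /user/blaze
hdfs dfs -chown blaze:blaze /user/blaze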
In the connection properties, use the user account you created in step 1 for the Blaze Service User
Name.
Index
A
authentication
infrastructure security 10
Kerberos 10
authorization
HDFS permissions 11
infrastructure security 11
C
cloudera navigator
data management 12
cross-realm trust
Kerberos authentication 22
D
data management
cloudera navigator 12
Metadata Manager 12
data security
Dynamic Data Masking 12
Persistent Data Masking 12
Secure@Source 12
H
HDFS permissions
authorization 11
I
infrastructure security
authentication 10
K
Kerberos authentication
authenticating a Hive connection 26
authenticating an HBase connection 27
authenticating an HDFS connection 28
cross-realm trust 22
Hadoop warehouse permissions 16
impersonating another user 32
Informatica domain with Kerberos authentication 22
Informatica domain without Kerberos authentication 20
JCE Policy File 16
Kerberos security properties 16
mappings in a native environment 25, 26
metadata import 28
operating system profile names 21
overview 14
prerequisites 15
requirements 15
user impersonation 30
user impersonation in the native environment 33
M
Metadata Manager
data management 12
U
user impersonation
Hadoop environment 31
impersonating another user 32
user name 32