HDFS 3.4.1: Integrating Kerberos for Account Authentication

A study of integrating HDFS with Kerberos for account authentication

Preparing the Environment

The software packages involved in this installation are:

  • jdk1.8.0_161.tgz
  • hadoop-3.4.1.tar.gz
  • zookeeper-3.8.4.tar.gz
  • The Kerberos service is already deployed, and all 11 servers involved have the Kerberos client installed; for deployment steps, see "Deploying a Kerberos Cluster on CentOS 7.3".

Deployment and Integration

HDFS Service Role Planning

Note: ZooKeeper does not have Kerberos authentication enabled.

IP Service Hostname Principal
192.168.1.1 NameNode hadoop3test1-01.test.com nn/hadoop3test1-01.test.com
192.168.1.2 NameNode hadoop3test1-02.test.com nn/hadoop3test1-02.test.com
192.168.1.3 JournalNode hadoop3test1-03.test.com jn/hadoop3test1-03.test.com
192.168.1.4 JournalNode hadoop3test1-04.test.com jn/hadoop3test1-04.test.com
192.168.1.5 JournalNode hadoop3test1-05.test.com jn/hadoop3test1-05.test.com
192.168.1.6 DataNode hadoop3test1-06.test.com dn/hadoop3test1-06.test.com
192.168.1.7 DataNode hadoop3test1-07.test.com dn/hadoop3test1-07.test.com
192.168.1.8 DataNode hadoop3test1-08.test.com dn/hadoop3test1-08.test.com
192.168.1.9 Router hadoop3test1-09.test.com router/hadoop3test1-09.test.com
192.168.1.10 Router hadoop3test1-10.test.com router/hadoop3test1-10.test.com
192.168.1.11 Router hadoop3test1-11.test.com router/hadoop3test1-11.test.com
192.168.1.1 HTTP hadoop3test1-01.test.com HTTP/hadoop3test1-01.test.com
192.168.1.2 HTTP hadoop3test1-02.test.com HTTP/hadoop3test1-02.test.com
192.168.1.3 HTTP hadoop3test1-03.test.com HTTP/hadoop3test1-03.test.com
192.168.1.4 HTTP hadoop3test1-04.test.com HTTP/hadoop3test1-04.test.com
192.168.1.5 HTTP hadoop3test1-05.test.com HTTP/hadoop3test1-05.test.com
192.168.1.6 HTTP hadoop3test1-06.test.com HTTP/hadoop3test1-06.test.com
192.168.1.7 HTTP hadoop3test1-07.test.com HTTP/hadoop3test1-07.test.com
192.168.1.8 HTTP hadoop3test1-08.test.com HTTP/hadoop3test1-08.test.com
192.168.1.9 HTTP hadoop3test1-09.test.com HTTP/hadoop3test1-09.test.com
192.168.1.10 HTTP hadoop3test1-10.test.com HTTP/hadoop3test1-10.test.com
192.168.1.11 HTTP hadoop3test1-11.test.com HTTP/hadoop3test1-11.test.com

Creating the Kerberos Principals

Create a Kerberos principal for each Hadoop service.

[Tip] -randkey assigns a random key, while -pw sets the password explicitly; with -pw you must type the password on the command line.
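
For comparison, a minimal -randkey example (no password is set, so the principal is usable only via a keytab afterwards):

kadmin.local -q "addprinc -randkey nn/hadoop3test1-01.test.com"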

# Run the following commands on the Kerberos server
kadmin.local -q "addprinc -pw 123456 nn/hadoop3test1-01.test.com";
kadmin.local -q "addprinc -pw 123456 nn/hadoop3test1-02.test.com";
kadmin.local -q "addprinc -pw 123456 dn/hadoop3test1-06.test.com";
kadmin.local -q "addprinc -pw 123456 dn/hadoop3test1-07.test.com";
kadmin.local -q "addprinc -pw 123456 dn/hadoop3test1-08.test.com";
kadmin.local -q "addprinc -pw 123456 jn/hadoop3test1-03.test.com";
kadmin.local -q "addprinc -pw 123456 jn/hadoop3test1-04.test.com";
kadmin.local -q "addprinc -pw 123456 jn/hadoop3test1-05.test.com";
kadmin.local -q "addprinc -pw 123456 router/hadoop3test1-09.test.com";
kadmin.local -q "addprinc -pw 123456 router/hadoop3test1-10.test.com";
kadmin.local -q "addprinc -pw 123456 router/hadoop3test1-11.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-01.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-02.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-03.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-04.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-05.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-06.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-07.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-08.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-09.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-10.test.com";
kadmin.local -q "addprinc -pw 123456 HTTP/hadoop3test1-11.test.com";

# List the principals in the database; the newly created ones should appear
kadmin.local -q  "listprincs"

# todo
# kadmin.local -q 'modprinc -maxrenewlife 360day +allow_renewable [email protected]'

Create a Directory to Store the keytab Files

# Run on every node
mkdir -p /opt/keytabs;

ssh 192.168.1.1 "mkdir -p /opt/keytabs";
ssh 192.168.1.2 "mkdir -p /opt/keytabs";
ssh 192.168.1.3 "mkdir -p /opt/keytabs";
ssh 192.168.1.4 "mkdir -p /opt/keytabs";
ssh 192.168.1.5 "mkdir -p /opt/keytabs";
ssh 192.168.1.6 "mkdir -p /opt/keytabs";
ssh 192.168.1.7 "mkdir -p /opt/keytabs";
ssh 192.168.1.8 "mkdir -p /opt/keytabs";
ssh 192.168.1.9 "mkdir -p /opt/keytabs";
ssh 192.168.1.10 "mkdir -p /opt/keytabs";
ssh 192.168.1.11 "mkdir -p /opt/keytabs";
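
The eleven per-host commands above (and the scp/chmod blocks later) can be collapsed into a loop; a minimal sketch, assuming passwordless SSH as root to all nodes:

for i in $(seq 1 11); do
  ssh 192.168.1.$i "mkdir -p /opt/keytabs"
done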

Exporting keytab Files from Kerberos

Write the Hadoop service principals into keytab files, then distribute the keytab files to the nodes as needed.

There are two approaches:

  1. Group by service: write principals of the same service type into one keytab file each, e.g. nn.service.keytab, jn.service.keytab, dn.service.keytab, router.service.keytab, and spnego.service.keytab.
  2. Write all HDFS-related service principals into a single keytab file, e.g. hdfs.service.keytab plus spnego.service.keytab.

When generating the keytab files, pass the -norandkey option; by default ktadd re-randomizes the principal's key, so a later kinit with the original password would fail with a password error. The commands below follow approach 1; a sketch of approach 2 appears right after this note.
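
If you take approach 2 instead, a minimal sketch of the combined keytab (ktadd accepts several principals per invocation; the remaining hosts are elided):

# approach 2 (sketch): merge the HDFS service principals into one keytab
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/hdfs.service.keytab \
  nn/[email protected] \
  jn/[email protected] \
  dn/[email protected]"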

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/nn.service.keytab nn/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/nn.service.keytab nn/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/jn.service.keytab jn/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/jn.service.keytab jn/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/jn.service.keytab jn/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/dn.service.keytab dn/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/dn.service.keytab dn/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/dn.service.keytab dn/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/router.service.keytab router/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/router.service.keytab router/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/router.service.keytab router/[email protected]";
kadmin.local -q "ktadd -norandkey -kt /opt/keytabs/spnego.service.keytab HTTP/[email protected]";

Adjusting keytab File Permissions

For quick verification, all keytab files are simply set to mode 775. In practice, set each file's owner and group per service instead, e.g. chown -R bigdata:bigdata /opt/keytabs/hdfs.service.keytab; since keytabs hold credentials, a tighter mode such as 400 owned by the service user is preferable.

chmod 775 /opt/keytabs/*

# Collect the keytabs into a local staging directory, then fan them out to all nodes below
scp [email protected]:/opt/keytabs/* /root/gd/keytabs/

scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;
scp /root/gd/keytabs/* [email protected]:/opt/keytabs/;

ssh 192.168.1.1 "chmod 775 /opt/keytabs/*";
ssh 192.168.1.2 "chmod 775 /opt/keytabs/*";	
ssh 192.168.1.3 "chmod 775 /opt/keytabs/*";
ssh 192.168.1.4 "chmod 775 /opt/keytabs/*";
ssh 192.168.1.5 "chmod 775 /opt/keytabs/*";
ssh 192.168.1.6 "chmod 775 /opt/keytabs/*";
ssh 192.168.1.7 "chmod 775 /opt/keytabs/*";
ssh 192.168.1.8 "chmod 775 /opt/keytabs/*";
ssh 192.168.1.9 "chmod 775 /opt/keytabs/*";
ssh 192.168.1.10 "chmod 775 /opt/keytabs/*";
ssh 192.168.1.11 "chmod 775 /opt/keytabs/*";

klist -kt lists the principals contained in a given keytab file (read permission on the file is required), e.g.: klist -kt /opt/keytabs/spnego.service.keytab
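
Beyond listing entries, it is worth confirming that a keytab can actually obtain a ticket; a quick check, using the nn keytab as an example:

# fetch a TGT from the keytab, inspect it, then discard it
kinit -kt /opt/keytabs/nn.service.keytab nn/[email protected]
klist
kdestroy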

Secure DataNode

Once Kerberos is enabled, DataNodes must also run in secure mode. There are two ways to run a Secure DataNode:

  1. Start the DataNode as root: it first binds the privileged ports, then switches to the user specified by HDFS_DATANODE_SECURE_USER and continues running as that user. Because the DataNode data-transfer protocol does not use the Hadoop RPC framework, DataNodes must authenticate via the privileged ports specified by dfs.datanode.address and dfs.datanode.http.address. This authentication rests on the assumption that an attacker cannot gain root privileges on a DataNode host. HDFS_DATANODE_SECURE_USER and JSVC_HOME must be set as environment variables at startup (in hadoop-env.sh; see the sketch after this item). In short: set up the JSVC environment, set HDFS_DATANODE_SECURE_USER, configure dfs.datanode.address and dfs.datanode.http.address to privileged ports, and start the DataNode as root.

    The jsvc program (from Apache Commons Daemon) is what performs this privileged start and user switch.

    Privileged ports are Linux ports below 1024; they are reserved for well-known services and are therefore also called well-known ports.
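
    A minimal hadoop-env.sh sketch for this root/JSVC mode (the service user and JSVC path below are illustrative assumptions):

    # hadoop-env.sh -- root + JSVC secure DataNode mode
    export HDFS_DATANODE_SECURE_USER=hdfs   # assumed unprivileged service account
    export JSVC_HOME=/opt/jsvc              # assumed jsvc (Apache Commons Daemon) install path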

  2. Starting with version 2.6.0, SASL can be used to authenticate the data-transfer protocol. This requires enabling HTTPS on the servers: in hdfs-site.xml, set dfs.data.transfer.protection to authentication, configure dfs.datanode.address to a non-privileged port, and set dfs.http.policy to HTTPS_ONLY.

    Only HDFS clients of version 2.6.0 or later can connect to a DataNode that uses SASL to authenticate the data-transfer protocol. If both kinds of Secure DataNode coexist among the DNs, an SASL-enabled HDFS client can talk to both kinds.

    dfs.data.transfer.protection

    Setting this property enables SASL authentication for the data-transfer protocol. If enabled, then when the DataNode process starts, dfs.datanode.address must use a non-privileged port, dfs.http.policy must be set to HTTPS_ONLY, and the HDFS_DATANODE_SECURE_USER environment variable must be undefined. Its possible values (a configuration sketch follows the list):

    1. authentication: authentication only;
    2. integrity: integrity checking in addition to authentication;
    3. privacy: data encryption in addition to integrity.

    The property is unspecified by default.
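
    A minimal hdfs-site.xml sketch of this SASL variant (the properties go inside <configuration>; the port number is illustrative, any port >= 1024 works):

    <property>
      <name>dfs.data.transfer.protection</name>
      <value>authentication</value>
    </property>
    <property>
      <!-- must be a non-privileged port in SASL mode -->
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:10019</value>
    </property>
    <property>
      <name>dfs.http.policy</name>
      <value>HTTPS_ONLY</value>
    </property>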

Configuring HTTPS Access

Steps 1-10 below are executed as the root user; for convenience, 123456 is used wherever a password is required.

  1. Install openssl on every machine:

    yum install openssl
    
  2. Generate the X.509 key and certificate

    openssl req -new -x509 -keyout bd_ca_key -out bd_ca_cert -days 36500 -subj '/C=CN/ST=jiangsu/L=nanjing/O=test/OU=test/CN=test'
    

    Parameter explanations:

    • -new: generate a new certificate request.
    • -x509: output a self-signed certificate directly rather than a certificate signing request (CSR).
    • -keyout bd_ca_key: write the generated private key to the file bd_ca_key.
    • -out bd_ca_cert: write the generated certificate to the file bd_ca_cert.
    • -days 36500: set the certificate validity to 36500 days (about 100 years).
    • -subj "/CN=your_common_name": set the certificate subject, in the form /type0=value0/type1=value1..., overriding the values that would otherwise be prompted for. An empty value uses the default from the config file; a value of . leaves the field blank. Recognized types (see man req) include: C = country, ST = state/province, L = locality/city, O = organization, OU = organizational unit, CN = common name (often your own domain).
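
    To sanity-check the key and certificate just generated, e.g.:

    # print the subject and validity window of the new certificate
    openssl x509 -in bd_ca_cert -noout -subject -dates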
  3. Distribute the key and certificate

    # Create a directory for the SSL certificate material
    ssh 192.168.1.1 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.2 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.3 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.4 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.5 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.6 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.7 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.8 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.9 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.10 "mkdir -p /opt/kerberos_https";
    ssh 192.168.1.11 "mkdir -p /opt/kerberos_https";
    
    # Distribute the key and certificate
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    scp /root/gd/kerberos_https/* [email protected]:/opt/kerberos_https/;
    
    
  4. Generate a keystore file on each node

    # Run on hadoop3test1-01.test.com
    keytool -keystore /opt/kerberos_https/keystore -alias hadoop3test1-01.test.com -genkey -keyalg RSA -dname "CN=hadoop3test1-01.test.com, OU=dev, O=dev, L=nanjing, ST=jiangsu, C=CN";

    # Run on hadoop3test1-02.test.com
    keytool -keystore /opt/kerberos_https/keystore -alias hadoop3test1-02.test.com -genkey -keyalg RSA -dname "CN=hadoop3test1-02.test.com, OU=dev, O=dev, L=nanjing, ST=jiangsu, C=CN";

    # Run on hadoop3test1-03.test.com
    keytool -keystore /opt/kerberos_https/keystore -alias hadoop3test1-03.test.com -genkey -keyalg RSA -dname "CN=hadoop3test1-03.test.com, OU=dev, O=dev, L=nanjing, ST=jiangsu, C=CN";

    # Run on hadoop3test1-04.test.com
    keytool -keystore /opt/kerberos_https/keystore -alias hadoop3test1-04.test.com -genkey -keyalg RSA -dname "CN=hadoop3test1-04.test.com, OU=dev, O=dev, L=nanjing, ST=jiangsu, C=CN";

    # Run on hadoop3test1-05.test.com
    keytool -keystore /opt/kerberos_https/keystore -alias hadoop3test1-05.test.com -genkey -keyalg RSA -dname "CN=hadoop3test1-05.test.com, OU=dev, O=dev, L=nanjing, ST=jiangsu, C=CN";
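
    The same command runs on each remaining node (hadoop3test1-06 through hadoop3test1-11) with its own hostname; a loop sketch that drives all eleven nodes over SSH, assuming passwordless root SSH and the uniform 123456 password from the note above:

    # generate a per-node keystore on every node (sketch)
    for i in $(seq 1 11); do
      host=$(printf 'hadoop3test1-%02d.test.com' "$i")
      ssh "192.168.1.$i" "keytool -keystore /opt/kerberos_https/keystore \
        -alias $host -genkey -keyalg RSA -storepass 123456 -keypass 123456 \
        -dname \"CN=$host, OU=dev, O=dev, L=nanjing, ST=jiangsu, C=CN\""
    done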