Quickly set up a Kerberos-authenticated HDFS environment

This article walks through setting up a single-node HDFS environment on one server with a single-node KDC for Kerberos authentication: installing Kerberos, configuring kdc.conf and krb5.conf, and installing and configuring Hadoop, covering the krb5kdc service, keytab generation and usage, and the key Hadoop configuration steps.


1) Set up a single-node HDFS server

2) Single-node KDC Kerberos authentication

My server: 192.168.1.166

1. Install Kerberos

1.1 Run: yum -y install krb5-libs krb5-server krb5-workstation

1.2 Edit the hosts file (vim /etc/hosts) and add:
192.168.1.166 myli
192.168.1.166 kerberos.example.com
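
An optional sanity check (not part of the original steps) that both names now resolve to this server:

getent hosts myli                  # should show 192.168.1.166
getent hosts kerberos.example.com  # should show 192.168.1.166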

1.3 The KDC server involves three configuration files:

/etc/krb5.conf
/var/kerberos/krb5kdc/kdc.conf
/var/kerberos/krb5kdc/kadm5.acl

krb5.conf

includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = EXAMPLE.COM
# default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM

kdc.conf

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 EXAMPLE.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  max_renewable_life = 7d
  max_life = 1d
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

To add and delete principals in the Kerberos database without accessing the KDC console directly, tell the Kerberos administration server which principals are allowed to perform which operations. This is done by editing /var/kerberos/krb5kdc/kadm5.acl; the ACL (access control list) lets you specify privileges precisely.

$ cat /var/kerberos/krb5kdc/kadm5.acl
  */admin@EXAMPLE.COM     *
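
The single line above grants every principal with an /admin instance full privileges. If you ever need finer-grained control, the ACL supports per-privilege flags; a hypothetical example (the cleaner/clean principal is made up for illustration):

admin/admin@EXAMPLE.COM    *
cleaner/clean@EXAMPLE.COM  li

Here * means all privileges and li means list and inquire only.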

1.4 Create the KDC database: kdb5_util create -r EXAMPLE.COM -s  # in another terminal, run cat /dev/sda > /dev/urandom to feed the entropy pool and speed up key generation.
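
If the create succeeds, the KDC database files should appear under /var/kerberos/krb5kdc (assuming the default paths from kdc.conf above):

ls /var/kerberos/krb5kdc/
# expect files such as: kadm5.acl  kdc.conf  principal  principal.kadm5  principal.kadm5.lock  principal.ok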

1.5 Create the admin principal: kadmin.local -q "addprinc admin/admin"  # administrator; you will be prompted to set a password

1.6 Enable and start the services:

 chkconfig --level 35 krb5kdc on
 chkconfig --level 35 kadmin on
 service krb5kdc start
 service kadmin start
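
A quick check that the KDC and admin server are actually listening (assuming the default ports 88 and 749; use netstat instead if ss is unavailable):

ss -tulnp | grep ':88 '   # krb5kdc on 88/tcp and 88/udp
ss -tlnp  | grep ':749 '  # kadmind on 749/tcp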

1.7 Create principals and generate keytab files

kadmin.local -q 'addprinc -randkey hdfs/myli@EXAMPLE.COM' # create a principal
kadmin.local -q 'addprinc -randkey HTTP/myli@EXAMPLE.COM'
kadmin.local -q 'xst -k hdfs.keytab hdfs/myli@EXAMPLE.COM' # export a keytab file
kadmin.local -q 'xst -k HTTP.keytab HTTP/myli@EXAMPLE.COM'

klist -kt hdfs.keytab # list the principals stored in the keytab
kinit -kt hdfs.keytab hdfs/myli@EXAMPLE.COM  # log in as the given principal
klist  # show the current ticket cache
kinit -R # renew the ticket
kdestroy  # destroy the ticket cache (log out)

Put the generated HTTP.keytab and hdfs.keytab files in a directory of your choosing (Hadoop will be pointed at them later). I put mine in /home/hdfs.

2. Install Hadoop

2.1 Add a user: useradd hdfs

2.2 Copy the generated HTTP.keytab and hdfs.keytab into that directory and fix ownership:

cp hdfs.keytab /home/hdfs/
cp HTTP.keytab /home/hdfs/
chown hdfs:hdfs /home/hdfs/*.keytab
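
Before moving on to Hadoop, it is worth confirming that the hdfs user can read the keytab and obtain a ticket (a quick check using the paths above):

su - hdfs
klist -kt /home/hdfs/hdfs.keytab                        # principals stored in the keytab
kinit -kt /home/hdfs/hdfs.keytab hdfs/myli@EXAMPLE.COM
klist                                                   # should show a ticket for hdfs/myli@EXAMPLE.COM
kdestroy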

2.3 Install and run the following in order:

yum -y install java-1.8.0-openjdk-devel java
yum -y groupinstall 'Development Tools'  # needed to compile jsvc

su - hdfs
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
wget https://archive.apache.org/dist/commons/daemon/binaries/commons-daemon-1.0.15-bin.tar.gz
wget https://archive.apache.org/dist/commons/daemon/source/commons-daemon-1.0.15-src.tar.gz

tar xf hadoop-2.7.1.tar.gz
tar xf commons-daemon-1.0.15-bin.tar.gz
tar xf commons-daemon-1.0.15-src.tar.gz
cd commons-daemon-1.0.15-src/src/native/unix/
./configure --with-java=/usr/lib/jvm/java-openjdk
make
cp jsvc ~/hadoop-2.7.1/libexec/
cd
rm ~/hadoop-2.7.1/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar
cp commons-daemon-1.0.15/commons-daemon-1.0.15.jar ~/hadoop-2.7.1/share/hadoop/hdfs/lib/

cd hadoop-2.7.1

# etc/hadoop/hadoop-env.sh
sed -i 's/JAVA_HOME=.*/JAVA_HOME=\/usr\/lib\/jvm\/java-openjdk/g' etc/hadoop/hadoop-env.sh
sed -i 's/#.*JSVC_HOME=.*/export JSVC_HOME=\/home\/hdfs\/hadoop-2.7.1\/libexec/g' etc/hadoop/hadoop-env.sh
sed -i 's/HADOOP_SECURE_DN_USER=.*/HADOOP_SECURE_DN_USER=hdfs/g' etc/hadoop/hadoop-env.sh
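
To confirm the sed edits landed, grep the file; the relevant lines should now read roughly as follows (paths as set above):

grep -E 'JAVA_HOME=|JSVC_HOME=|HADOOP_SECURE_DN_USER=' etc/hadoop/hadoop-env.sh
# export JAVA_HOME=/usr/lib/jvm/java-openjdk
# export JSVC_HOME=/home/hdfs/hadoop-2.7.1/libexec
# export HADOOP_SECURE_DN_USER=hdfs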

2.4 Edit the Hadoop configuration files.
Change into the config directory: cd hadoop-2.7.1/etc/hadoop

Edit core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.1.166:8020</value>
    </property>
    <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
    </property>
    <property>
        <name>hadoop.security.authorization</name>
        <value>true</value>
    </property>
</configuration>

Edit hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir.perm</name>
        <value>700</value>
    </property>
    <property>
        <name>dfs.namenode.keytab.file</name>
        <value>/home/hdfs/hdfs.keytab</value>
    </property>
    <property>
        <name>dfs.namenode.kerberos.principal</name>
        <value>hdfs/myli@EXAMPLE.COM</value>
    </property>
    <property>
        <name>dfs.namenode.kerberos.https.principal</name>
        <value>HTTP/myli@EXAMPLE.COM</value>
    </property>

    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:1004</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:1006</value>
    </property>
    <property>
        <name>dfs.datanode.keytab.file</name>
        <value>/home/hdfs/hdfs.keytab</value>
    </property>
    <property>
        <name>dfs.datanode.kerberos.principal</name>
        <value>hdfs/myli@EXAMPLE.COM</value>
    </property>
    <property>
        <name>dfs.datanode.kerberos.https.principal</name>
        <value>HTTP/myli@EXAMPLE.COM</value>
    </property>

    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.principal</name>
        <value>HTTP/myli@EXAMPLE.COM</value>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.keytab</name>
        <value>/home/hdfs/HTTP.keytab</value>
    </property>
    <property>
        <name>dfs.encrypt.data.transfer</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>

2.5 Start Hadoop

In bin, run: ./hdfs namenode -format

In sbin, run: ./start-dfs.sh

Then, as root, run:

service iptables stop
cd /home/hdfs/hadoop-2.7.1/sbin
./hadoop-daemon.sh start datanode # the secure DataNode is started as root, from sbin

Run jps and confirm there are three processes: Jps, NameNode, and one process with no name (the secure DataNode started via jsvc shows up without a class name).
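
Another way to confirm the secure DataNode came up is to check, as root, the privileged ports configured in hdfs-site.xml above:

ss -tlnp | grep -E ':(1004|1006) '  # jsvc should be listening on 1004 (data) and 1006 (http)
ss -tlnp | grep ':8020 '            # NameNode RPC port from core-site.xml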

2.6 In a browser, visit http://192.168.1.166:50070/

2.7 You must authenticate to Kerberos before hdfs commands will work:

kinit -kt hdfs.keytab hdfs/myli@EXAMPLE.COM # log in as the hdfs principal
# then run the following in bin:
./hdfs dfs -mkdir /upload  # create a directory
./hdfs dfs -ls /upload # list files under the directory
./hdfs dfs -put /home/hdfs/222.txt /upload # upload a file to /upload

Problems encountered:
1. Running ./hdfs namenode -format multiple times leads to mismatched clusterIDs. Inspect them with:

cat /tmp/hadoop-hdfs/dfs/data/current/VERSION
cat /tmp/hadoop-root/dfs/name/current/VERSION

Solution: per the usual advice online, delete the generated DataNode data directory and restart so it is regenerated, as sketched below.
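
A minimal sketch of that fix, assuming the default /tmp storage paths shown above (this wipes the DataNode's local data, which is fine on a throwaway test setup):

# stop the secure DataNode as root, then stop dfs as hdfs
/home/hdfs/hadoop-2.7.1/sbin/hadoop-daemon.sh stop datanode
/home/hdfs/hadoop-2.7.1/sbin/stop-dfs.sh
# remove the stale DataNode storage so it is recreated with the new clusterID
rm -rf /tmp/hadoop-hdfs/dfs/data
# then restart dfs and the datanode as in section 2.5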
 

2. Running kinit -R fails with:
kinit: KDC can't fulfill requested option while renewing credentials

This error typically means the current ticket is not renewable, or the principal's maximum renewable lifetime is zero.
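
One commonly cited fix is to give both the krbtgt principal and your principal a non-zero maximum renewable lifetime and then request a fresh renewable ticket; a sketch of that approach (not verified against this exact setup):

kadmin.local -q 'modprinc -maxrenewlife 7days krbtgt/EXAMPLE.COM@EXAMPLE.COM'
kadmin.local -q 'modprinc -maxrenewlife 7days hdfs/myli@EXAMPLE.COM'
kinit -r 7d -kt /home/hdfs/hdfs.keytab hdfs/myli@EXAMPLE.COM  # request a renewable ticket
kinit -R                                                      # renewal should now work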

3. File ownership/permissions: the hdfs user could not use a keytab owned by root (Login failure for user: hdfs/myli@EXAMPLE from keytab /home/hdfs/hdfs.keytab javax.security.auth.login.LoginException: Unable to obtain password from user).
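
That error usually means the process cannot read the keytab (or the principal name does not match what the keytab contains). A quick way to rule out the permission side, assuming the paths used here:

ls -l /home/hdfs/*.keytab                      # should be owned by hdfs, not root
chown hdfs:hdfs /home/hdfs/hdfs.keytab /home/hdfs/HTTP.keytab
chmod 400 /home/hdfs/*.keytab                  # readable only by the owning user
klist -kt /home/hdfs/hdfs.keytab               # confirm hdfs/myli@EXAMPLE.COM is listed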

4. Pitfalls when uninstalling Kerberos
Uninstall procedure (do not run this unless you must; if you do, open several spare terminal sessions first):

1. Stop the KDC services
systemctl stop krb5kdc
systemctl stop kadmin

2. Uninstall the packages (on all nodes)
rpm -e krb5-devel-1.15.1-18.el7.x86_64 --nodeps
rpm -e krb5-libs-1.15.1-18.el7.x86_64 --nodeps
rpm -e krb5-workstation-1.15.1-18.el7.x86_64 --nodeps
rpm -e libkadm5-1.15.1-18.el7.x86_64 --nodeps
# run the following only on the KDC server
rpm -e krb5-server-1.15.1-18.el7.x86_64 --nodeps

3. Delete the database
rm -rf /var/kerberos/krb5kdc/principal*

And then you discover that your server can no longer be reached:

libgssapi_krb5.so.2: cannot open shared object file: No such file or directory

The krb5-libs rpm was removed, which breaks ssh access to the machine.

Solution:
Get the krb5-libs-1.15.1-50.el7.x86_64.rpm file onto the server,

then run: rpm -ivh krb5-libs-1.15.1-50.el7.x86_64.rpm

With that, the server was rescued!

Common commands:

./hdfs dfs -mkdir /upload
./hdfs dfs -ls /upload
./hdfs dfs -put /home/hdfs/222.txt /upload
./hdfs dfs -rm -r /upload

bin:
./hdfs namenode -format

sbin:
./hadoop-daemon.sh stop datanode
./stop-dfs.sh
# to start (run the datanode start as root)
./start-dfs.sh
./hadoop-daemon.sh start datanode
