A Spark problem under the Hadoop 2.2.0 HA configuration


scala> val rdd1 = sc.textFile("hdfs://mycluster/spark/spark02/directory/")
14/07/19 21:15:23 INFO MemoryStore: ensureFreeSpace(138763) called with curMem=0, maxMem=309225062
14/07/19 21:15:23 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 135.5 KB, free 294.8 MB)
rdd1: org.apache.spark.rdd.RDD[String] = MappedRDD[1] at textFile at <console>:12

scala> rdd1.toDebugString
java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:231)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:221)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:172)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
    at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
    at org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$debugString$1(RDD.scala:1194)
    at org.apache.spark.rdd.RDD.toDebugString(RDD.scala:1197)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:15)
    at $iwC$$iwC$$iwC.<init>(<console>:20)
    at $iwC$$iwC.<init>(<console>:22)
    at $iwC.<init>(<console>:24)
    at <init>(<console>:26)
    at .<init>(<console>:30)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:788)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1056)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:614)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:645)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:609)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:796)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:841)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:753)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:601)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:608)
    at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:611)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:936)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:884)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:982)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: mycluster
    ... 66 more


scala>



My Hadoop 2.2.0 cluster is configured for HA. When I use mycluster as the HDFS path here, Spark reports that the host cannot be found, and at first I did not know why.

By checking the NameNode states, I found the active NameNode and changed the path to use its hostname, master:8020:

val rdd1 = sc.textFile("hdfs://master:8020/spark/spark02/week2/directory/")

With this change, the rdd1.toDebugString command runs normally.
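
To check which NameNode is currently active, the hdfs haadmin command can be used. A minimal sketch, assuming the HA NameNode IDs are nn1 and nn2 (these IDs are hypothetical; use the values from dfs.ha.namenodes.mycluster in your hdfs-site.xml):

# Query the HA state of each configured NameNode; one should report "active"
# and the other "standby". nn1/nn2 are the NameNode IDs defined in hdfs-site.xml.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

The NameNode that reports active is the one whose hostname can be written directly into the hdfs:// URI as a workaround.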


Looking at the official Spark documentation, I found the following:

Inheriting Cluster Configuration

If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that should be included on Spark’s classpath:

  • hdfs-site.xml, which provides default behaviors for the HDFS client.
  • core-site.xml, which sets the default filesystem name.

The location of these configuration files varies across CDH and HDP versions, but a common location is inside of /etc/hadoop/conf. Some tools, such as Cloudera Manager, create configurations on-the-fly, but offer a mechanism to download copies of them.

To make these files visible to Spark, set HADOOP_CONF_DIR in $SPARK_HOME/spark-env.sh to a location containing the configuration files.

So I added the following to spark-env.sh:

export HADOOP_CONF_DIR=/hadoop2.2.0/etc/hadoop

After that, running the code above again executed normally.


So when Hadoop HA is used, the configuration defines dfs.nameservices=mycluster as a logical name that stands for the cluster's NameNodes. If the Spark environment cannot find the Hadoop configuration files, it cannot resolve the name mycluster and treats it as an ordinary hostname, which is why the UnknownHostException appears.
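
For reference, the HA-related entries in hdfs-site.xml that define this logical name look roughly like the fragment below. This is only a sketch: the NameNode IDs nn1/nn2 and the standby hostname master2 are assumptions for illustration, while mycluster and master:8020 come from the article above.

<!-- Logical nameservice that clients use instead of a single NameNode host -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<!-- NameNode IDs within the nameservice (nn1/nn2 assumed here) -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<!-- RPC address of each NameNode; master2 is a hypothetical standby host -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>master:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>master2:8020</value>
</property>
<!-- Tells the HDFS client how to find the active NameNode for mycluster -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

Only when Spark can read this file (via HADOOP_CONF_DIR) does the HDFS client know that mycluster is a nameservice rather than a hostname.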

In a non-HA setup there is no need to add the HADOOP_CONF_DIR variable for this purpose, because there is only one NameNode and its hostname can be written directly in the URI.

