1. Version to Install
spark-2.3.3-bin-hadoop2.7.tgz
Download (Baidu Netdisk):
Link: https://pan.baidu.com/s/1CwvZzQEeqE7uScblVL7zTg
Extraction code: oey0
2. Installing Spark
- Extract the installation package (a quick check of this step and the next follows the list)
tar -zxvf spark-2.3.3-bin-hadoop2.7.tgz -C /opt/software
Here -C specifies the directory to extract into.
- Set ownership
chown -R root /opt/software/spark-2.3.3-bin-hadoop2.7
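To confirm the two steps above, a quick sanity check can be run (a sketch; the paths match this guide's layout, adjust if yours differ):
ls /opt/software/spark-2.3.3-bin-hadoop2.7        # should list bin, sbin, conf, jars, examples ...
ls -ld /opt/software/spark-2.3.3-bin-hadoop2.7    # the owner should now be root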
3. Configuration Files
All configuration files are located under the Spark installation directory's conf folder, so change into that directory first.
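For example, with the install path used in this guide:
cd /opt/software/spark-2.3.3-bin-hadoop2.7/conf
ls    # slaves.template and spark-env.sh.template, edited below, live here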
- slaves
Copy the template: cp slaves.template slaves
Replace 'localhost' with the worker hostnames, one per line, so that the file contains:
slave1
slave2
- spark-env.sh
Copy the template: cp spark-env.sh.template spark-env.sh
Append the following lines at the end of the file:
export JAVA_HOME=/opt/software/jdk1.8.0_131
export SPARK_HOME=/opt/software/spark-2.3.3-bin-hadoop2.7
export SPARK_PID_DIR=${SPARK_HOME}/pids
export HADOOP_CONF_DIR=/opt/software/hadoop-2.9.2/etc/hadoop
- vim /etc/profile.d/spark.sh
① Add:
export SPARK_HOME=/opt/software/spark-2.3.3-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$SPARK_HOME/sbin:$PATH
② After configuring, copy everything to the other two nodes (a verification sketch follows this list):
scp -r /opt/software/spark-2.3.3-bin-hadoop2.7 root@slave1:/opt/software/spark-2.3.3-bin-hadoop2.7
scp -r /opt/software/spark-2.3.3-bin-hadoop2.7 root@slave2:/opt/software/spark-2.3.3-bin-hadoop2.7
scp /etc/profile.d/spark.sh root@slave1:/etc/profile.d/spark.sh
scp /etc/profile.d/spark.sh root@slave2:/etc/profile.d/spark.sh
③ Set ownership on the other two nodes as well (run the same chown on slave1 and slave2):
chown -R root /opt/software/spark-2.3.3-bin-hadoop2.7
④ Reboot: reboot
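To confirm the environment on each node, a quick check such as the following can be run (a sketch; the hostnames slave1/slave2 and paths follow this guide, adjust as needed). Sourcing the profile script also makes the new PATH available immediately, without waiting for the reboot:
source /etc/profile.d/spark.sh    # load SPARK_HOME and the new PATH in the current shell
spark-submit --version            # should report Spark version 2.3.3
ssh root@slave1 ls /opt/software/spark-2.3.3-bin-hadoop2.7/conf    # confirm the copy reached slave1
ssh root@slave2 ls /opt/software/spark-2.3.3-bin-hadoop2.7/conf    # and slave2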
4. Starting Spark
使用命令:/opt/software/spark-2.3.3-bin-hadoop2.7/sbin/start-all.sh
Note that this path must match wherever you installed Spark. Using the full sbin path also avoids accidentally invoking Hadoop's script of the same name, since Hadoop ships its own start-all.sh and both sbin directories may be on PATH.
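The matching stop script shuts the cluster down again, and the per-daemon log files (written by default under the installation's logs directory) are the first place to look if a Master or Worker fails to come up:
/opt/software/spark-2.3.3-bin-hadoop2.7/sbin/stop-all.sh    # stop the Master and all Workers
ls /opt/software/spark-2.3.3-bin-hadoop2.7/logs/            # Master and Worker log files land here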
5. Verifying Spark
- Method 1: run the jps command
Master node: the process list should include a Master process.
Worker nodes (slave1, slave2): the process list should include a Worker process.
- Method 2: open http://192.168.148.170:8080 in a browser
Note that the IP here is this guide's master node; replace it with your own master's IP.
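Beyond jps and the web UI, running one of the bundled example jobs confirms that the cluster actually accepts work. A minimal sketch, assuming the master runs on a host named master with the default port 7077 (replace the hostname with your master's name or IP; the examples jar path matches the 2.3.3 binary distribution but may differ for other builds):
/opt/software/spark-2.3.3-bin-hadoop2.7/bin/spark-submit \
  --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  /opt/software/spark-2.3.3-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.3.jar 100
A line like "Pi is roughly 3.14..." near the end of the output indicates the job completed successfully.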