I. Environment Preparation
1. Spark official website
Apache Spark™ - Unified Engine for large-scale data analytics
2. Download address
3. Official documentation
Overview - Spark 3.2.0 Documentation
4. Passwordless SSH setup
Getting started with big data: passwordless SSH login (qq262593421's blog, CSDN)
5. Install Scala 2.12
Installing Scala 2.12.11 on Linux (qq262593421's blog, CSDN)
II. Extract and Install
1. Download Spark
Note: Spark 2.4.0 requires a Scala 2.11 environment and Spark 3.0.0 requires Scala 2.12; the steps here apply to both 2.4.0 and 3.0.0.
wget -P /usr/local/hadoop/ https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
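Since the Scala requirement differs between the two versions (see the note above), it can be worth confirming the Scala installation from section I.5 before going further; a quick check, assuming scala is already on the PATH:
scala -version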
2. Extract the archive
tar zxpf /usr/local/hadoop/spark-2.4.0-bin-hadoop2.7.tgz -C /usr/local/hadoop
3. Create a symlink
ln -s /usr/local/hadoop/spark-2.4.0-bin-hadoop2.7 /usr/local/hadoop/spark
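As a quick sanity check of the new layout (assuming java is already available on this host), the version banner can be printed through the symlinked path:
/usr/local/hadoop/spark/bin/spark-submit --version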
III. Edit the Configuration Files
1. slaves configuration
echo 'hadoop003
hadoop004
hadoop005
hadoop006' > /usr/local/hadoop/spark/conf/slaves
2. spark-env.sh configuration
vim /usr/local/hadoop/spark/conf/spark-env.sh
export JAVA_HOME=/usr/java/jdk1.8
export SCALA_HOME=/usr/local/hadoop/scala
export MYSQL_HOME=/usr/local/mysql
export CLASSPATH=.:/usr/java/jdk1.8/lib/dt.jar:/usr/java/jdk1.8/lib/tools.jar
export SPARK_HOME=/usr/local/hadoop/spark
export HADOOP_HOME=/usr/local/hadoop/hadoop
export HBASE_HOME=/usr/local/hadoop/hbase
export GEOMESA_HBASE_HOME=/usr/local/hadoop/geomesa-hbase
export ZOO_HOME=/usr/local/hadoop/zookeeper
export SPARK_WORKER_MEMORY=16G
# export SPARK_MASTER_IP=hadoop001
export HADOOP_CONF_DIR=/usr/local/hadoop/hadoop/etc/hadoop/
export YARN_CONF_DIR=/usr/local/hadoop/hadoop/etc/hadoop/
export SPARK_LOCAL_DIRS=/home/spark/tmp
export SPARK_HISTORY_OPTS="
-Dspark.history.ui.port=18080
-Dspark.history.fs.logDirectory=hdfs://ns1/spark/directory
-Dspark.history.retainedApplications=30"
export SPARK_MASTER_WEBUI_PORT=8989
export SPARK_DAEMON_JAVA_OPTS="
-Dspark.deploy.recoveryMode=ZOOKEEPER
-Dspark.deploy.zookeeper.url=hadoop001,hadoop002,hadoop003
-Dspark.deploy.zookeeper.dir=/spark"
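SPARK_LOCAL_DIRS (and spark.local.dir below) point at /home/spark/tmp, which Spark does not create on its own; assuming the same layout on every node, create it and make sure the user running Spark can write to it:
mkdir -p /home/spark/tmp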
3. metrics.properties configuration
vim /usr/local/hadoop/spark/conf/metrics.properties
*.sink.csv.directory=/home/spark/tmp/csv/
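Note that *.sink.csv.directory only takes effect if the CSV sink class is also enabled; a minimal sketch based on the commented-out example in metrics.properties.template:
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
Also create the directory itself:
mkdir -p /home/spark/tmp/csv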
4. spark-defaults.conf configuration
vim /usr/local/hadoop/spark/conf/spark-defaults.conf
spark.local.dir /home/spark/tmp
spark.eventLog.enabled true
spark.eventLog.dir hdfs://ns1/spark/directory
spark.yarn.jars hdfs://ns1/spark/jars/*.jar
spark.serializer org.apache.spark.serializer.KryoSerializer
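With the configuration done, every host in conf/slaves (and the standby master hadoop002) needs the same installation. A minimal sync sketch over the passwordless SSH from section I.4; the hostnames are assumed to match the slaves file, adjust as needed:
for host in hadoop002 hadoop003 hadoop004 hadoop005 hadoop006; do
  rsync -a /usr/local/hadoop/spark-2.4.0-bin-hadoop2.7 ${host}:/usr/local/hadoop/
  ssh ${host} "ln -sfn /usr/local/hadoop/spark-2.4.0-bin-hadoop2.7 /usr/local/hadoop/spark && mkdir -p /home/spark/tmp"
done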
IV. Environment Variables
1. Configure the environment variables
echo '
## spark config
export SPARK_HOME=/usr/local/hadoop/spark
export PATH=$PATH:$SPARK_HOME/bin' >> /etc/profile
2. Apply the changes immediately
source /etc/profile
echo $SPARK_HOME
V. Upload the Spark Jars to HDFS
1. Create the Spark jar directory on HDFS
hadoop fs -mkdir -p /spark/jars
2. Upload the Spark jars to HDFS
hadoop fs -put $SPARK_HOME/jars/* /spark/jars/
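The event log directory configured in spark-defaults.conf (hdfs://ns1/spark/directory) is not created automatically either; create it now, and optionally confirm the jar upload:
hadoop fs -mkdir -p /spark/directory
hadoop fs -ls /spark/jars | head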
VI. Start Spark
1. Start the Spark master (hadoop001)
$SPARK_HOME/sbin/start-all.sh
2. Start the standby Spark master (hadoop002)
$SPARK_HOME/sbin/start-master.sh
3. Start the history server on the master
$SPARK_HOME/sbin/start-history-server.sh
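To confirm the daemons are up, running jps on each node should show (roughly) Master and HistoryServer on hadoop001, Master on hadoop002, and Worker on hadoop003 through hadoop006:
jps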
VII. Test the Spark Environment
1. The spark-shell command
spark-shell
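For a non-interactive smoke test, a one-liner can be piped into spark-shell; a sketch, with a local master assumed and sc being the SparkContext the shell creates:
echo 'println(sc.parallelize(1 to 1000).sum())' | spark-shell --master local[*]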
2. Local mode test
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master local[*] \
${SPARK_HOME}/examples/jars/spark-examples_*.jar \
10
3. Test with a specified master
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://hadoop001:7077 \
${SPARK_HOME}/examples/jars/spark-examples_*.jar \
10
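Since ZooKeeper-based master recovery is configured in spark-env.sh, both masters can be listed in the master URL so the client fails over automatically; for example:
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://hadoop001:7077,hadoop002:7077 \
${SPARK_HOME}/examples/jars/spark-examples_*.jar \
10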
4. Run in Spark on YARN mode
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
${SPARK_HOME}/examples/jars/spark-examples_*.jar \
10
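In cluster mode the driver runs inside YARN, so the "Pi is roughly ..." output ends up in the application logs rather than on the client console; it can be retrieved with (YARN log aggregation assumed to be enabled):
yarn logs -applicationId <applicationId>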
5. Kill a Spark application
yarn application -kill <applicationId>
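The application ID used above can be looked up with:
yarn application -list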
6. Master Web UI
http://hadoop001:8989/
http://hadoop002:8989/
7. History Server Web UI
http://hadoop001:18080/