Step 1: Prepare the ZooKeeper environment
(1) Download zookeeper-3.4.14.tar.gz, extract it, and rename zoo_sample.cfg under the conf folder to zoo.cfg (a minimal config is sketched after this step).
(2) Start ZooKeeper: open a cmd window in the zookeeper-3.4.14/bin folder and run zkServer.cmd.
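For reference, a minimal zoo.cfg for a local single-node setup only needs a handful of entries; the dataDir below is an illustrative Windows path, point it at any writable directory on your machine:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=C:/hnn/zookeeper/data
clientPort=2181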
Step 2: Prepare the Kafka environment
(1) Download kafka_2.11-2.3.1 and extract it. Edit the server.properties file in the config directory and set the log.dirs parameter, e.g. log.dirs=C:/hnn/kafka/kafka_2.11-2.3.1/logs
(2) Start Kafka: open a cmd window in the C:\hnn\kafka\kafka_2.11-2.3.1 directory and run .\bin\windows\kafka-server-start.bat .\config\server.properties
(3) Create a topic:
>kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sparkStreamingTest
On success it prints: Created topic sparkStreamingTest. (An optional sanity check is sketched after this step.)
(4) Open a producer window: kafka-console-producer.bat --broker-list localhost:9092 --topic sparkStreamingTest
(5) Open a consumer window: .\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic sparkStreamingTest
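At this point the Kafka setup can be checked without Spark. Listing the topics should show the one just created:

> kafka-topics.bat --list --zookeeper localhost:2181
(the output should include sparkStreamingTest)

Then typing a line such as hello kafka into the producer window should make the same line appear in the consumer window; if it does, the broker is working and you can move on to the Spark code.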
Step 3: Write the code
Add the dependencies:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
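If the project is built with sbt instead of Maven, the equivalent dependencies would look like the sketch below (assuming scalaVersion is set to a 2.11.x release):

// build.sbt
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "2.1.0",
  "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.1.0"
)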
The code:
package com.spark.self

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.storage.StorageLevel
import org.apache.log4j.{Level, Logger}
import kafka.serializer.StringDecoder

object WordCountSparkStreaming {
  val numThreads = 1
  // val topics = "test"
  val topics = "sparkStreamingTest"
  val zkQuorum = "localhost:2181"
  val group = "consumer1"
  val brokers = "localhost:9092"

  def main(args: Array[String]): Unit = {
    receiver
    // direct
  }

  // Receiver-based approach: consumes from Kafka through ZooKeeper (zkQuorum/group)
  def receiver() = {
    Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
    val conf = new SparkConf().setAppName("kafka test").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))
    // checkpoint directory is required by updateStateByKey below
    ssc.checkpoint("/out")
    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
    val updateFunc = (curVal: Seq[Int], preVal: Option[Int]) => {
      // add up the counts from the current batch
      val total = curVal.sum
      // previous running total; defaults to 0 for a key seen for the first time
      val previous = preVal.getOrElse(0)
      // Some wraps the new running total kept as state for the next batch
      Some(total + previous)
    }
    val words = lines.flatMap(_.split(" ")).map(x => (x, 1))
    words.reduceByKey(_ + _).updateStateByKey(updateFunc).print()
    ssc.start()
    ssc.awaitTermination()
  }

  // Direct approach: reads from the brokers directly, without a receiver or ZooKeeper offsets
  def direct() = {
    Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
    val conf = new SparkConf().setMaster("local[2]").setAppName("kafka test")
    val ssc = new StreamingContext(conf, Seconds(10))
    val topicsSet = topics.split(",").toSet
    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicsSet)
    val lines = messages.map(_._2)
    val words = lines.flatMap(_.split(" ")).map(x => (x, 1))
    words.reduceByKey(_ + _).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
Step 4: Start the Spark Streaming program
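Since the code sets setMaster("local[2]"), the simplest way is to run the main method of WordCountSparkStreaming directly from the IDE. If you package the project first, a spark-submit command along these lines should also work (the jar name is hypothetical, and it needs to be an assembly/fat jar so the Kafka integration classes are included):

spark-submit --class com.spark.self.WordCountSparkStreaming --master local[2] target\sparkstreaming-demo-assembly.jar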
Step 5: Produce data, as shown below:
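For example, type a few whitespace-separated words into the producer window:

hello spark
hello kafka
hello spark streaming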
The console then displays a result like the following:
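With the example input above and the receiver-based job running, each 10-second batch prints the accumulated word counts, roughly like this (the timestamp is illustrative and the ordering may differ):

-------------------------------------------
Time: 1621234560000 ms
-------------------------------------------
(hello,3)
(spark,2)
(kafka,1)
(streaming,1)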
Summary:
1. The following two lines set the log level, so the console is not flooded with Spark's own INFO output:
Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
2. The code below (the updateFunc used in receiver()) is what accumulates counts across batches: updateStateByKey(updateFunc) adds each 10-second batch's counts to the running total. Without it, every batch only reports the words produced in that 10-second window and nothing carries over; you can comment it out and compare. Note that updateStateByKey also requires a checkpoint directory, set via ssc.checkpoint(path); otherwise the job fails at startup with a checkpoint-related error.
val updateFunc = (curVal: Seq[Int], preVal: Option[Int]) => {
  // add up the counts from the current batch
  val total = curVal.sum
  // previous running total; defaults to 0 for a key seen for the first time
  val previous = preVal.getOrElse(0)
  // Some wraps the new running total kept as state for the next batch
  Some(total + previous)
}