Spark Streaming with Kafka as a Data Source
Since we want to create a Kafka DStream, we use createStream under KafkaUtils. Ctrl-click into it first to inspect the signature, then fill in the parameters.
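Ctrl-clicking into KafkaUtils (the receiver-based connector in org.apache.spark.streaming.kafka) shows an overload roughly like the sketch below; the exact default storage level may vary by version. Note that the returned DStream carries (key, value) pairs, which is why the word-count code below reads the second element of each tuple:

def createStream(
    ssc: StreamingContext,
    zkQuorum: String,         // ZooKeeper connection string, e.g. "chun1:2181"
    groupId: String,          // consumer group id
    topics: Map[String, Int], // topic -> number of receiver threads
    storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2
  ): ReceiverInputDStream[(String, String)]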
package date_10_16_SparkStreaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object kafkaSource {
  def main(args: Array[String]): Unit = {
    // Use Spark Streaming to do a word count over Kafka data
    // Configuration object
    val conf = new SparkConf().setMaster("local[*]").setAppName("wordcount")

    // Environment object for real-time processing:
    // StreamingContext takes two arguments, a SparkConf and a batch interval
    val streamingContext = new StreamingContext(conf, Seconds(5))

    // Pull data from Kafka: ZooKeeper quorum, consumer group id,
    // and a map of topic -> number of receiver threads
    val kafkaStream = KafkaUtils.createStream(streamingContext, "chun1:2181", "chun", Map("chun" -> 3))

    // Each element is a (key, value) pair; split the value into words (flatten),
    // pair each word with 1, and sum the counts per word
    val wordToSumDstream = kafkaStream.flatMap(_._2.split(" ")).map((_, 1)).reduceByKey(_ + _)
    wordToSumDstream.print()

    // Collection must not stop here, i.e. the StreamingContext must not end.
    // Roughly speaking, this starts the receiver
    streamingContext.start()

    // The Driver waits on the receiver: it does not stop until the receiver stops
    streamingContext.awaitTermination()
  }
}
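For the import above to resolve, the project needs the Kafka 0.8 connector on the classpath. A build.sbt line along these lines should work (the coordinates and versions here are assumptions: with Spark 1.x the artifact is spark-streaming-kafka, with Spark 2.x it is spark-streaming-kafka-0-8, and the version must match your Spark version):

// build.sbt — assumed coordinates, adjust to your Spark/Scala versions
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka" % "1.6.3"
// or, for Spark 2.x:
// libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.4.8"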
Start Kafka and feed in some data.
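If the topic chun does not exist yet, it can be created first. With an older, ZooKeeper-based Kafka the command looks roughly like this (a sketch; the host, replication factor, and partition count are assumptions mirroring the code above):

kafka-topics.sh --create --zookeeper chun1:2181 --replication-factor 1 --partitions 3 --topic chun

Then start a console producer and type in messages. Note that --broker-list expects a broker address (by default port 9092), not the ZooKeeper address: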
kafka-console-producer.sh --broker-list chun1:9092 --topic chun
a a a a
a
a
a
a
a
a a a
a a a
Check the results in IDEA.
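Each 5-second batch prints a block like the sketch below. The timestamp and count are illustrative, not recorded output; the actual numbers depend on which input lines land in which batch:

-------------------------------------------
Time: 1602833550000 ms
-------------------------------------------
(a,12)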