Test file and environment
The test file is stored locally at D://tmp/spark.txt. Spark runs in local mode; Spark version 3.2.0, Scala version 2.12, developed in the IntelliJ IDEA environment. The file's contents:
hello
world
java
world
java
java
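The report only names IDEA, Spark 3.2.0, and Scala 2.12, not a build tool; if the project were built with sbt, a minimal build definition consistent with those versions could look like the following sketch (the exact Scala patch version is an assumption):

```scala
// build.sbt (hypothetical; the report only states IDEA, Spark 3.2.0, Scala 2.12)
scalaVersion := "2.12.15"  // any 2.12.x works with Spark 3.2.0
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.2.0"
```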
Experiment code
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object GroupBy {
  def main(args: Array[String]): Unit = {
    // Create the Spark configuration for local mode
    val sparkConf: SparkConf = new SparkConf().setMaster("local").setAppName("GroupBy")
    // Create the SparkContext
    val sc = new SparkContext(sparkConf)
    // Read the local file into an RDD, one element per line
    val rdd: RDD[String] = sc.textFile("D://tmp/spark.txt")
    // Map each line to a (word, 1) pair, e.g. (hello,1)
    val rdd2: RDD[(String, Int)] = rdd.map(v => {
      val arr: Array[String] = v.split("\t")
      (arr(0), 1)
    })
    // Print the map results
    rdd2.foreach(v => println(v))
    // Group rdd2 by key
    val rdd3: RDD[(String, Iterable[(String, Int)])] = rdd2.groupBy(v => v._1)
    // Count the size of each group and print the final result
    rdd3.map(v => (v._1, v._2.size)).foreach(v => println(v))
    // Stop the Spark context
    sc.stop()
  }
}
Experiment results
Output of the map stage
(hello,1)
(world,1)
(java,1)
(world,1)
(java,1)
(java,1)
Output of the groupBy count
(hello,1)
(java,3)
(world,2)
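The groupBy-then-size logic can be reproduced on plain Scala collections, which makes it easy to check the counts above without a Spark runtime. The sketch below mirrors the contents of spark.txt as a local list:

```scala
// Local-collections analogue of the RDD pipeline above (no Spark needed).
// The word list mirrors the contents of spark.txt.
val lines = List("hello", "world", "java", "world", "java", "java")

// map stage: each word becomes a (word, 1) pair, like rdd2
val pairs = lines.map(w => (w, 1))

// group by the key, like rdd3: each word maps to its list of pairs
val grouped = pairs.groupBy(_._1)

// the size of each group is the word count
val counts = grouped.map { case (k, vs) => (k, vs.size) }

println(counts)
```

In Spark itself, `rdd.map(w => (w, 1)).reduceByKey(_ + _)` is generally preferred over groupBy for counting, since it combines values within each partition before the shuffle instead of moving every (word, 1) pair across the network.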