Using Spark SQL Built-in Functions (Java and Scala Versions)
Using agg (group by date, deduplicate identical ids, and count the distinct ids within each date)
Scala version:
package com.bynear.Scala

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.{SparkConf, SparkContext}

object SparkSQLAgg {
  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local").setAppName("SparkSQLAgg")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc) // build the SQL context

    // To use Spark SQL built-in functions, you must import the implicit conversions under SQLContext
    import sqlContext.implicits._

    val userAccessLog = Array(
      "2016-3-27,1122",
      "2016-3-27,1122",
      "2016-3-27,1123",
      "2016-3-27,1124",
      "2016-3-27,1124",
      "2016-3-28,1122"
    )
    val userAccessRDDLog = sc.parallelize(userAccessLog, 5)
    val userAccessLogRowRDD = userAccessRDDLog.map { log =>
      Row(log.split(",")(0), log.split(",")(1).toInt)
    }
    val structType = StructType(Array(
      StructField("date", StringType, true),
      StructField("userid", IntegerType, true)))
    val userAccessLogRowDF = sqlContext.createDataFrame(userAccessLogRowRDD, structType)

    // group by date, then count the distinct userids within each date
    userAccessLogRowDF.groupBy("date")
      .agg('date, countDistinct('userid))
      .map { row => Row(row(1), row(2)) }
      .collect()
      .foreach(println)
  }
}
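For reference, with the sample data above, 2016-3-27 has three distinct ids (1122, 1123, 1124) and 2016-3-28 has one (1122), so the job should print roughly the following (Row's toString formatting; the exact output may vary by Spark version):

[2016-3-27,3]
[2016-3-28,1]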
Java version:

package com.bynear.spark_sql;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// You must add the static import below, otherwise countDistinct cannot be called directly.
// I wasted a lot of time here and only found this pitfall after checking the official docs.
import static org.apache.spark.sql.functions.*;

public class agg {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("agg").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);
        List<String> list = Arrays.asList(
                "2016-3-27,1122",
                "2016-3-27,1122",
                "2016-3-27,1123",
                "2016-3-27,1124",
                "2016-3-27,1124",
                "2016-3-28,1122");
        JavaRDD<String> userAccessRDDLog = sc.parallelize(list, 5);
        JavaRDD<Row> mapROWRDD = userAccessRDDLog.map(new Function<String, Row>() {
            @Override
            public Row call(String line) throws Exception {
                String[] lineSplit = line.split(",");
                return RowFactory.create(lineSplit[0], Integer.valueOf(lineSplit[1]));
            }
        });
        ArrayList<StructField> fields = new ArrayList<StructField>();
        fields.add(DataTypes.createStructField("date", DataTypes.StringType, true));
        fields.add(DataTypes.createStructField("userid", DataTypes.IntegerType, true));
        StructType structType = DataTypes.createStructType(fields);
        DataFrame userAccessLogRowDF = sqlContext.createDataFrame(mapROWRDD, structType);
        // group by date and count the distinct userids within each date
        userAccessLogRowDF.groupBy("date").agg(countDistinct("userid")).show();
        sc.close();
    }
}
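With the same sample data, show() should report 3 distinct ids for 2016-3-27 and 1 for 2016-3-28; the exact column header that show() prints for the countDistinct aggregate differs slightly across Spark 1.x versions.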
Notes:

In the Scala version, if you want to use the built-in functions, you must import:
import org.apache.spark.sql.functions._
In the Java version, if you want to use the built-in functions, you must import:
import static org.apache.spark.sql.functions.*;
It must be a static import, otherwise you cannot call the methods (such as countDistinct) directly!
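As a minimal sketch (assuming the userAccessLogRowDF DataFrame from the Scala example above, and a hypothetical temp table name user_access_log), the same aggregation can also be written in plain SQL; COUNT(DISTINCT ...) is handled by the SQL parser itself, so the functions import is only needed when you use the DataFrame DSL:

// Alternative sketch: register the DataFrame as a temp table and aggregate via SQL.
// No import of org.apache.spark.sql.functions._ is required for this style.
userAccessLogRowDF.registerTempTable("user_access_log")
sqlContext.sql(
  "SELECT `date`, COUNT(DISTINCT userid) AS distinct_users " +
  "FROM user_access_log GROUP BY `date`"
).show()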