Clever use of Hive built-in functions for deduplicated counts over multiple fields
1-GROUP BY vs. DISTINCT
The golden rule: when aggregating over a large table, use GROUP BY instead of DISTINCT whenever possible!

This matters most when the data volume is very large: COUNT(DISTINCT ...) funnels every record into a single reducer for the final deduplication, which is the classic data-skew bottleneck. GROUP BY, by contrast, hashes records with the same key to the same reducer, so the deduplication work is spread across many reducers and runs much faster.
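As a minimal illustration of the two patterns in HiveQL (the `uid` column here is hypothetical; `log1` is the log table used below):

```sql
-- Skew-prone: all records funnel into the single reducer
-- that performs the DISTINCT deduplication.
SELECT COUNT(DISTINCT uid) FROM log1;

-- Equivalent GROUP BY rewrite: deduplication is distributed
-- across reducers first, then only the group keys are counted.
SELECT COUNT(1)
FROM (SELECT uid FROM log1 GROUP BY uid) t;
```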
The business requirement was to compute deduplicated counts over several fields of a single partition holding roughly ten billion records. Following the rule above, I wrote the following:
SET hive.map.aggr=TRUE;
SET hive.optimize.skewjoin = TRUE;
SET hive.groupby.skewindata=TRUE;
SET mapreduce.input.fileinputformat.split.minsize=256000000;
SET mapreduce.input.fileinputformat.split.maxsize=512000000;
SET mapreduce.input.fileinputformat.split.minsize.per.node=512000000;
SET mapreduce.input.fileinputformat.split.minsize.per.rack=512000000;
SET hive.hadoop.supports.splittable.combineinputformat=TRUE;
-- GROUP BY and input-split/compression optimization
CREATE TABLE IF NOT EXISTS statis (
  date STRING, accessn STRING, accessnum BIGINT,
  intercept STRING, interceptnum BIGINT,
  customn STRING, customnum BIGINT,
  ipn STRING, ipnum BIGINT,
  hostn STRING, hostnum BIGINT,
  uan STRING, uanum BIGINT
);

INSERT INTO TABLE statis
SELECT *
FROM (SELECT date, "访问:", COUNT(1) FROM log1 WHERE ldc = 'XXXX' GROUP BY date) access
JOIN (SELECT "拦截:", COUNT(1) FROM log2) intercept ON (1 = 1)
JOIN (SELECT "IP量:", COUNT(1) FROM (SELECT 1 FROM log1 WHERE ldc = 'XXXX' GROUP BY split(xff, ",")[0]) ip) ips ON (1 = 1)
JOIN (SELECT "域名:", COUNT(1) FROM (SELECT 1 FROM log1 WHERE ldc = 'XXXX' GROUP BY host) host) hosts ON (1 = 1)
JOIN (SELECT "UA量:", COUNT(1) FROM (SELECT 1 FROM log1 WHERE ldc = 'XXXX' GROUP BY ua) ua) uas ON (1 = 1);
I submitted it to the cluster, and it ran for two hours. Let me sit quietly for a moment...
I had already anticipated skew from unevenly distributed data and applied the GROUP BY optimizations up front, but the data volume was simply huge and it still took far too long. Then I suddenly remembered a neat trick...
2-Clever use of collect_set for deduplicated counting
Hive's built-in collect_set() function deduplicates the values returned by a query and gathers them into an array; it is usually combined with GROUP BY.
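To make that concrete, here is a minimal sketch (the `orders` table and its `user_id`/`city` columns are hypothetical):

```sql
-- collect_set(city) gathers each user's distinct cities into an array;
-- size() then turns that array into a deduplicated count.
SELECT user_id,
       collect_set(city)       AS distinct_cities,    -- e.g. ["BJ","SH"]
       size(collect_set(city)) AS distinct_city_cnt   -- e.g. 2
FROM orders
GROUP BY user_id;
```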
It occurred to me that combining size() with collect_set() would give deduplicated counts for several fields at once: the deduplicated sets are assembled on a single reducer and counted there, which avoids the repeated GROUP BY and JOIN stages and greatly improves execution time. The code:
INSERT INTO TABLE statis
SELECT *
FROM (SELECT date, "访问:", COUNT(1) FROM log1 WHERE ldc = 'XXXX' GROUP BY date) access
JOIN (SELECT "拦截:", COUNT(1) FROM log2) intercept ON (1 = 1)
JOIN (SELECT "用户:", size(collect_set(member)),
             "IP量:", size(collect_set(split(xff, ',')[0])),
             "域名:", size(collect_set(host)),
             "UA量:", size(collect_set(ua))
      FROM log1 WHERE ldc = 'XXXX') couns ON (1 = 1);
Tip: this method only works when the deduplicated result set is not too large; otherwise that single reducer will itself become a data-skew bottleneck.