1. Code structure for multi-table join optimization:
select .. from JOINTABLES (A,B,C) WITH KEYS (A.key, B.key, C.key) where ....
When multiple tables are joined on the same key, Hive optimizes the joins into a single MapReduce job.
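The single-job optimization can be illustrated in standard HiveQL (table and column names below are hypothetical):

```sql
-- Both joins use the same key (a.key), so Hive compiles this into ONE
-- MapReduce job instead of two.
SELECT a.val, b.val, c.val
FROM a
JOIN b ON a.key = b.key
JOIN c ON a.key = c.key;

-- If the second join used a different key (e.g. b.other_key = c.key),
-- Hive would need a separate MapReduce job for that join.
```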
2. LEFT SEMI JOIN efficiently implements the semantics of IN/EXISTS subqueries:
SELECT a.key,a.value FROM a WHERE a.key in (SELECT b.key FROM b);
(1) Before LEFT SEMI JOIN was available, Hive expressed the same semantics as:
SELECT t1.key, t1.value
FROM a t1
LEFT OUTER JOIN (SELECT DISTINCT key FROM b) t2 ON t1.key = t2.key
WHERE t2.key IS NOT NULL;
(2) This can be replaced with a LEFT SEMI JOIN:
SELECT a.key, a.value FROM a LEFT SEMI JOIN b ON (a.key = b.key);
This rewrite saves at least one MapReduce stage. Note that LEFT SEMI JOIN conditions must be equality conditions.
3. Pre-sorting reduces the data scanned by map join and group by (HIVE-1194)
(1) Pre-sort important report tables by enabling the hive.enforce.sorting option.
(2) If the tables in a MapJoin are sorted, the join no longer needs to scan the entire table, which greatly speeds it up. Enable this with hive.optimize.bucketmapjoin.sortedmerge=true for a significant performance gain.
set hive.mapjoin.cache.numrows=10000000;
set hive.mapjoin.size.key=100000;
INSERT OVERWRITE TABLE pv_users
SELECT /*+ MAPJOIN(pv) */ pv.pageid, u.age
FROM page_view pv JOIN user u ON (pv.userid = u.userid);
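For the sorted-merge bucket map join to apply, both sides must be bucketed and sorted on the join key with matching bucket counts. A minimal sketch, assuming the page_view/user schema above (the bucket count 32 is illustrative):

```sql
-- Both tables bucketed AND sorted on the join key, same number of buckets.
CREATE TABLE page_view (pageid INT, userid INT)
CLUSTERED BY (userid) SORTED BY (userid) INTO 32 BUCKETS;

CREATE TABLE user (userid INT, age INT)
CLUSTERED BY (userid) SORTED BY (userid) INTO 32 BUCKETS;

-- Enforce bucketing/sorting at write time so the data really is sorted.
set hive.enforce.bucketing=true;
set hive.enforce.sorting=true;
```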
(3) Sorted GROUP BY (HIVE-931)
A GROUP BY on an already-sorted column does not need to submit an extra MapReduce stage, which improves execution efficiency.
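A sketch of the sorted GROUP BY case, assuming the table is bucketed and sorted on the grouping key; hive.map.groupby.sorted is the relevant option in later Hive versions, so verify it against your version:

```sql
set hive.map.groupby.sorted=true;

-- pv_log is assumed CLUSTERED BY (uid) SORTED BY (uid); the aggregation
-- can then be completed map-side, with no extra MR stage for the group by.
SELECT uid, count(1) FROM pv_log GROUP BY uid;
```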
4. One-pass PV/UV computation framework
(1) Submit multiple MR jobs in parallel
hive.exec.parallel[=false]
hive.exec.parallel.thread.number[=8]
(2) One-pass computation framework combined with multi-group-by
With a small amount of data, multiple UNIONs are optimized into one job;
conversely, when the computation is heavy, enable parallel MR job submission to spread the load;
use two GROUP BY stages to solve the data-skew problem of COUNT(DISTINCT).
set hive.exec.parallel=true;
set hive.exec.parallel.thread.number=2;
FROM (
  SELECT yw_type,
         SUM(CASE WHEN log_type='pv' THEN ct END) AS pv,
         SUM(CASE WHEN log_type='pv' THEN 1 END) AS uv,
         SUM(CASE WHEN log_type='click' THEN ct END) AS ipv,
         SUM(CASE WHEN log_type='click' THEN 1 END) AS ipv_uv
  FROM (
    SELECT yw_type, log_type, uid, count(1) AS ct
    FROM (
      SELECT 'total' yw_type, 'pv' log_type, uid FROM pv_log
      UNION ALL
      SELECT 'cat' yw_type, 'click' log_type, uid FROM click_log
    ) t
    GROUP BY yw_type, log_type, uid
  ) t
  GROUP BY yw_type
) t
INSERT OVERWRITE TABLE tmp_1 SELECT pv, uv, ipv, ipv_uv WHERE yw_type='total'
INSERT OVERWRITE TABLE tmp_2 SELECT pv, uv, ipv, ipv_uv WHERE yw_type='cat';
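The two-stage GROUP BY trick shown inside that query can be sketched in isolation (reusing the pv_log/uid names above):

```sql
-- Skew-prone version: a single reducer must see every uid.
-- SELECT COUNT(DISTINCT uid) FROM pv_log;

-- Two-stage rewrite: stage 1 deduplicates uid across many reducers,
-- stage 2 only counts the already-deduplicated rows.
SELECT COUNT(1) FROM (
  SELECT uid FROM pv_log GROUP BY uid
) t;
```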
5. Controlling the number of maps and reduces in Hive
(1) Merge small files
set mapred.max.split.size=100000000;
set mapred.min.split.size.per.node=100000000;
set mapred.min.split.size.per.rack=100000000;
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
Setting hive.input.format as above enables small-file merging. Files larger than the 128 MB block size are split at 128 MB; pieces between 100 MB and 128 MB are split at 100 MB; everything under 100 MB (small files plus the remainders of split large files) is merged, ultimately producing 74 splits in this example.
(2) Increase the number of tasks for time-consuming jobs
set mapred.reduce.tasks=10;
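Note that mapred.reduce.tasks only controls the reducer count; to raise the map count for a compute-heavy query, shrink the maximum split size instead (the values below are illustrative):

```sql
-- More mappers: smaller splits => more map tasks.
set mapred.max.split.size=32000000;   -- ~32 MB per split

-- Reducer count is set directly:
set mapred.reduce.tasks=10;
```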
6. Use random numbers to reduce data skew
Joins between large tables easily skew on NULL keys:
SELECT a.uid
FROM big_table_a a
LEFT OUTER JOIN big_table_b b
  ON b.uid = CASE WHEN a.uid IS NULL OR length(a.uid) = 0
                  THEN concat('rd_sid', rand())
                  ELSE a.uid END;