[Screenshot: 1.png]
Checking further, we found that the recent waiting_maps count closely matched the size of the spike:
[Screenshot: 2.png]
We then used grace to locate the specific business Hive cleanup job:
[Screenshot: 3.png]
It turned out that Hive table A contained a large number of small files (the largest was only 16 MB). Table A is generated by an insert ... select from an external table B, and hive.merge.smallfiles.avgsize was left at its default of 16 MB, so the post-insert merge step only produced files up to about 16 MB. The fix was to raise hive.merge.smallfiles.avgsize to dfs.block.size, and to add the following settings to the jobs that query this table: set mapred.min.split.size = dfs.block.size * 2; set mapred.min.split.size.per.node = dfs.block.size * 2; set mapred.min.split.size.per.rack = dfs.block.size * 2. After re-running, the number of maps dropped dramatically while the running time stayed essentially the same.
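As a concrete illustration, here is a minimal sketch of those settings as Hive SET statements. Hive does not evaluate expressions like dfs.block.size * 2, so literal byte values are needed; the 128 MB block size (134217728 bytes) used below is an assumption for illustration and should be replaced with the cluster's actual dfs.block.size.

```sql
-- In the job that writes table A (the insert ... select from external table B):
-- raise the small-file merge target from the 16 MB default to one HDFS block,
-- so the post-insert merge produces block-sized files instead of 16 MB ones.
SET hive.merge.smallfiles.avgsize=134217728;    -- assumed dfs.block.size = 128 MB

-- In the jobs that read table A: set the minimum split to two blocks,
-- so the existing small files are combined into far fewer map tasks.
SET mapred.min.split.size=268435456;            -- dfs.block.size * 2
SET mapred.min.split.size.per.node=268435456;
SET mapred.min.split.size.per.rack=268435456;
```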
This article is from the blog "MIKE's old blog"; please keep this source when reposting: http://boylook.blog.51cto.com/7934327/1298651