Anyone familiar with the JobTracker knows that during job initialization, EagerTaskInitializationListener locks the JobInProgress and then runs initTasks (see the code for details). One of the steps there writes the job's initial data to HDFS and flushes it. Meanwhile, when the FairScheduler's update thread refreshes the resources of the pools, it holds the exclusive locks of the JobTracker and the FairScheduler and then computes each pool's resources; to calculate running_map/running_reduce it must also acquire the corresponding JobInProgress lock. Readers may wonder why I bring all this up. This is exactly where the problem lies.
When handling a dynamic-partition insert, Hive mainly goes through the following steps: TableScan -> FileSink -> MoveTask.
In the FileSink stage, records are processed one by one, and a record writer is opened for each partition directory. For every record, the dynamic partition column (the dt here) is examined: if dt is the same as in the previous record, writing continues into the currently open file; otherwise the current file is closed and a new file is opened under the new partition's directory (a simplified sketch of this writer logic follows the charts below). So when dt values are not contiguous and the number of records is large, a huge number of files is generated, which drives up the HDFS load. This matched our HDFS monitoring at the time:
[Chart: cluster load at the time]
[Chart: number of files generated at the time]
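To make the writer behavior concrete, here is a minimal Java sketch of the per-record FileSink logic described above. It is my own simplification with made-up paths and names, not the actual Hive FileSinkOperator:

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Simplified sketch of the FileSink behavior: a file stays open only while the
// incoming dt value is unchanged, so every switch of dt closes the current file
// and creates a new one under another partition directory.
public class DynamicPartitionSink {
    private String currentDt;
    private Writer writer;
    private int filesCreated;

    void process(String dt, String row) throws IOException {
        if (!dt.equals(currentDt)) {
            if (writer != null) {
                writer.close();  // close the block for the old partition dir
            }
            Path dir = Paths.get("warehouse", "t", "dt=" + dt);  // made-up layout
            Files.createDirectories(dir);
            writer = Files.newBufferedWriter(  // open a new file in the new dir
                    dir.resolve("part-" + filesCreated++), StandardCharsets.UTF_8);
            currentDt = dt;
        }
        writer.write(row);
        writer.write('\n');
    }

    void close() throws IOException {
        if (writer != null) {
            writer.close();
        }
    }
}
```

Note that filesCreated grows with every switch of dt, not with the number of distinct dt values; unsorted input with many dt switches is what blows up the file count.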
Now put the two halves together: with HDFS under this load, the initTasks write to HDFS stalls while the init thread holds the JobInProgress lock; the FairScheduler update thread then blocks waiting for that JobInProgress lock while holding the JobTracker lock; and every other operation that needs the JobTracker lock queues up behind it. The JobTracker hangs!
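To make the lock interplay concrete, here is a minimal, self-contained Java sketch of the three threads involved. It is a simplification under assumed lock granularity, not the actual Hadoop source; only the class and field names echo the real ones:

```java
import java.util.concurrent.TimeUnit;

// Minimal sketch (not Hadoop source) of the lock interplay described above.
public class JobTrackerHangSketch {
    static final Object jobTrackerLock = new Object();    // stands in for the JobTracker monitor
    static final Object fairSchedulerLock = new Object(); // stands in for the FairScheduler monitor
    static final Object jobInProgressLock = new Object(); // stands in for one JobInProgress monitor

    // Thread 1: EagerTaskInitializationListener's init thread. It locks the
    // JobInProgress and then performs a blocking HDFS write inside initTasks.
    static void initJob() {
        synchronized (jobInProgressLock) {
            writeAndFlushJobFilesToHdfs();  // slow when HDFS is overloaded
        }
    }

    // Thread 2: the FairScheduler update thread. It locks JobTracker and
    // FairScheduler first, then needs each JobInProgress lock to read
    // running_map/running_reduce.
    static void updatePools() {
        synchronized (jobTrackerLock) {
            synchronized (fairSchedulerLock) {
                synchronized (jobInProgressLock) {  // blocks behind thread 1
                    // compute each pool's resources here
                }
            }
        }
    }

    // Thread 3: any other request that needs the JobTracker monitor
    // (heartbeats, job submissions) now queues behind thread 2.
    static void heartbeat() {
        synchronized (jobTrackerLock) { /* never reached while thread 2 waits */ }
    }

    static void writeAndFlushJobFilesToHdfs() {
        try {
            TimeUnit.MINUTES.sleep(10);  // simulate an HDFS write stalled by load
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(JobTrackerHangSketch::initJob).start();
        Thread.sleep(100);  // let thread 1 grab the JobInProgress lock first
        new Thread(JobTrackerHangSketch::updatePools).start();
        Thread.sleep(100);
        new Thread(JobTrackerHangSketch::heartbeat).start();  // stuck: the JobTracker "hangs"
    }
}
```

Nothing here is deadlocked in the strict sense; everything resumes once the HDFS write finishes. But while it lasts, every caller that needs the JobTracker lock is frozen, which is exactly what a hung JobTracker looks like.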
How can this problem be solved? Add DISTRIBUTE BY dt to the query: the shuffle then sends all rows with the same dt to the same reducer, where they arrive grouped together, so the FileSink writes each partition contiguously and no longer produces a large number of small files.
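As a back-of-the-envelope illustration (a toy demo, not Hive code; the table and column names in the comment are made up), the following counts how many files the writer logic sketched above would create for the same records with and without grouping by dt:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// In HiveQL the fix is just (illustrative names):
//   INSERT OVERWRITE TABLE target PARTITION (dt)
//   SELECT col1, col2, dt FROM source DISTRIBUTE BY dt;
// With dt as the shuffle key, each reducer receives its rows grouped by dt,
// so the FileSink opens one file per partition instead of one per dt switch.
public class WriterChurnDemo {
    // A new file is created every time the incoming dt differs from the previous one.
    static int filesCreated(List<String> dtStream) {
        int files = 0;
        String current = null;
        for (String dt : dtStream) {
            if (!dt.equals(current)) {
                files++;
                current = dt;
            }
        }
        return files;
    }

    public static void main(String[] args) {
        List<String> unsorted = Arrays.asList("d1", "d2", "d1", "d3", "d2", "d1");
        List<String> grouped = new ArrayList<>(unsorted);
        Collections.sort(grouped);  // roughly what DISTRIBUTE BY dt gives each reducer

        System.out.println("without DISTRIBUTE BY: " + filesCreated(unsorted)); // 6 files
        System.out.println("with DISTRIBUTE BY:    " + filesCreated(grouped));  // 3 files
    }
}
```

Scale those six records up to hundreds of millions and the gap between the two numbers is exactly the file-count explosion in the chart above.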