How to control the number of maps in Hadoop

Reprinted from: "How to control the number of maps in Hadoop". Hadoop provides the mapred.map.tasks parameter for setting the number of map tasks, and we can use it to try to control the map count. However, setting the number of maps this way does not always take effect, because mapred.map.tasks is only a hint to Hadoop; the final number of maps also depends on other factors.

For convenience, first define a few terms:

block_size: the HDFS block size, 64 MB by default; it can be set with the dfs.block.size parameter.
total_size: the total size of the input files.
input_file_num: the number of input files.

(1) Default number of maps. If nothing is set, the default number of maps is determined by block_size: default_num = total_size / block_size.

(2) Expected number. mapred.map.tasks sets the number of map tasks the programmer expects, but it takes effect only when it is greater than default_num: goal_num = mapred.map.tasks.

(3) Size of data per task. mapred.min.split.size sets the amount of data each task processes, but it takes effect only when it is larger than block_size: split_size = max(mapred.min.split.size, block_size); split_num = total_size / split_size.

(4) Computed number of maps: compute_map_num = min(split_num, max(default_num, goal_num)).

Besides these settings, MapReduce follows one more principle: the data processed by a single map cannot span files, so every input file produces at least one map, that is, map_num >= input_file_num. The final number of maps is therefore: final_map_num = max(compute_map_num, input_file_num).

From the analysis above, controlling the number of maps can be summarized as follows:

(1) To increase the number of maps, set mapred.map.tasks to a large value.
(2) To reduce the number of maps, set mapred.min.split.size to a large value.
(3) If the input contains many small files and you still want to reduce the number of maps, first merge the small files into large files, then apply rule (2).
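The rules above can be sketched as a small calculation. The function below is only an illustrative model of the article's formulas, not Hadoop's actual split logic (the real FileInputFormat computation is more involved); the names mirror the article's terms, and the default values are assumptions.

```python
def final_map_num(total_size, input_file_num,
                  block_size=64 * 2**20,  # dfs.block.size default (64 MB)
                  goal_num=1,             # mapred.map.tasks
                  min_split_size=1):      # mapred.min.split.size (bytes)
    """Model of the article's map-count rules; all sizes are in bytes."""
    # (1) the default count comes from the block size
    default_num = max(1, total_size // block_size)
    # (2) mapred.map.tasks only matters when it exceeds default_num
    expected_num = max(default_num, goal_num)
    # (3) mapred.min.split.size only matters when it exceeds block_size
    split_size = max(min_split_size, block_size)
    split_num = max(1, total_size // split_size)
    # (4) combine, then respect the one-file-per-map lower bound
    compute_map_num = min(split_num, expected_num)
    return max(compute_map_num, input_file_num)
```

For example, 100 small files totalling 10 MB still cost 100 maps under this model, while merging them into a single file drops the count to 1, which is exactly rule (3).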

Original article: "How to control the number of maps in Hadoop"; thanks to the original author for sharing.
