Hadoop job optimization parameters: a summary and the principles behind them

Original: http://www.blogjava.net/wangxinsh55/archive/2014/11/19/420297.html

http://www.linuxidc.com/Linux/2012-01/51615.htm

1 Map-side tuning parameters

1.1 Internal principles of MapTask operation

When a map task runs and begins producing intermediate data, the intermediate results are not written straight to disk. The process in between is more involved: partial results are cached in a memory buffer, and some pre-sorting is done in that buffer to improve the overall performance of the map. Each map task has a memory buffer (MapOutputBuffer) into which it writes part of the results it produces. The buffer is 100 MB by default, but its size can be adjusted through a parameter when the job is submitted: io.sort.mb. When a map produces a very large amount of data, increasing io.sort.mb reduces the number of times the map task must spill to disk over the course of the computation; if the map task's bottleneck is the disk, this adjustment can improve map performance considerably.

[Figure: the in-memory structure the map uses for sort and spill]
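As a rough illustration, the sketch below shows one way this buffer could be enlarged when submitting a job through the newer org.apache.hadoop.mapreduce Job API. The 200 MB value and the class name are arbitrary examples, and io.sort.mb is the old-style property name used throughout this article; newer Hadoop releases renamed it (roughly mapreduce.task.io.sort.mb), so check the name that matches your cluster version.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SortBufferTuning {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Raise the map-side sort buffer from the 100 MB default to 200 MB so a
        // map that produces a lot of output spills to disk less often.
        // "io.sort.mb" is the old-style property name used in this article.
        conf.setInt("io.sort.mb", 200);

        Job job = Job.getInstance(conf, "sort-buffer-tuning-example");
        // ... set the mapper, reducer, input and output paths as usual, then:
        // job.waitForCompletion(true);
    }
}
```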

While the map is running, it keeps writing its results into this buffer, but the buffer cannot necessarily hold all of the map's output. Once the output exceeds a certain threshold, the map must write the buffered data to disk; in MapReduce this is called a spill. The map does not wait until the buffer is completely full, because spilling only after the buffer fills would force the map's compute side to stall while waiting for buffer space to be freed. Instead, the spill starts once the buffer reaches a certain fill level (for example 80%). This threshold is also controlled by a job configuration parameter, io.sort.spill.percent, whose default is 0.80 (80%). It likewise affects how often the map task spills and therefore how often it reads and writes the disk, but outside of special cases it usually does not need manual tuning; adjusting io.sort.mb is more convenient.

When all of a map task's computation is finished, any output it produced exists as one or more spill files. Before the map exits normally, these spill files must be merged into a single file, so there is a merge step at the end of the map. A parameter controls this step's behavior: io.sort.factor, which defaults to 10 and sets the maximum number of streams that can be merged into the output file in parallel. For example, if the map produced more than 10 spill files and io.sort.factor is left at its default of 10, the final merge cannot combine all spill files in a single pass; it needs several passes, each merging at most 10 streams. So when a map's intermediate output is very large, increasing io.sort.factor helps reduce the number of merge passes and the map's disk read/write traffic, which may improve the job.
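To make the merge-pass reasoning concrete, here is a small standalone estimate of how many merge rounds are needed for a given number of spill files and a given io.sort.factor. It is deliberately simplified: Hadoop's real merger sizes its passes more cleverly, so treat the output only as an illustration of why a larger factor means fewer rounds.

```java
// Rough back-of-the-envelope estimate of how many merge rounds are needed to
// collapse `spills` spill files when at most `factor` (io.sort.factor) streams
// can be merged at once.
public final class MergePassEstimate {
    static int mergeRounds(int spills, int factor) {
        int rounds = 0;
        while (spills > 1) {
            spills = (int) Math.ceil(spills / (double) factor);
            rounds++;
        }
        return rounds;
    }

    public static void main(String[] args) {
        System.out.println(mergeRounds(35, 10)); // 2 rounds with the default factor of 10
        System.out.println(mergeRounds(35, 50)); // 1 round if io.sort.factor is raised to 50
    }
}
```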

When a job specifies a combiner, the map's output is merged on the map side according to the combiner function. The combiner may run before or after the merge completes, and the timing is controlled by a parameter: min.num.spill.for.combine (default 3). When the job has a combiner set and there are at least 3 spill files, the combiner runs before the merge produces the result file. This reduces the amount of data written to disk when the spills need a lot of merging and a lot of data needs combining, again lowering the disk read/write frequency and potentially improving the job.
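Below is a minimal word-count sketch that wires in a combiner and sets the spill threshold mentioned above. It uses only classes that ship with Hadoop (TokenCounterMapper and IntSumReducer); min.num.spill.for.combine is the old-style property name from this article, and newer releases use a renamed equivalent.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class CombinerTuning {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Run the combiner before the final merge only when at least 3 spill
        // files exist (the default described in this article).
        conf.setInt("min.num.spill.for.combine", 3);

        Job job = Job.getInstance(conf, "word-count-with-combiner");
        job.setJarByClass(CombinerTuning.class);
        job.setMapperClass(TokenCounterMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // map-side pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```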

Another way to reduce the intermediate results' disk reads and writes is compression: the map's spill files and the merged result file can both be compressed. The benefit is that compression shrinks the volume of data written to and read from disk, which is especially useful when the intermediate results are large and disk speed is the bottleneck of map execution. The parameter controlling whether the map's intermediate output is compressed is mapred.compress.map.output (true/false). When it is set to true, the map compresses the data before writing its intermediate results to disk, and the data is decompressed when it is read back. The consequence is that less intermediate data goes to disk, but the CPU spends some cycles compressing and decompressing. This option therefore suits jobs whose intermediate results are very large and whose bottleneck is the disk rather than the CPU; in short, it trades CPU for I/O. In practice most jobs are not CPU-bound unless their computation logic is very complex, so compressing intermediate results is usually beneficial. Below is a comparison of local disk read/write volume for a wordcount job with and without compression of the map's intermediate results:

[Figure: local disk read/write volume with map intermediate results uncompressed]

[Figure: local disk read/write volume with map intermediate results compressed]

As the comparison shows, for the same job and the same data, the local disk read/write volume drops by nearly a factor of ten when compression is used. If the map's bottleneck is the disk, job performance improves noticeably.

When map intermediate results are compressed, the compression format can also be chosen. Hadoop supports several codecs, including GzipCodec, LzoCodec, BZip2Codec, and LzmaCodec; LzoCodec usually offers a reasonable balance between CPU cost and compression ratio, but the best choice depends on the job. The codec for intermediate results is selected with the configuration parameter mapred.map.output.compression.codec, whose default is org.apache.hadoop.io.compress.DefaultCodec; set it to whichever codec you prefer.
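A minimal sketch of enabling map output compression and choosing a codec, again with the old-style property names used in this article (newer versions renamed them, roughly to mapreduce.map.output.compress and mapreduce.map.output.compress.codec). GzipCodec is chosen here only because it ships with Hadoop; LZO and similar codecs need additional native libraries installed on the cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;

public class MapOutputCompression {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Compress the map's intermediate output (spill and merged files),
        // trading some CPU for much less disk I/O.
        conf.setBoolean("mapred.compress.map.output", true);
        // Pick the codec used for that intermediate data.
        conf.setClass("mapred.map.output.compression.codec",
                      GzipCodec.class, CompressionCodec.class);

        Job job = Job.getInstance(conf, "map-output-compression-example");
        // ... configure the mapper, reducer, input and output as usual.
    }
}
```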

1.2 Map-side parameter tuning

Option | Type | Default | Description
io.sort.mb | int | 100 | Size (MB) of the in-memory buffer that caches the map's intermediate results
io.sort.record.percent | float | 0.05 | Fraction of io.sort.mb used to hold map output record boundaries; the rest of the buffer holds the data itself
io.sort.spill.percent | float | 0.80 | Buffer fill level at which the map starts a spill
io.sort.factor | int | 10 | Maximum number of streams operated on simultaneously during a merge
min.num.spill.for.combine | int | 3 | Minimum number of spill files required for the combiner to run before the merge
mapred.compress.map.output | boolean | false | Whether the map's intermediate results are compressed
mapred.map.output.compression.codec | class name | org.apache.hadoop.io.compress.DefaultCodec | Compression codec for the map's intermediate results
2 Reduce-side tuning parameters

2.1 Internal principles of ReduceTask operation

A reduce runs in three stages: copy -> sort -> reduce. Each map of the job partitions its output into n partitions according to the number of reduces (n), so a map's intermediate result may contain part of the data destined for every reduce. To shorten the overall reduce time, Hadoop does not wait for all maps to finish: as soon as the first map of the job completes, every reduce starts trying to download its own partition of the data from the maps that have finished. This process is commonly called the shuffle, i.e. the copy phase.
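Which reduce receives a given map output record is decided by the job's partitioner. The standalone example below uses Hadoop's default HashPartitioner to show how keys are assigned to one of n partitions, one per reduce; the sample words and the choice of 4 reduces are arbitrary.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

public class PartitionDemo {
    public static void main(String[] args) {
        // With n reduces, each map splits its output into n partitions, and
        // reduce i later copies partition i from every finished map.
        HashPartitioner<Text, IntWritable> partitioner = new HashPartitioner<>();
        int numReduces = 4;
        for (String word : new String[] {"hadoop", "map", "reduce", "shuffle"}) {
            int partition = partitioner.getPartition(
                    new Text(word), new IntWritable(1), numReduces);
            System.out.println(word + " -> reduce " + partition);
        }
    }
}
```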

During the shuffle, a reduce task is really downloading the pieces of data that belong to it from the various completed maps. Because there are many maps, a reduce can download from several maps in parallel, and the degree of parallelism is adjustable through mapred.reduce.parallel.copies (default 5). By default each reduce has only 5 parallel fetch threads, so even if 100 maps finish within the same period, the reduce can be pulling data from at most 5 of them at once. This parameter is therefore most useful for jobs with many maps that each complete quickly, helping the reduce obtain its share of the data faster.
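A minimal sketch of raising the copy parallelism for such a job; the value 20 is an arbitrary example, and mapred.reduce.parallel.copies is the old-style property name used in this article.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ParallelCopyTuning {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Let each reduce fetch from up to 20 finished maps at once instead of
        // the default 5; useful when a job has many small, fast-finishing maps.
        conf.setInt("mapred.reduce.parallel.copies", 20);

        Job job = Job.getInstance(conf, "parallel-copy-tuning-example");
        // ... configure the mapper, reducer, input and output as usual.
    }
}
```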

While downloading map output, a reduce's fetch threads can run into errors: the machine holding the map's intermediate results may fail, the intermediate files may be lost, or the network may drop momentarily, causing the download to fail. A fetch thread does not wait forever: if the download still fails after a certain amount of time, it gives up and tries to fetch the data from another location (the map may have been re-run elsewhere in the meantime). The maximum time a fetch thread waits is controlled by mapred.reduce.copy.backoff (default 300 seconds). If the cluster's network is a bottleneck, increasing this value keeps slow fetches from being misjudged as failures; with a healthy network there is no need to change it, and a well-run cluster network rarely has such problems, so this parameter seldom needs adjustment.

Once the reduce has downloaded map output to the local machine, that data also needs to be merged, so the io.sort.factor setting described above influences the reduce's merge behavior as well. If the reduce is observed to spend heavily on disk I/O during the shuffle stage, increasing this parameter to raise the concurrency of each merge pass may improve reduce efficiency.

In the shuffle phase, the reduce does not write the downloaded map data to disk immediately; it caches the data in memory first and only flushes it to disk once memory usage reaches a certain level. This memory is not sized through io.sort.mb as on the map side, but through another parameter: mapred.job.shuffle.input.buffer.percent (default 0.7). The value is a percentage, meaning the memory a reduce may use to hold shuffle data is at most 0.7 × the reduce task's max heap. The reduce task's max heap is usually set through mapred.child.java.opts, for example -Xmx1024m, and a fixed proportion of it is used for caching. By default a reduce uses up to 70% of its heap size to cache data in memory. If the reduce heap is raised significantly for business reasons, the cache grows along with it, which is exactly why the cache is specified as a percentage rather than a fixed size.

Suppose mapred.job.shuffle.input.buffer.percent is 0.7 and the reduce task's max heap is 1 GB; then roughly 700 MB of memory is used to cache downloaded data. As on the map side, the reduce does not wait until this cache is completely full before writing to disk: once the cache reaches a certain fill level (again a percentage), it starts flushing to disk. That threshold is also configurable through a job parameter: mapred.job.shuffle.merge.percent (default 0.66). If downloads are fast and the in-memory cache fills easily, adjusting this parameter may help reduce performance.
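The standalone snippet below simply reproduces the arithmetic above for a 1 GB reduce heap with the default percentages, so the two thresholds are easy to see; it does not call into Hadoop.

```java
// Worked example of the shuffle-side memory thresholds described above,
// using the article's numbers: a 1 GB reduce task heap, input buffer
// percent 0.70 and merge percent 0.66.
public class ShuffleBufferMath {
    public static void main(String[] args) {
        long reduceHeapBytes = 1024L * 1024 * 1024;  // -Xmx1024m
        double inputBufferPercent = 0.70;            // mapred.job.shuffle.input.buffer.percent
        double mergePercent = 0.66;                  // mapred.job.shuffle.merge.percent

        long shuffleCache = (long) (reduceHeapBytes * inputBufferPercent);
        long mergeTrigger = (long) (shuffleCache * mergePercent);

        System.out.printf("Shuffle cache: ~%d MB%n", shuffleCache >> 20);       // ~716 MB
        System.out.printf("Merge to disk starts at: ~%d MB%n", mergeTrigger >> 20); // ~473 MB
    }
}
```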

Once the reduce has downloaded all of the data in its partition from every map, the real reduce computation begins (there is a sort stage in between, but it usually takes very little time, a few seconds at most, because the download phase already sorts and merges as it goes). When the reduce task enters the computation stage of the reduce function, one more parameter can adjust its behavior: mapred.job.reduce.input.buffer.percent (default 0.0). The reduce computation itself consumes memory, and memory is also needed as a buffer when reading the data the reduce operates on; this parameter controls how much of the heap may be kept as a buffer for data that has already been read and sorted. The default of 0 means the reduce reads and processes all of its data from disk. If the parameter is greater than 0, part of the data is kept in memory and handed to the reduce from there; when the reduce's own computation logic does not need much memory, the otherwise idle memory can be used to cache data instead.
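To close out the reduce side, here is a minimal sketch that sets the parameters discussed in this section on a single job. The non-default values are illustrative only, and the property names are the old-style ones used in this article; newer Hadoop releases renamed most of them, so check your version's documentation.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReduceSideTuning {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Give fetch threads more time before a download is abandoned
        // (default 300 s); mainly useful when the network is the bottleneck.
        conf.setInt("mapred.reduce.copy.backoff", 600);
        // Share of the reduce task heap used to cache downloaded map output
        // during the shuffle (default 0.7).
        conf.setFloat("mapred.job.shuffle.input.buffer.percent", 0.70f);
        // Fill level of that cache that triggers a flush/merge to disk
        // (default 0.66).
        conf.setFloat("mapred.job.shuffle.merge.percent", 0.66f);
        // Keep some already-sorted data in memory for the reduce function
        // instead of re-reading everything from disk (default 0.0); only
        // worthwhile when the reduce logic itself is not memory-hungry.
        conf.setFloat("mapred.job.reduce.input.buffer.percent", 0.2f);

        Job job = Job.getInstance(conf, "reduce-side-tuning-example");
        // ... configure the mapper, reducer, input and output as usual.
    }
}
```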

2.2 Reduce-side parameter tuning
Option | Type | Default | Description
mapred.reduce.parallel.copies | int | 5 | Maximum number of threads each reduce uses to download map output in parallel
mapred.reduce.copy.backoff | int | 300 | Maximum wait time (seconds) for a reduce download thread before it retries elsewhere
io.sort.factor | int | 10 | Same as above
mapred.job.shuffle.input.buffer.percent | float | 0.7 | Fraction of the reduce task heap used to cache shuffle data
mapred.job.shuffle.merge.percent | float | 0.66 | Fill level of the shuffle cache at which a merge to disk is triggered
mapred.job.reduce.input.buffer.percent | float | 0.0 | Fraction of memory used to keep sorted data cached for the reduce computation phase

This article is from http://blog.chedushi.com; original address: http://blog.chedushi.com/archives/9496. Thanks to the original author for sharing.
