Hadoop Shuffle Stage Process Analysis


At the macro level, every Hadoop job goes through two phases: a map phase and a reduce phase. The map phase has four sub-stages: read data from disk, execute the map function, combine the results, and write the results to the local disk. The reduce phase likewise has four sub-stages: fetch the corresponding data (shuffle), sort, execute the reduce function, and write the results to HDFS.
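The eight sub-stages above can be sketched as a toy word-count pipeline. This is a minimal illustrative simulation, not Hadoop's actual code: the "local disk" write and the "HDFS" write are replaced by plain return values.

```python
from collections import defaultdict
from itertools import groupby

def map_phase(lines):
    # Map-side sub-stages: 1. read input, 2. run the map function,
    # 3. combine partial results locally, 4. "write" output
    # (returned here instead of going to the local disk).
    combined = defaultdict(int)
    for line in lines:                      # read
        for word in line.split():           # map: emit (word, 1)
            combined[word] += 1             # combine: local partial sums
    return dict(combined)                   # write to "local disk"

def reduce_phase(map_outputs):
    # Reduce-side sub-stages: 1. shuffle (fetch every map task's output),
    # 2. sort by key, 3. run the reduce function, 4. "write" the result.
    fetched = [kv for out in map_outputs for kv in out.items()]  # shuffle
    fetched.sort(key=lambda kv: kv[0])                           # sort
    result = {k: sum(v for _, v in group)                        # reduce
              for k, group in groupby(fetched, key=lambda kv: kv[0])}
    return result                                                # write to "HDFS"

maps = [map_phase(["a b a"]), map_phase(["b c"])]
print(reduce_phase(maps))   # prints {'a': 2, 'b': 2, 'c': 1}
```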

(Note: the description of the shuffle phase in this article is very coarse-grained. For the implementation details and the current mainstream optimizations, see Chapter 8, "Task Execution Process Analysis," and Section 8.5.2, "System Optimization," of my book Hadoop Technology Insider: In-Depth Analysis of MapReduce Architecture Design and Implementation Principles.)

Two sub-stages in the Hadoop processing pipeline seriously degrade its performance. First, the intermediate results produced in the map phase are written to disk. The main purpose of this is to improve system reliability, but the cost is reduced performance. In fact, the MapReduce Online prototype removes this stage and uses other, more efficient mechanisms to preserve reliability (see reference [1]). Second, the shuffle phase uses the HTTP protocol to remotely copy the results from the various map tasks; this design (remote copy over HTTP) also reduces performance. Baidu, for instance, has tried replacing this part of the code with C++ to improve performance (see reference [2]).

This article first describes the specific workflow of the shuffle stage, then analyzes the reasons for its inefficiency, and finally suggests possible improvements.

Each reduce task runs a background thread (GetMapEventsThread) that calls getMapCompletionEvents to obtain, via the heartbeat response from the JobTracker, the list of completed map tasks, and saves the locations of the data relevant to this reduce task into mapLocations. The entries in mapLocations are filtered and de-duplicated (the same location may, for various reasons, be reported multiple times) and then saved into the collection scheduledCopies. Several copier threads (five by default) then fetch the data in parallel over HTTP, while the InMemFSMergeThread and LocalFSMerger threads merge and sort the copied data as it arrives.
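The de-duplication and parallel-fetch scheduling described above can be sketched as follows. This is an illustrative simulation: the variable names map_locations and scheduled_copies mirror the Hadoop fields mapLocations and scheduledCopies, but the hostnames are made up and the HTTP fetch is replaced by a list append.

```python
import queue
import threading

# Locations reported via heartbeat; the same location may arrive twice.
map_locations = ["host1/map_0", "host2/map_1", "host1/map_0", "host3/map_2"]

# Filter and de-duplicate, then queue the work for the copier threads.
scheduled_copies = queue.Queue()
for loc in dict.fromkeys(map_locations):   # preserves order, drops repeats
    scheduled_copies.put(loc)

copied = []
lock = threading.Lock()

def copier():
    # Stand-in for one copier thread's HTTP fetch of a map output.
    while True:
        try:
            loc = scheduled_copies.get_nowait()
        except queue.Empty:
            return
        with lock:
            copied.append(loc)

# Five copier threads by default, as in Hadoop 1.x.
threads = [threading.Thread(target=copier) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(copied))   # prints ['host1/map_0', 'host2/map_1', 'host3/map_2']
```

The real copier threads also handle retries, backoff, and in-memory versus on-disk placement of fetched segments, which this sketch omits.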

Two aspects dominate the performance of the shuffle phase: (1) all data is copied remotely, and (2) the data is transferred over HTTP. For the first, if a suitable scheduling policy is adopted (by modifying the framework), reduce tasks can exploit data locality. For the second, replacing HTTP with a faster data-transfer protocol may help; for example, the UDT protocol (see reference [3]) is used in Sector/Sphere, another open-source (C++) MapReduce platform (see reference [4]), with good results.

 

Reprinted; original link: http://blog.csdn.net/lihm0_1/article/details/17026251
