MapReduce Working Principles, Explained Graphically (Refining Data into Gold)


MapReduce Working Principles

1. MapReduce working mechanism analysis diagram:

 

1. First, we write our MapReduce program and submit it from a client node. (In general, any node in the Hadoop cluster can act as the client, as long as Hadoop is installed on it and it is connected to the cluster.)
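As a concrete illustration, below is a minimal sketch of such a client program: a word-count driver written against the classic org.apache.hadoop.mapred API from the JobTracker era. The class names, the WordCountMapper/WordCountReducer it references (sketched under step 8 below), and the input/output paths are all illustrative, not taken from the original article.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

// Driver class run on the client node; it builds the job configuration
// and hands it to the JobClient for submission to the JobTracker.
public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("wordcount");

        // Mapper and Reducer classes are sketched under step 8 below (illustrative names).
        conf.setMapperClass(WordCountMapper.class);
        conf.setReducerClass(WordCountReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // Input and output locations in HDFS, passed on the command line.
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // Blocks until the job completes, printing progress as it runs.
        JobClient.runJob(conf);
    }
}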

 

2. After receiving this request, the JobClient contacts the JobTracker and asks it for a job ID. (The JobTracker can be located easily from our core configuration files.)
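In Hadoop 1.x the client finds the JobTracker through the mapred.job.tracker property loaded from the cluster configuration files on its classpath. A small sketch of looking it up; the property names are the standard 1.x keys, while the class name is made up for illustration:

import org.apache.hadoop.mapred.JobConf;

public class ShowJobTracker {
    public static void main(String[] args) {
        // JobConf loads core-site.xml / mapred-site.xml from the classpath,
        // which is how the client knows where the JobTracker listens.
        JobConf conf = new JobConf();
        System.out.println("JobTracker address: " + conf.get("mapred.job.tracker"));
        System.out.println("Default filesystem: " + conf.get("fs.default.name"));
    }
}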

 

3. HDFS is used to distribute the code of this job: the job JAR and its configuration are copied into HDFS so that the nodes that will run it can later fetch them.
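In practice the submission machinery stages these resources in a shared directory on HDFS. The sketch below shows the same idea done by hand with the FileSystem API; the local JAR path and the staging directory are purely illustrative, not the framework's real staging layout.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StageJobJar {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Copy the local job JAR into a shared HDFS directory so every
        // TaskTracker can later pull the code down to its own node.
        Path localJar = new Path("target/wordcount.jar");          // illustrative local path
        Path staged   = new Path("/tmp/hadoop/staging/job_0001/"); // illustrative staging dir
        fs.mkdirs(staged);
        fs.copyFromLocalFile(localJar, staged);

        System.out.println("Job code staged at " + staged);
    }
}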

 

4. The job is submitted to the JobTracker.
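The blocking JobClient.runJob call from the step 1 sketch performs this submission itself. If a non-blocking style is preferred, the classic API also lets the client submit and then poll the JobTracker; a sketch, assuming the JobConf has been filled in as in step 1:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class SubmitAndPoll {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(); // assume it is configured as in the step 1 sketch

        JobClient client = new JobClient(conf);
        RunningJob job = client.submitJob(conf);   // returns as soon as the job has an ID
        System.out.println("Submitted job " + job.getID());

        // Poll the JobTracker until the job finishes.
        while (!job.isComplete()) {
            System.out.printf("map %.0f%%, reduce %.0f%%%n",
                    job.mapProgress() * 100, job.reduceProgress() * 100);
            Thread.sleep(5000);
        }
        System.out.println(job.isSuccessful() ? "Job succeeded" : "Job failed");
    }
}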

 

5. The JobTracker initializes the job. For example, it sets up a series of data structures in memory to record the job's running status, and puts the job into a job queue to wait for the job scheduler to schedule it.
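The real bookkeeping lives in the JobTracker's internal classes; purely as a conceptual toy (not Hadoop code), the idea is a per-job record of state and pending tasks placed on a queue for the scheduler:

import java.util.ArrayDeque;
import java.util.Queue;

// Conceptual illustration only: a toy model of the bookkeeping the
// JobTracker sets up when a job is initialized, not Hadoop's classes.
public class JobQueueSketch {
    enum State { PREP, RUNNING, SUCCEEDED, FAILED }

    static class JobRecord {
        final String jobId;
        State state = State.PREP;
        int mapTasksPending;
        int reduceTasksPending;
        JobRecord(String jobId, int maps, int reduces) {
            this.jobId = jobId;
            this.mapTasksPending = maps;
            this.reduceTasksPending = reduces;
        }
    }

    public static void main(String[] args) {
        // The scheduler later pulls jobs off this queue and hands their
        // tasks to TaskTrackers as heartbeats come in.
        Queue<JobRecord> jobQueue = new ArrayDeque<>();
        jobQueue.add(new JobRecord("job_0001", 10, 2));
        System.out.println("Jobs waiting for the scheduler: " + jobQueue.size());
    }
}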

 

6. The JobTracker asks HDFS's NameNode which files hold the job's input data and which nodes those files' blocks are scattered across, and records this information for each input. Because MapReduce "runs the computation near the data", that is, the program should be placed on the nodes that hold the data it will process, this information is required to run the job.
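You can inspect the same locality information the JobTracker relies on by asking HDFS where the blocks of an input file live; a sketch, with an illustrative input path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Ask the NameNode which DataNodes hold each block of the input file.
        Path input = new Path("/data/input/log.txt"); // illustrative path
        FileStatus status = fs.getFileStatus(input);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

        for (BlockLocation block : blocks) {
            System.out.println("offset " + block.getOffset()
                    + " length " + block.getLength()
                    + " hosts " + String.join(",", block.getHosts()));
        }
    }
}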

 

7. The JobTracker and the TaskTrackers maintain a heartbeat relationship, refreshed about once a minute, so the JobTracker knows which TaskTrackers can take part in our computation. A TaskTracker must not be down, it must be alive, and ideally its load should be relatively low; if it is busy running other jobs, it is not a good idea to assign new tasks to it. An idle node is best.
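The heartbeat protocol itself is internal to Hadoop, but the scheduling idea can be illustrated with a toy sketch (not Hadoop's actual scheduler): prefer a TaskTracker that has heartbeated recently and has free slots.

import java.util.HashMap;
import java.util.Map;

// Conceptual illustration only: choose the TaskTracker that recently
// heartbeated and has the most free map slots.
public class PickTrackerSketch {
    static class TrackerStatus {
        final long lastHeartbeatMillis;
        final int freeMapSlots;
        TrackerStatus(long lastHeartbeatMillis, int freeMapSlots) {
            this.lastHeartbeatMillis = lastHeartbeatMillis;
            this.freeMapSlots = freeMapSlots;
        }
    }

    static String pickTracker(Map<String, TrackerStatus> trackers, long now, long timeoutMillis) {
        String best = null;
        int bestSlots = 0;
        for (Map.Entry<String, TrackerStatus> e : trackers.entrySet()) {
            TrackerStatus s = e.getValue();
            boolean alive = now - s.lastHeartbeatMillis < timeoutMillis; // skip trackers that look down
            if (alive && s.freeMapSlots > bestSlots) {
                best = e.getKey();
                bestSlots = s.freeMapSlots;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        Map<String, TrackerStatus> trackers = new HashMap<>();
        trackers.put("tracker-a", new TrackerStatus(now - 3_000, 0));   // alive but busy
        trackers.put("tracker-b", new TrackerStatus(now - 2_000, 2));   // alive and idle
        trackers.put("tracker-c", new TrackerStatus(now - 600_000, 4)); // presumed down
        System.out.println("Chosen TaskTracker: " + pickTracker(trackers, now, 60_000));
    }
}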

 

8. The TaskTrackers that will run the tasks are determined, that is, which TaskTrackers will take part in our MapReduce computation. Each chosen TaskTracker then fetches the relevant Java code (the job JAR) from HDFS, launches a local Java Virtual Machine as a child process, and runs the task inside it.
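What that child JVM actually executes is the user's Mapper and Reducer. Below is a minimal word-count pair in the classic org.apache.hadoop.mapred API, matching the driver sketched under step 1; the class names are illustrative, and each class would normally live in its own .java file.

// WordCountMapper.java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Runs inside the child JVM that the TaskTracker launches for a map task.
public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            output.collect(word, ONE); // emit (word, 1) for every token in the line
        }
    }
}

// WordCountReducer.java
public class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum)); // total count for this word
    }
}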

 

The general process is like this.

 

The above content is from the "Refining Data into Gold" tutorial.

 
