One of the basic principles of Hadoop: MapReduce



1. Why Hadoop?

Currently, a typical hard disk holds about 1 TB and reads sequentially at about 100 MB/s, so reading the entire disk takes roughly 2.8 hours (writing takes even longer). If all the data sits on one disk and must be processed by a single program, the running time is dominated by I/O rather than by computation.
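The arithmetic behind that estimate can be checked directly (the figures are the article's round numbers, not measurements):

```python
# Back-of-the-envelope: time to read a full 1 TB disk at ~100 MB/s.
disk_size_mb = 1_000_000     # 1 TB expressed in MB (decimal units)
read_speed_mb_s = 100        # sustained sequential read speed, MB/s

seconds = disk_size_mb / read_speed_mb_s
hours = seconds / 3600
print(f"{hours:.1f} hours")  # -> 2.8 hours to read one disk end to end
```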

Over the past few decades, hard-disk read speed has improved only modestly, while network transfer speed has increased rapidly.

Therefore, if the data is spread across many disks (for example, 100 equal parts on 100 disks), the disks can be read in parallel and the read time drops by roughly a factor of 100; each node's partial result is then sent over the network to be combined.

But this will cause two problems.

(1) With data dispersed over many disks, the chance that a single disk failure corrupts part of the data rises sharply, so the data must be replicated and backed up. This is the problem HDFS solves.

(2) Data is distributed across multiple disks and is generally processed where it is stored, so how are the partial results merged? This is the problem MapReduce solves.
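The division of labor can be sketched in a few lines of plain Python (a single-process toy with made-up input chunks, not the real Hadoop API):

```python
from collections import defaultdict

# Each "chunk" stands for the data stored on one disk/node.
chunks = ["big data on disk one", "data on disk two", "big big data"]

# Map phase: each node processes its local chunk into (key, value) pairs.
mapped = []
for chunk in chunks:
    for word in chunk.split():
        mapped.append((word, 1))

# Shuffle: intermediate pairs are grouped by key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: the values for each key are merged into the final result.
counts = {key: sum(values) for key, values in grouped.items()}
print(counts["big"], counts["data"])  # -> 3 3
```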


2. Basic Nodes

Hadoop (1.x) has the following five types of nodes:

(1) JobTracker: accepts MapReduce jobs and schedules their tasks across the cluster.

(2) TaskTracker: runs individual map and reduce tasks and reports progress to the JobTracker.

(3) NameNode: keeps the HDFS namespace and the block-location metadata.

(4) DataNode: stores the actual HDFS data blocks.

(5) SecondaryNameNode: periodically merges the NameNode's edit log into a namespace checkpoint (it is not a hot standby).


3. Input splits

(1) Hadoop divides the MapReduce input into fixed-size pieces called input splits. In most cases the split size equals the HDFS block size (64 MB by default in Hadoop 1.x).

(2) The split size matches the block size because a split larger than one block would span blocks that are unlikely to be stored on the same node, forcing part of the input to be fetched over the network and defeating data locality.
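Under that rule, the number of map tasks follows directly from the file size (the 200 MB file below is a made-up example):

```python
import math

block_size_mb = 64   # Hadoop 1.x default HDFS block size
file_size_mb = 200   # hypothetical input file

# One input split (and hence one map task) per block-sized piece.
num_splits = math.ceil(file_size_mb / block_size_mb)
print(num_splits)    # -> 4 splits: 64 + 64 + 64 + 8 MB
```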


4. Local data is preferred

Hadoop prefers to run map tasks on the nodes where the input data is stored; this is called the data locality optimization.


(1) First, Hadoop tries to run the map task on a node that holds a replica of the data (node-local).

(2) If every node holding a replica is busy with other tasks, Hadoop looks for an idle node in the same rack (rack-local).

(3) If all nodes in that rack are busy, Hadoop falls back to a node in another rack, and the data must be transferred across racks (off-rack).
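The three-step preference above can be sketched as a small scheduling function (a toy cluster model with invented node and rack names, not Hadoop's actual scheduler code):

```python
def pick_node(replica_nodes, busy, rack_of, all_nodes):
    """Return (node, locality) following the preference order above."""
    # 1. Node-local: a free node that already stores a replica.
    for node in replica_nodes:
        if node not in busy:
            return node, "node-local"
    # 2. Rack-local: a free node in the same rack as some replica.
    replica_racks = {rack_of[n] for n in replica_nodes}
    for node in all_nodes:
        if node not in busy and rack_of[node] in replica_racks:
            return node, "rack-local"
    # 3. Off-rack: any free node; the data must cross racks.
    for node in all_nodes:
        if node not in busy:
            return node, "off-rack"
    return None, "wait"

rack_of = {"n1": "r1", "n2": "r1", "n3": "r2"}
# The only replica holder, n1, is busy -> fall back to its rack mate n2.
print(pick_node(["n1"], busy={"n1"}, rack_of=rack_of,
                all_nodes=["n1", "n2", "n3"]))  # -> ('n2', 'rack-local')
```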


5. MapReduce Data Flow

(1) Single reducer: every map task's sorted output is fetched by the one reduce task, which merges the sorted pieces and writes a single output file.


(2) Multiple reducers: the map output is partitioned by key (with a hash partitioner by default), so each reduce task receives all the values for its subset of the keys and writes its own output file.



6. Combiner

A combiner function can be added between map and reduce to pre-aggregate each map task's output locally before it is sent over the network. Because Hadoop may apply the combiner zero, one, or many times, it must not change the final result; in practice this means the operation should be commutative and associative (summing counts, for example).
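The saving is easy to demonstrate with word counts (toy data in plain Python, not the Hadoop Combiner API):

```python
from collections import Counter

# Intermediate (word, 1) pairs produced by two map tasks (invented data).
map_outputs = [
    [("big", 1), ("data", 1), ("big", 1)],   # node A
    [("data", 1), ("data", 1)],              # node B
]

# Combiner: sum the counts per key locally, before anything crosses the network.
combined = []
for output in map_outputs:
    local = Counter()
    for key, value in output:
        local[key] += value
    combined.append(list(local.items()))

print(sum(len(o) for o in map_outputs))  # -> 5 pairs shipped without a combiner
print(sum(len(c) for c in combined))     # -> 3 pairs shipped with a combiner

# The reducer's final totals are unchanged, because addition is
# commutative and associative.
final = Counter()
for output in combined:
    for key, value in output:
        final[key] += value
print(dict(final))  # -> {'big': 2, 'data': 3}
```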



