[Hadoop Reading Notes] Chapter 1: An Introduction to Hadoop

Source: Internet
Author: User

P3-P4:

The problem is simple: hard-disk capacity keeps growing, and 1 TB drives have become mainstream, but data transfer speed has only risen from about 4.4 MB/s in 1990 to about 100 MB/s today.

Reading all the data off a 1 TB drive therefore takes at least 2.5 hours, and writing takes even longer. The workaround is to read from multiple drives in parallel: if there are 100 disks and each stores 1% of the data, everything can be read in about two minutes.
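The arithmetic behind those figures can be checked with a quick back-of-the-envelope calculation (assuming 1 TB = 1,000,000 MB and a sustained transfer rate of 100 MB/s, as the notes do):

```python
# Back-of-the-envelope check of the read times quoted above.
TB_IN_MB = 1_000_000       # assume 1 TB = 1,000,000 MB
rate_mb_per_s = 100        # assumed sustained transfer rate

# Seconds to read 1 TB from a single disk.
single_disk_s = TB_IN_MB / rate_mb_per_s
print(single_disk_s / 3600)   # ≈ 2.78 hours

# With 100 disks each holding 1% of the data, reads happen in parallel.
disks = 100
parallel_s = (TB_IN_MB / disks) / rate_mb_per_s
print(parallel_s / 60)        # ≈ 1.67 minutes, i.e. "about two minutes"
```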

At the same time, reading and writing data in parallel raises several problems:

1. Hardware failure — addressed with a data-replication (backup) mechanism.

2. Analysis tasks need to combine data from all the nodes and still produce a correct result — addressed by MapReduce, which turns the disk read/write problem into a computation over sets of keys and values.

So, Hadoop provides us with a reliable shared storage and analysis system.

Reliable data storage is handled by HDFS, and reliable data analysis and processing by MapReduce.
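To make the MapReduce model concrete, here is a minimal in-process word-count sketch in plain Python (a toy illustration, not Hadoop's actual API); the map, shuffle, and reduce phases mirror what Hadoop performs across a cluster:

```python
from collections import defaultdict

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in a line of input."""
    for word in line.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    """Reduce: sum all the counts emitted for one word."""
    return word, sum(counts)

def word_count(lines):
    # Shuffle: group intermediate values by key, as the framework would
    # between the map and reduce phases.
    groups = defaultdict(list)
    for line in lines:
        for word, count in map_phase(line):
            groups[word].append(count)
    return dict(reduce_phase(w, c) for w, c in groups.items())

print(word_count(["the quick brown fox", "the lazy dog"]))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

In real Hadoop, the map and reduce functions run on different machines and the shuffle moves data over the network, but the programming model is the same.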

P7

Data locality is a core feature of MapReduce. Recognizing that shipping data across the network consumes bandwidth, MapReduce tries to run computation on the nodes where the data is already stored, giving fast local access and better computational performance.
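The scheduling idea can be sketched in a few lines (a toy model with made-up node names, not Hadoop's actual scheduler): given the nodes that hold replicas of a data block, prefer a free node that already stores a copy.

```python
def pick_node(replica_nodes, free_nodes):
    """Prefer a free node that already stores a replica of the split
    (data-local); otherwise fall back to any free node, in which case
    the data must travel over the network."""
    for node in free_nodes:
        if node in replica_nodes:
            return node, "data-local"
    return free_nodes[0], "remote"

# Block replicas live on node1 and node3; node2 and node3 are idle.
print(pick_node({"node1", "node3"}, ["node2", "node3"]))
# ('node3', 'data-local')

# No free node holds a replica, so the task runs remotely.
print(pick_node({"node1"}, ["node2"]))
# ('node2', 'remote')
```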

P8

MapReduce can detect failed map or reduce tasks and re-execute them, so a computation still completes even when some nodes fail partway through.

This works because MapReduce uses a shared-nothing architecture: computing tasks are independent of one another, which makes failure detection and recovery easy to implement.
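The re-execution behavior described above can be sketched as follows (a toy model, not Hadoop's scheduler): because each task is independent and has no side effects on the others, a failed task can simply be run again, possibly on a different node.

```python
def run_with_retries(task, data, max_attempts=3):
    """Re-execute a failed task, as a MapReduce framework would.

    This only works because tasks are independent (shared-nothing):
    rerunning one task has no effect on any other task.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return task(data)
        except RuntimeError:
            print(f"task failed (attempt {attempt}), re-executing")
    raise RuntimeError("task failed on every attempt")

def flaky_square(x, _fail_first=[True]):
    # Toy task that simulates a node failure on its first invocation only.
    if _fail_first[0]:
        _fail_first[0] = False
        raise RuntimeError("simulated node failure")
    return x * x

print(run_with_retries(flaky_square, 7))  # prints a retry message, then 49
```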

P9

The three main design assumptions of MapReduce are:

(1) It serves jobs that take only minutes or hours to complete.

(2) It runs inside a single data center with high-speed network interconnects.

(3) The machines in the data center are reliable, dedicated (custom-built) hardware.

P12

April 2006: a sort benchmark run on 188 nodes (10 GB per node) took 49.7 hours.

May 2006: the same benchmark on 500 nodes (10 GB per node) took 42 hours.

December 2006: sort times were 1.8 hours on 20 nodes, 3.3 hours on 100 nodes, 5.2 hours on 500 nodes, and 7.8 hours on 900 nodes.

In April 2008, a cluster of 910 nodes sorted 1 TB of data in under 3.5 minutes, making Hadoop the fastest system for sorting a terabyte of data at the time.

In November 2008, Google sorted 1 TB in 68 seconds.

In May 2009, Yahoo! sorted 1 TB in 62 seconds.

The four components of the Yahoo! search engine:

1. Crawler — downloads pages from web servers.

2. WebMap — builds a link graph of the known web (the link graph is very large; analyzing it takes several days).

3. Indexer — builds an inverted index for the best pages.

4. Runtime — handles users' queries.

P14-15

P15

New features in the 2.x releases:

1. MapReduce 2, built on YARN (Yet Another Resource Negotiator), a general-purpose resource manager for running distributed applications.

2. HDFS federation: the HDFS namespace is spread across multiple namenodes, so clusters can scale to very large numbers of files.

3. HDFS high availability: a standby namenode can take over from the active one, removing the namenode as a single point of failure. (Note: this is a standby namenode, not the secondary namenode, which only merges edit logs and does not provide failover.)

