Hadoop Learning Record: HDFS File Upload Process Source Analysis


This section will not spend much time on what Hadoop is or on Hadoop basics, since plenty of detailed material is already available on the web; instead it focuses on HDFS. Most people know that HDFS is Hadoop's underlying storage module, dedicated to storing data, but how does HDFS actually work when a file is uploaded? We will analyze the process from both a macro and a micro perspective.

First, we need to explain the following concepts:

(1) SecondaryNameNode: Like most people, I used to think that the SecondaryNameNode (SN) was a real-time hot backup of the NameNode (NN) used to achieve HA, and I even answered this way in a written exam, which was wrong (awkward). After reading the relevant books, I found that this is not the case: the SN's main job is to merge the edits log with the fsimage file, which reduces the working pressure on the NN. The misunderstanding is not entirely unreasonable, though, since Hadoop now does support real-time HA backup; that will be covered in a later chapter.
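To make the checkpointing behavior concrete, here is a minimal sketch (my addition, not from the original write-up) that reads the settings controlling how often the SN merges the edits log into fsimage. It assumes the Hadoop 2.x key names dfs.namenode.checkpoint.period and dfs.namenode.checkpoint.txns; older releases used fs.checkpoint.period instead.

import org.apache.hadoop.conf.Configuration;

public class CheckpointSettings {
    public static void main(String[] args) {
        // Loads core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        // Merge a checkpoint at least this often (seconds); 3600 is the usual default.
        long periodSeconds = conf.getLong("dfs.namenode.checkpoint.period", 3600L);
        // ...or sooner, once this many uncheckpointed transactions have accumulated.
        long txnThreshold = conf.getLong("dfs.namenode.checkpoint.txns", 1000000L);
        System.out.println("checkpoint period (s):     " + periodSeconds);
        System.out.println("checkpoint txns threshold: " + txnThreshold);
    }
}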

(2) fsimage and edits: Although I did not intend to go over concepts here, these two deserve a brief explanation. The fsimage file contains the serialized information of every directory and file inode in the Hadoop file system. For a file, this includes its modification time, access time, block size, and block information; for a directory, it includes the modification time, access control permissions, and so on. The edits file mainly records client operations on files, such as uploading a new file, and it is periodically merged into the fsimage file.
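As an illustration (my addition), the per-file metadata that fsimage persists is exactly what a client can read back through FileSystem.getFileStatus; the NameNode URI and the file path below are placeholders.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowFileMetadata {
    public static void main(String[] args) throws Exception {
        // Placeholder NameNode address and path; adjust for a real cluster.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), new Configuration());
        FileStatus st = fs.getFileStatus(new Path("/user/demo/sample.txt"));
        System.out.println("modification time: " + st.getModificationTime());
        System.out.println("access time:       " + st.getAccessTime());
        System.out.println("block size:        " + st.getBlockSize());
        System.out.println("replication:       " + st.getReplication());
        System.out.println("permission:        " + st.getPermission());
        fs.close();
    }
}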

Macro view of the write process:

The specific process is as follows:

1. First, the client splits the file into blocks; the block size is the value given in the configuration file. (The whole flow is sketched in code after this list.)

2. The client then notifies the NameNode that a file is about to be uploaded. The NameNode creates the corresponding directory entry and allocates DataNodes for the file according to the replication factor specified in the configuration file and the rack-awareness policy. For example, with a replication factor of 3, rack awareness might place the three copies on host2 in rack 1 and on host1 and host3 in rack 2.

3. The client establishes a streaming transport channel with the designated DataNodes and transfers the file to host2 in rack 1. host2 forwards the data blocks to host1 in rack 2 for backup, and once that copy is complete the blocks are replicated on to host3 in the same rack (rack 2).

4. Once all three machines have finished this process, host2, host1, and host3 report to the NameNode that the file has been stored, and host2 also sends a notification back to the client.

5. The client receives the acknowledgment from host2 and notifies the NameNode that the data has been written. This completes the whole process.
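To tie the five steps together, here is a minimal client-side sketch of the write path using the standard HDFS Java API. The NameNode URI, file path, block size, and replication factor are placeholder values chosen for illustration, and the hostnames in the comments refer to the example above.

import java.net.URI;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUploadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        Path dst = new Path("/user/demo/upload.txt");

        // Steps 1-2: the client asks the NameNode to create the file entry; the
        // block size (128 MB here) and replication factor (3) come from the client.
        FSDataOutputStream out = fs.create(dst, true, 4096, (short) 3, 128L * 1024 * 1024);

        // Step 3: bytes written here are pushed through the DataNode pipeline
        // (e.g. host2 -> host1 -> host3 in the example above).
        out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));

        // Steps 4-5: close() waits for the pipeline acknowledgments and tells the
        // NameNode that the file is complete.
        out.close();

        // Afterwards we can ask where the NameNode actually placed the replicas.
        FileStatus st = fs.getFileStatus(dst);
        for (BlockLocation loc : fs.getFileBlockLocations(st, 0, st.getLen())) {
            System.out.println("block at offset " + loc.getOffset()
                    + " on " + String.join(", ", loc.getHosts()));
        }
        fs.close();
    }
}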
