HDFS Learning Notes (1): On HDFS

Hadoop Distributed File System (HDFS)

A distributed file system is a file system that allows files to be shared among multiple hosts over a network, so that multiple users on multiple machines can share files and storage space.
HDFS is one such file system. It is suited to write-once, read-many workloads: it does not support concurrent writes, and it is not a good fit for large numbers of small files.

HDFS Architecture

HDFS adopts a master/slave architecture. An HDFS cluster consists of a single NameNode and a number of DataNodes. The NameNode is a central server that manages the file system namespace and regulates client access to files.

Each DataNode in the cluster manages the storage on the node it runs on. HDFS exposes the file system namespace, and users store data in it in the form of files. Internally, a file is actually split into one or more data blocks, which are stored on a set of DataNodes. The NameNode executes namespace operations on the file system, such as opening, closing, and renaming files and directories, and it determines the mapping of data blocks to specific DataNodes. DataNodes serve read and write requests from file system clients, and they create, delete, and replicate data blocks under the unified scheduling of the NameNode.
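
To make the division of labor concrete, here is a minimal sketch of namespace operations, all of which are served by the NameNode alone, using the standard Hadoop Java FileSystem API. The paths are made up for illustration:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NamespaceOpsSketch {
        public static void main(String[] args) throws IOException {
            // Loads core-site.xml / hdfs-site.xml from the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Each call below is a pure namespace operation handled by the
            // NameNode; no file data (and so no DataNode) is involved.
            fs.mkdirs(new Path("/tmp/demo"));                         // create a directory
            fs.rename(new Path("/tmp/demo"), new Path("/tmp/demo2")); // rename it
            fs.delete(new Path("/tmp/demo2"), true);                  // recursive delete

            fs.close();
        }
    }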

NameNode
    • The management node for the entire file system.

      It maintains the directory tree of the whole file system, the metadata of every file and directory, and the list of data blocks that make up each file.

      It receives and serves clients' operation requests.

    • Its files include (see the dfs.name.dir directory):
      • fsimage: the metadata image file.

        A snapshot of the NameNode's in-memory metadata as of a certain point in time.

      • edits: the operation log (journal) file.

      • fstime: records the time of the most recent checkpoint.
    • These files are stored on the local Linux file system.
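
Where these files live is set in the Hadoop configuration. A hedged example for hdfs-site.xml: dfs.name.dir is the Hadoop 1.x property name quoted above (Hadoop 2.x renamed it dfs.namenode.name.dir), and the path is only an illustration:

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.name.dir</name>          <!-- dfs.namenode.name.dir in Hadoop 2.x -->
      <value>/data/hadoop/name</value>   <!-- illustrative path; fsimage, edits, fstime live here -->
    </property>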

DataNode
    • Provides the storage service for the actual file data.
    • File block: the basic storage unit. A file's content, whatever its length, is divided starting from offset 0 into fixed-size, sequentially numbered chunks, and each chunk is one block. The default block size in HDFS is 128MB, so a 512MB file consists of 4 blocks (see the arithmetic sketch after this list).
    • Unlike an ordinary file system, if a file in HDFS is smaller than one data block, it does not occupy the full block's storage space.
    • Replication: each block is stored as multiple replicas; the default replication factor is 3.
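
A quick worked example of the block arithmetic above, as plain Java (the sizes are the defaults quoted in this note, not values read from a cluster):

    public class BlockCountSketch {
        public static void main(String[] args) {
            long blockSize = 128L * 1024 * 1024; // 128MB default block size
            long fileSize  = 512L * 1024 * 1024; // the 512MB file from the example

            // Ceiling division: a trailing partial chunk still occupies its own block.
            long blocks = (fileSize + blockSize - 1) / blockSize;

            System.out.println(blocks); // prints 4
        }
    }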

SecondaryNameNode
    • Often described as an aid to HA (high availability).

      It is not a hot standby, however: it checkpoints the NameNode's metadata rather than taking over when the NameNode fails.

      Where it runs is configurable.

    • Work process: it downloads the metadata files (fsimage and edits) from the NameNode, merges the two to generate a new fsimage, saves it locally, and pushes it back to the NameNode, resetting the NameNode's edits log at the same time.
    • By default it is installed on the same node as the NameNode, but that is not safe!
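
How often the SecondaryNameNode checkpoints can be tuned. A hedged example, using the Hadoop 1.x property names (Hadoop 2.x renames these to dfs.namenode.checkpoint.period and dfs.namenode.checkpoint.size); the values shown are the usual defaults:

    <!-- core-site.xml in Hadoop 1.x -->
    <property>
      <name>fs.checkpoint.period</name>
      <value>3600</value>        <!-- merge fsimage and edits every hour -->
    </property>
    <property>
      <name>fs.checkpoint.size</name>
      <value>67108864</value>    <!-- or as soon as edits reaches 64MB -->
    </property>
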
HDFS Read Process

1. The client initializes the FileSystem, then opens the file with FileSystem's open() method.
2. FileSystem calls the metadata node (NameNode) via RPC to get the file's data block information; for each data block, the NameNode returns the addresses of the DataNodes holding that block.
3. FileSystem returns an FSDataInputStream to the client for reading data, and the client calls the stream's read() method to start reading.
4. DFSInputStream connects to the nearest DataNode holding the first block of the file, and data is read from that node to the client.
5. When the block has been read completely, DFSInputStream closes the connection to that DataNode and connects to the nearest DataNode holding the next block of the file.
6. When the client has finished reading, it calls FSDataInputStream's close() method.
7. If, while reading, the client encounters an error communicating with a DataNode, it tries the next DataNode that holds the block.
8. Failed DataNodes are recorded and are not contacted again.
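
The steps above correspond to only a few lines of client code; the block lookups, DataNode connections, and failover of steps 2 through 8 all happen inside the stream. A minimal sketch with the Hadoop Java API (the file path is hypothetical):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();   // step 1: initialize the FileSystem
            FileSystem fs = FileSystem.get(conf);
            try (FSDataInputStream in = fs.open(new Path("/tmp/example.txt"))) { // steps 1-3
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) {        // steps 4-5: blocks are streamed in order
                    System.out.write(buf, 0, n);
                }
            }                                           // step 6: close()
            fs.close();
        }
    }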

HDFS Write Process

1. The client initializes the FileSystem and calls create() to create a file.
2. FileSystem uses RPC to ask the metadata node (NameNode) to create a new file in the file system's namespace. The NameNode first checks that the file does not already exist and that the client has permission to create it, and then creates the new file.
3. FileSystem returns a DFSOutputStream to the client for writing data, and the client starts writing.
4. DFSOutputStream splits the data into chunks and writes them to a data queue.

   The data queue is read by the DataStreamer, which asks the NameNode to allocate DataNodes to store the data blocks (each block is replicated 3 times by default). The allocated DataNodes are arranged in a pipeline: the DataStreamer writes each block to the first DataNode in the pipeline, the first DataNode forwards the block to the second, and the second forwards it to the third.
5. DFSOutputStream keeps an ack queue for the blocks it has sent out, waiting for the DataNodes in the pipeline to confirm that the data was written successfully.
6. When the client finishes writing, it calls the stream's close() method. This writes all remaining data blocks to the DataNodes in the pipeline, waits for the ack queue to report success, and finally notifies the NameNode that the write is complete.
7. If a DataNode fails during the write, the pipeline is closed and the data blocks in the ack queue are put back at the front of the data queue. The current block is given a new identifier by the NameNode on the nodes that have already written it, so that when the failed node recovers it will recognize its copy of the block as stale and delete it. The failed DataNode is removed from the pipeline, and the rest of the block is written to the two remaining DataNodes in the pipeline. The NameNode notices that the block's replica count is insufficient and creates a third replica later.
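
Again, the client-side view of this process is small; the chunking, pipelining, and ack handling of steps 4 through 7 happen inside DFSOutputStream. A minimal sketch with the Hadoop Java API (the path and contents are made up):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();       // step 1: initialize the FileSystem
            FileSystem fs = FileSystem.get(conf);
            try (FSDataOutputStream out =
                     fs.create(new Path("/tmp/example-out.txt"))) { // step 2: NameNode creates the entry
                out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8)); // steps 3-5
            }                                               // step 6: close() flushes and waits for acks
            fs.close();
        }
    }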


Official documents
    • HDFS Users Guide
    • HDFS Architecture
