Design Philosophy:
1. Large files
2. Streaming data access
3. Commodity hardware
Unsuitable scenarios:
1. Low-latency data access
2. A large number of small files
3. Multiple writers, or arbitrary modifications to files
I. Basic concepts of HDFS
1.1. Data blocks
HDFS (Hadoop Distributed File System) uses 64 MB data blocks by default.
Like an ordinary file system, HDFS stores files split into 64 MB blocks.
In HDFS, however, a file smaller than one data block does not occupy the full block's worth of storage space.
Objective: to minimize addressing (seek) overhead and speed up data transmission.
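To make the block size concrete, here is a minimal client-side sketch that creates a file with an explicit block size through FileSystem.create(); the file path, buffer size, and replication values are illustrative, not taken from the article:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
        public static void main(String[] args) throws Exception {
            // Reads core-site.xml / hdfs-site.xml from the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Create a file with an explicit 128 MB block size instead of the 64 MB default.
            // Overload: create(path, overwrite, bufferSize, replication, blockSize)
            long blockSize = 128L * 1024 * 1024;
            FSDataOutputStream out = fs.create(new Path("/tmp/example.dat"),
                    true, 4096, (short) 3, blockSize);
            out.writeUTF("hello hdfs");
            out.close();
            fs.close();
        }
    }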
To list the blocks that make up each file in the file system, run:
hadoop fsck / -files -blocks
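The same block information can also be obtained programmatically. The following sketch (the file path is an assumed example) uses FileSystem.getFileBlockLocations() to print each block's offset, length, and the hosts that store it:

    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListBlocks {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/user/hadoop/input.txt");   // assumed example path
            FileStatus status = fs.getFileStatus(file);

            // Ask the namenode for the block list and the datanodes holding each block.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + Arrays.toString(block.getHosts()));
            }
            fs.close();
        }
    }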
1.2. Metadata node (namenode) and data node (datanode)
The namenode manages the file system namespace and maintains all files and directories in the file system tree.
Persistent storage: the namespace image and the edit log.
It also records which data blocks make up each file and which data nodes those blocks reside on. This information is not persisted to disk, however; it is collected from the data nodes at system startup and kept in memory, so the namenode requires a large amount of memory.
Data nodes are the places where data is actually stored in the file system.
A client or a metadata node (namenode) can request to write or read data blocks from a data node.
The data node periodically returns the data block information it stores to the metadata node.
Secondary metadata node (secondary namenode):
The secondary namenode is not a standby that takes over when the metadata node fails; it is responsible for a different task.
Its main function is to periodically merge the metadata node's namespace image file with the edit log, so that the edit log does not grow too large. This is described in detail below.
The merged namespace image file is also kept on the secondary namenode (${dfs.name.dir}/image/fsimage), so that it can be used for recovery if the metadata node fails.
1.2.1. Metadata node folder structure
The VERSION file is a Java properties file that records the HDFS version information.
layoutVersion is a negative integer that records the version of the persistent on-disk data structure layout used by HDFS.
namespaceID is the unique identifier of the file system, generated when the file system is formatted for the first time.
cTime records the creation time of the metadata node's storage; for a newly formatted file system it is 0, and it is updated when the file system is upgraded.
storageType indicates that this folder stores the data structures of a metadata node (NAME_NODE).
1.2.2. File system namespace image file and edit log
When a file system client performs a write operation, the operation is first recorded in the edit log.
The metadata node keeps the file system metadata in memory; after the edit log has been written, the metadata node updates the in-memory data structures.
Before each write operation returns success, the edit log is synced to the file system (${dfs.name.dir}/current/edits).
The fsimage file, i.e. the namespace image file, is an on-disk checkpoint of the in-memory metadata (the directory tree structure maintained by the namenode, excluding block location information). It is stored in a serialized format and cannot be modified directly on disk.
Much like a database recovering from a checkpoint plus a redo log, when the metadata node fails, the metadata of the latest checkpoint is loaded from fsimage into memory (on the secondary namenode this is the ${dfs.name.dir}/image/fsimage file), and the operations recorded in the edit log are then replayed one by one.
The secondary namenode exists to help the metadata node checkpoint its in-memory metadata information to disk.
The checkpoint process is as follows:
1. The secondary namenode asks the metadata node to generate a new edit log file, so that all subsequent log entries are written to the new file.
2. The secondary namenode fetches the fsimage file and the old edit log from the metadata node via HTTP GET.
3. The secondary namenode loads the fsimage file into memory, applies the operations recorded in the edit log, and generates a new fsimage file.
4. The new fsimage file is sent back to the metadata node via HTTP POST.
5. The metadata node replaces the old fsimage with the new one and the old edit log with the new one (created in step 1), and updates the fstime file to record the time the checkpoint was taken.
In this way, the fsimage on the metadata node always holds the metadata of the latest checkpoint, and the edit log starts over from empty, so it never grows very large. (Careful readers will notice that when the system starts, the modification time of the edit log ${dfs.name.dir}/current/edits changes and its size decreases; this is because the secondary namenode has merged fsimage and edits.)
1.2.3. Directory structure of the secondary metadata node
1.2.4. Directory structure of data nodes
The data node also has a VERSION file, a Java properties file similar in format to the metadata node's.
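As an illustration only (all values below are made up, and the exact fields can vary between Hadoop versions), a data node VERSION file looks roughly like this:

    #Tue May 01 12:00:00 CST 2012
    namespaceID=1234567890
    storageID=DS-1234567890-127.0.0.1-50010-1335844000000
    cTime=0
    storageType=DATA_NODE
    layoutVersion=-32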
blk_<id> files store the HDFS data blocks themselves, i.e. the raw binary data.
blk_<id>.meta files store each data block's attribute information: version information, type information, and checksums.
When the number of data blocks in a directory reaches a certain threshold, a subfolder is created to hold further data blocks and their attribute files.
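Putting this together, a data node storage directory has roughly the following layout (illustrative; the block file names and the number of subfolders vary):

    ${dfs.data.dir}/current/
        VERSION
        blk_3421389425432
        blk_3421389425432.meta
        blk_7563412098764
        blk_7563412098764.meta
        ...
        subdir0/    (created once this directory holds too many blocks)
        subdir1/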
2. Data flow
2.1. File reading process
1. The client opens the file by calling the open() method of FileSystem.
2. DistributedFileSystem calls the metadata node over RPC to obtain the file's data block information.
3. For each data block, the metadata node returns the addresses of the data nodes that store it.
4. DistributedFileSystem returns an FSDataInputStream to the client for reading data.
5. The client calls the stream's read() method to start reading.
6. DFSInputStream connects to the nearest data node that stores the first data block of the file.
7. Data is read from the data node back to the client.
8. When the block has been read completely, DFSInputStream closes the connection to that data node and connects to the nearest data node that stores the next block of the file. (Some readers may wonder why different blocks of the same replica of a file can end up on different datanodes: if a datanode fails or runs out of space while the file is being written, the write pipeline is redirected to another datanode, so that block is stored in a different location.)
9. When the client finishes reading, it calls the close() method of FSDataInputStream.
During reading, if the client fails to communicate with a data node, it tries the next data node that holds the same block.
Failed data nodes are recorded and not contacted again for later blocks.
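The read path above corresponds to only a few lines of client code. Here is a minimal sketch, with an assumed example file path, that opens a file on HDFS and streams it to standard output:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsRead {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            // For an hdfs:// default file system this returns a DistributedFileSystem.
            FileSystem fs = FileSystem.get(conf);

            // open() asks the namenode (over RPC) for the block locations of the file
            // and returns an FSDataInputStream that reads each block from the nearest datanode.
            FSDataInputStream in = fs.open(new Path("/user/hadoop/input.txt"));
            try {
                IOUtils.copyBytes(in, System.out, 4096, false);   // stream the contents to stdout
            } finally {
                IOUtils.closeStream(in);
                fs.close();
            }
        }
    }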
2.2. File writing process
1. The client calls create() to create a file.
2. DistributedFileSystem calls the metadata node over RPC to create the new file in the file system namespace.
3. The metadata node first checks that the file does not already exist and that the client has permission to create it, and then creates the new file.
4. DistributedFileSystem returns a DFSOutputStream, which the client uses to write data.
5. The client starts writing data. DFSOutputStream splits the data into packets and writes them to the data queue.
6. The data queue is consumed by the DataStreamer, which asks the metadata node to allocate data nodes for the new data block (each block is replicated to three data nodes by default). The allocated data nodes form a pipeline.
7. The DataStreamer writes the packets to the first data node in the pipeline; the first data node forwards them to the second, and the second forwards them to the third.
8. DFSOutputStream also keeps an ack queue of the packets it has sent, and waits for the data nodes in the pipeline to acknowledge that the data has been written successfully.
If a data node fails while data is being written to it:
The pipeline is closed, and the packets in the ack queue are placed back at the front of the data queue (i.e. the unacknowledged data is re-sent).
The current data block is given a new identity, which is reported to the metadata node, so that if the failed data node later recovers, the partial block it holds is recognized as stale and deleted.
The failed data node is removed from the pipeline, and the rest of the block's data is written to the two remaining data nodes in the pipeline. Although the pipeline now contains only two data nodes, the write is not affected: by default HDFS considers a write successful as long as one replica has been written, and the missing replicas can be re-created later from the existing ones.
The metadata node notices that the block is under-replicated and arranges for another replica to be created later.
When the client finishes writing data, it calls the stream's close() method. This flushes all remaining packets to the data node pipeline, waits for the ack queue to report success, and finally notifies the metadata node that the write is complete.
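Seen from the client side, the whole write path reduces to a short program. Here is a minimal sketch (the file path and content are assumed examples):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWrite {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // create() registers the new file in the namenode's namespace and returns
            // an output stream; the data written to it is queued and pushed through
            // the datanode pipeline (3 replicas by default).
            FSDataOutputStream out = fs.create(new Path("/user/hadoop/output.txt"));
            try {
                out.writeBytes("hello HDFS\n");
            } finally {
                // close() flushes the remaining packets, waits for the pipeline acks,
                // and tells the namenode that the file is complete.
                out.close();
                fs.close();
            }
        }
    }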
Reference: http://www.linuxidc.com/Linux/2012-06/62885p2.htm