The basic framework and working process of HDFS

1. Basic composition and file access process

HDFS is a distributed file system built on top of the local file systems of a set of server nodes. It adopts the classic master/slave structure; its basic composition is shown in Figure 3-1.

An HDFS file system consists of a single master node, the NameNode, and a set of slave nodes, the DataNodes. The NameNode is a master server that manages the namespace and metadata of the entire file system and handles file access requests from outside. The NameNode maintains three kinds of file system metadata: (1) the namespace, that is, the directory structure of the entire distributed file system; (2) the mapping table between file names and data blocks; (3) the location information of each data block's replicas (each block has 3 replicas by default).

HDFS presents a namespace in which user data appears to be stored in ordinary files; internally, however, a file may be partitioned into a number of data blocks. The DataNodes actually store and manage these blocks. Each data block has a default size of 64 MB, and to prevent data loss each block has 3 replicas by default. The 3 replicas are placed on separate nodes, so the failure of one node cannot cause the complete loss of a data block.

The data held by each DataNode is actually stored in the local Linux file system of that node.

File operations such as opening, closing, and renaming go through the NameNode, which is also responsible for allocating data blocks to DataNodes and maintaining the correspondence between blocks and DataNodes. The DataNodes handle the concrete read and write requests of file system users, and also carry out NameNode instructions to create and delete replicas of data blocks.

The NameNode and DataNode programs can run on inexpensive commodity servers, which typically run a GNU/Linux operating system. HDFS is written in Java, so any machine that supports the JVM can run the NameNode or DataNode program. Although GNU/Linux is the usual choice, the portability of Java allows HDFS to run on many other platforms. In a typical deployment, the NameNode program runs on a dedicated server node, and each of the remaining server nodes runs a DataNode program.
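For illustration, a minimal sketch of such a deployment only needs every node to know where the NameNode runs; in core-site.xml this is the fs.defaultFS property (fs.default.name in older releases). The host name and port below are hypothetical, not a prescribed value.

<!-- core-site.xml: points every node at the NameNode.
     The host name "namenode-host" is hypothetical. -->
<configuration>
  <property>
    <!-- fs.defaultFS in Hadoop 2+; fs.default.name in older releases -->
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>

The list of DataNode hosts is conventionally kept in the workers file (slaves in older releases) under the Hadoop configuration directory, one host name per line, for use by the cluster start-up scripts.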

Using a single NameNode in a cluster greatly simplifies the system architecture. In addition, although the NameNode is the sole manager of all HDFS metadata, when a program accesses a file the actual data stream does not pass through the NameNode: the client only obtains the storage location information of the desired data blocks from the NameNode, and then accesses the corresponding DataNodes directly to get the data. This design has two benefits: first, the data of a file can be accessed concurrently on different DataNodes, which improves the speed of data access; second, it greatly reduces the load on the NameNode and keeps the NameNode from becoming a bottleneck for data access.
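This separation of the metadata path from the data path can be observed through the public Java API. In the sketch below, getFileBlockLocations() returns, from the NameNode, the DataNode hosts holding each block of a file; no file data flows through the NameNode in this exchange. The file path is hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsDemo {
    public static void main(String[] args) throws Exception {
        // Connects to the NameNode named by fs.defaultFS in the configuration.
        FileSystem fs = FileSystem.get(new Configuration());

        Path file = new Path("/user/test/data.txt");   // hypothetical file
        FileStatus status = fs.getFileStatus(file);

        // Ask the NameNode for the location of every block of the file.
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());

        for (BlockLocation block : blocks) {
            System.out.println("offset " + block.getOffset()
                    + ", length " + block.getLength()
                    + ", hosts " + String.join(",", block.getHosts()));
        }
        fs.close();
    }
}

Reading the data itself would then go straight to the hosts printed above, bypassing the NameNode entirely.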

The basic file access procedure in HDFS is as follows (a client-side sketch in Java follows the list):

(1) First, the user's application sends the file name to the NameNode through an HDFS client program.

(2) After receiving the file name, the NameNode looks it up in the HDFS directory tree, retrieves the block information of the file, finds the addresses of the DataNodes holding those blocks, and sends the addresses back to the client.

(3) After receiving the DataNode addresses, the client carries out the data transfer operations with these DataNodes in parallel, and reports the results of the operations (such as success status and modified block information) back to the NameNode.
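A minimal sketch of this procedure using the public HDFS Java client API: steps (1) through (3) all happen behind fs.open() and the subsequent reads. The file path is hypothetical.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Steps (1) and (2): the client hands the file name to the NameNode,
        // which replies with the DataNode addresses of each block.
        FSDataInputStream in = fs.open(new Path("/user/test/data.txt")); // hypothetical path

        // Step (3): the stream fetches the blocks directly from the DataNodes.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        fs.close();
    }
}

Note that open() returns as soon as the NameNode has supplied the block locations; the data itself streams from the DataNodes during the read calls.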

2. Data block

To improve disk I/O efficiency, the smallest unit of data read in a file system is not the byte but a larger unit, the data block. Block information is transparent to users, however: without special tools it is difficult to see the details of specific data blocks.

HDFS also has the concept of a data block. However, unlike the blocks of a general-purpose file system, which are a few KB in size, the default HDFS block size is 64 MB, and in many practical deployments the block size is set to 128 MB or more, thousands of times larger than the few-KB blocks of an ordinary file system.

The reason for making blocks this large is to reduce addressing overhead. In HDFS, when an application initiates a data transfer request, the NameNode first retrieves the block information for the file and locates the DataNodes holding those blocks; each DataNode then finds the corresponding blocks in its local storage according to the block information and exchanges the data with the application. Because this lookup is handled by the single NameNode machine, increasing the block size reduces both the number of lookups per file and their total time overhead.
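As an illustration, the block size can be raised per cluster in hdfs-site.xml. The property name dfs.blocksize applies to Hadoop 2 and later (older releases use dfs.block.size), and the 128 MB value here is just an example, not a recommendation.

<!-- hdfs-site.xml -->
<configuration>
  <property>
    <!-- dfs.blocksize in Hadoop 2+; dfs.block.size in older releases -->
    <name>dfs.blocksize</name>
    <value>134217728</value> <!-- 128 MB = 128 * 1024 * 1024 bytes -->
  </property>
</configuration>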

3. Namespaces

File naming in HDFS follows the traditional "directory/subdirectory/file" format. A directory can be created from the command line or through an API, files can be saved in directories, and files can be created, deleted, and renamed. However, links are not allowed in HDFS (neither hard links nor symbolic links). The namespace is managed by the NameNode, and all changes to the namespace (including creating, deleting, renaming, and changing attributes, but excluding opening, reading, and writing data) are recorded by HDFS.

HDFS allows users to configure how many replicas of a file are kept; this number is called the replication factor, and it is also stored by the NameNode.
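The cluster-wide default is controlled by the dfs.replication property in hdfs-site.xml; the replication factor of an individual file can also be changed through the Java API, as in this sketch (the path is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Override the default replication factor (dfs.replication, 3 by
        // default) for one file; the NameNode records the new value and
        // schedules replica creation or deletion on the DataNodes.
        fs.setReplication(new Path("/user/test/data.txt"), (short) 2); // hypothetical path
        fs.close();
    }
}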

4. Communication protocol

As a distributed file system, HDFS transfers most of its data over the network. To ensure reliable transmission, HDFS uses TCP as the underlying transport protocol. Applications initiate TCP connections to the NameNode. The protocol for interacting with the NameNode is called the Client Protocol, and the protocol for interaction between the NameNode and the DataNodes is called the DataNode Protocol (refer to other materials for the details of these protocols). Interactions with the file system are carried out through remote procedure calls (Remote Procedure Call, RPC) initiated by the client and answered by the NameNode; the NameNode never initiates remote procedure call requests itself.

5. Client

Strictly speaking, the client is not a part of HDFS, but it is the most common and convenient channel for users to communicate with HDFS, and every deployed HDFS provides one.

The client gives users a way to access HDFS data in a manner similar to the shell in Linux. It supports the most common operations (open, read, write, and so on), and its command format is very similar to that of the shell, which greatly eases the work of programmers and administrators. Detailed command-line operations are covered in section 3.4.
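For example, typical client invocations look like the following (paths are hypothetical; older releases spell the command hadoop fs rather than hdfs dfs):

hdfs dfs -mkdir /user/test          # create a directory
hdfs dfs -put local.txt /user/test  # upload a local file
hdfs dfs -ls /user/test             # list a directory
hdfs dfs -cat /user/test/local.txt  # print a file's contents
hdfs dfs -rm /user/test/local.txt   # delete a file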

In addition to the command-line client, HDFS provides client programming interfaces for accessing the file system during application development, as detailed in section 3.5, the HDFS programming interface.
