About HDFS
The Hadoop Distributed File System, referred to as HDFS, is a distributed filesystem. HDFS is highly fault-tolerant, can be deployed on low-cost hardware, and provides high-throughput access to application data, which makes it well suited to applications with large data sets. It has the following characteristics:
1) Suitable for storing very large files
2) Suitable for streaming data access, i.e. the "write once, read many times" data processing pattern (a short example follows the lists below)
3) Suitable for deployment on inexpensive machines
However, HDFS is not suitable for the following scenarios (every technology must be viewed from both sides; only the technology that fits your business is truly good technology):
1) Not suitable for storing large numbers of small files, because of the NameNode memory size limit
2) Not suitable for real-time data access; high throughput and low latency are at odds, and HDFS chooses the former
3) Not suitable for scenarios that require frequent data changes
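To make the "write once, read many times" pattern concrete, here is a minimal sketch that writes a small file to HDFS once and then streams it back with the standard Hadoop Java FileSystem API. The path /tmp/hdfs-demo.txt and the cluster settings (picked up from core-site.xml/hdfs-site.xml on the classpath) are illustrative assumptions, not part of this article:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteOnceReadMany {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // reads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);          // connects to the cluster named in fs.defaultFS
        Path file = new Path("/tmp/hdfs-demo.txt");    // hypothetical path

        // Write once: HDFS files are written sequentially and then closed;
        // existing data cannot be modified in place.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read many times: the same file can be streamed repeatedly by any client.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        fs.close();
    }
}
```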
HDFS Architecture
As shown in the HDFS architecture diagram, HDFS uses a master/slave architecture overall and consists of the following four parts:
1. Client
The client interacts with the NameNode to obtain file metadata, then reads data from and writes data to the DataNodes directly.
2. NameNode
The entire HDFS cluster has only one NameNode, which alone stores the metadata for every file in the cluster. This information is persisted on local disk in two files, the fsimage and the editlog, and clients locate the corresponding files through this metadata. In addition, the NameNode monitors the health of the DataNodes; once a DataNode is found to be abnormal, it is removed from the cluster and its data is replicated to other DataNodes.
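As a rough illustration of how a client uses this metadata, the sketch below asks the NameNode for a file's status and block locations through the Hadoop Java API. The file path is a hypothetical example, and the connection details come from whatever configuration is on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationLookup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/data/example.log");   // hypothetical path

        // Ask the NameNode for the file's metadata ...
        FileStatus status = fs.getFileStatus(file);
        System.out.println("length=" + status.getLen()
                + " blockSize=" + status.getBlockSize()
                + " replication=" + status.getReplication());

        // ... and for the DataNodes that currently hold each block of the file.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation b : blocks) {
            System.out.println("offset=" + b.getOffset()
                    + " hosts=" + String.join(",", b.getHosts()));
        }
        fs.close();
    }
}
```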
3. Secondary NameNode
The Secondary NameNode is responsible for periodically merging the NameNode's fsimage and editlog. It is particularly important to note that it is not a hot standby for the NameNode, so the NameNode remains a single point of failure. Its main purpose is to take over part of the NameNode's work (especially the memory-consuming merge work, because memory is a very valuable resource to the NameNode), and in an emergency it can assist in recovering the NameNode.
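How often this merge happens is driven by configuration. Assuming a recent Hadoop 2.x/3.x cluster, the minimal sketch below simply reads the two standard checkpoint properties; the defaults shown (3600 seconds and 1,000,000 transactions) are the stock values, and an actual cluster's hdfs-site.xml may override them:

```java
import org.apache.hadoop.conf.Configuration;

public class CheckpointSettings {
    public static void main(String[] args) {
        Configuration conf = new Configuration();   // loads hdfs-site.xml if present on the classpath

        // Merge fsimage + editlog at least every N seconds (default 3600 = 1 hour) ...
        long periodSec = conf.getLong("dfs.namenode.checkpoint.period", 3600);
        // ... or sooner, once this many uncheckpointed transactions have accumulated.
        long txns = conf.getLong("dfs.namenode.checkpoint.txns", 1000000);

        System.out.println("checkpoint period (s): " + periodSec);
        System.out.println("checkpoint txns threshold: " + txns);
    }
}
```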
4. DataNode
The DataNode is responsible for the actual storage of data and is the basic unit of file storage. When a file is uploaded to the HDFS cluster, it is split into blocks, which are distributed across the DataNodes; to ensure data reliability, each block is written to multiple DataNodes (3 by default). Each DataNode also periodically reports all of the blocks it holds to the NameNode.
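Replication is set per file, so a client can choose it at write time or change it afterwards. The following sketch shows both through the standard FileSystem API; the path and values are illustrative assumptions, not from this article:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/data/replicated.txt");   // hypothetical path

        // Create the file with an explicit replication factor and block size
        // (overriding the cluster defaults, typically 3 replicas and 128 MB blocks).
        short replication = 3;
        long blockSize = 128L * 1024 * 1024;
        FSDataOutputStream out = fs.create(file, true, 4096, replication, blockSize);
        out.writeUTF("each block of this file is stored on 3 DataNodes");
        out.close();

        // The replication factor can also be changed later; the NameNode then
        // schedules extra copies (or deletions) on the DataNodes.
        fs.setReplication(file, (short) 2);
        fs.close();
    }
}
```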
HDFS Architecture Principles
1) Separation of metadata and data
2) Master/slave architecture
3) Write once, read many times
4) Moving computation is cheaper than moving data
1. Metadata and data separation
Reference article:
http://www.open-open.com/lib/view/open1370958803132.html
http://blog.jobbole.com/34244/
Hadoop HDFS Architecture Design