Read file:
This is the process by which HDFS reads a file. Here is a detailed explanation:
1. When the client begins to read a file, it first obtains from the Namenode the datanode locations for the first few blocks of the file. (Steps 1, 2)
2. The client then calls read(). It first reads the blocks whose datanode locations were obtained from the Namenode; when those blocks have been read, it asks the Namenode for the datanode locations of the next batch of blocks. (Steps 3, 4, 5)
3. Finally, the client calls the close() method to finish the read. (Step 6)
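The batched read loop above can be sketched as a toy simulation. Everything here (the class name, `BLOCKS_PER_BATCH`, the block payloads) is invented for illustration; the real client works through Hadoop's DistributedFileSystem and its input stream, not this code.

```java
import java.util.Arrays;
import java.util.List;

// Toy simulation of the HDFS read flow: fetch datanode info for a
// batch of blocks, read them, then ask for the next batch.
public class ReadFlowDemo {
    // Pretend each block's payload is a short string.
    static final List<String> BLOCKS =
        Arrays.asList("blk_0", "blk_1", "blk_2", "blk_3", "blk_4");
    static final int BLOCKS_PER_BATCH = 2; // "first few blocks" at a time

    // Steps 1, 2: the Namenode returns locations for a batch of blocks.
    static List<String> getBlockLocations(int fromBlock) {
        int to = Math.min(fromBlock + BLOCKS_PER_BATCH, BLOCKS.size());
        return BLOCKS.subList(fromBlock, to);
    }

    public static void main(String[] args) {
        StringBuilder fileContents = new StringBuilder();
        int next = 0;
        // Steps 3-5: read a batch, then ask the Namenode for the next one.
        while (next < BLOCKS.size()) {
            for (String block : getBlockLocations(next)) {
                fileContents.append(block).append(';');
                next++;
            }
        }
        // Step 6: close() would release the stream here.
        System.out.println(fileContents);
    }
}
```

The point of the batching is that the client never needs the full block map up front; it streams location metadata from the Namenode as it goes.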
What if something goes wrong while reading a block? The client reads that block from the next-best datanode holding a replica, and notifies the Namenode of the faulty node.
This complex sequence of steps is transparent to the client, which sees only a steady stream of data coming from the stream.
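The failover behavior described above can be sketched like this. The datanode names, the failure set, and the `readBlock` helper are all hypothetical; this only illustrates the fall-back-and-report idea, not the real client code.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy sketch of read failover: if the preferred datanode fails, the
// client tries the next-best replica and records the bad node so the
// Namenode can be told about it.
public class ReadFailover {
    static String readBlock(List<String> replicas, Set<String> deadNodes,
                            List<String> reportedToNamenode) {
        for (String datanode : replicas) {        // replicas sorted best-first
            if (deadNodes.contains(datanode)) {   // simulate a read error
                reportedToNamenode.add(datanode); // "notify the Namenode"
                continue;                          // fall back to next replica
            }
            return "read from " + datanode;
        }
        throw new RuntimeException("all replicas failed");
    }

    public static void main(String[] args) {
        List<String> reported = new ArrayList<>();
        Set<String> dead = new HashSet<>();
        dead.add("dn1"); // pretend the best datanode is down
        String result = readBlock(List.of("dn1", "dn2", "dn3"), dead, reported);
        System.out.println(result + "; reported bad nodes: " + reported);
    }
}
```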
Write file:
1. First the client tells the Namenode: "I want to create a file." The Namenode performs a series of checks, such as whether the file already exists. Once the checks pass, the Namenode creates the file record, and the client can then write data. (Steps 1, 2)
2. When the client begins writing, it splits the data into packets and puts them into a data queue. The Namenode then assigns the client a list of datanodes to write to. By default the list contains three datanodes, giving three copies of the data for redundancy; the replication itself is carried out between the datanodes in a pipeline. (Steps 3, 4, 5)
3. When the client receives an acknowledgement (ack packet) confirming that a packet has been written to all the datanodes, it removes that packet from the queue and writes the next one.
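The queue handling in the write steps above can be sketched as follows. The packet names, the `PIPELINE` array, and the `writePackets` helper are invented for illustration; real HDFS streams packets through the datanode pipeline asynchronously, while this sketch acknowledges each one immediately.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy sketch of the write path: packets move from a data queue to an
// ack queue, are "sent" through a pipeline of three datanodes (the
// default replication factor), and are dropped from the ack queue once
// the whole pipeline has acknowledged them.
public class WritePipelineDemo {
    static final String[] PIPELINE = {"dn1", "dn2", "dn3"}; // 3 replicas

    // Returns the packets in the order they were acknowledged.
    static List<String> writePackets(List<String> packets) {
        Deque<String> dataQueue = new ArrayDeque<>(packets);
        Deque<String> ackQueue = new ArrayDeque<>();
        List<String> acked = new ArrayList<>();
        while (!dataQueue.isEmpty()) {
            String packet = dataQueue.poll();
            ackQueue.add(packet); // held until the whole pipeline acks
            // In real HDFS the packet streams dn1 -> dn2 -> dn3 and the
            // ack travels back; here we simulate an immediate success.
            acked.add(ackQueue.poll()); // delete from the queue on ack
        }
        return acked;
    }

    public static void main(String[] args) {
        System.out.println(writePackets(
            List.of("packet_0", "packet_1", "packet_2")));
    }
}
```

Keeping unacknowledged packets in a separate ack queue is what lets the client resend them if a datanode in the pipeline fails mid-write.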
Hadoop Learning Note--hadoop Read and write file process