Today's recommended article, published on the blog of the well-known Hadoop vendor Cloudera, gives a detailed, illustrated explanation of several typical Hadoop file formats and the relationships among them. NoSQLFan translates the main content below (if there are errors or omissions, corrections are welcome).

1. Hadoop's SequenceFile
SequenceFile is an important data file format in Hadoop. It provides key-value storage, but unlike traditional key-value stores (such as hash tables or B-trees) it is append-only, so a record that has already been written cannot be modified in place. Each key-value record is laid out as shown in the figure: besides the key and the value themselves, the record also stores their lengths.
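The length-prefixed, append-only record layout can be sketched as follows. This is a simplified illustration of the idea, not Hadoop's exact on-disk binary format: each record stores the key length, the key, the value length, and the value, and appending the same key again does not overwrite the earlier record.

```python
import struct

def append_record(buf: bytearray, key: bytes, value: bytes) -> None:
    # Layout per record: [key length][key][value length][value]
    buf += struct.pack(">I", len(key)) + key
    buf += struct.pack(">I", len(value)) + value

def read_records(buf: bytes):
    # Walk the buffer record by record using the stored lengths.
    off = 0
    while off < len(buf):
        (klen,) = struct.unpack_from(">I", buf, off); off += 4
        key = bytes(buf[off:off + klen]); off += klen
        (vlen,) = struct.unpack_from(">I", buf, off); off += 4
        value = bytes(buf[off:off + vlen]); off += vlen
        yield key, value

buf = bytearray()
append_record(buf, b"k1", b"hello")
append_record(buf, b"k1", b"world")   # append-only: the old record remains
print(list(read_records(buf)))
# -> [(b'k1', b'hello'), (b'k1', b'world')]
```

Because both records for `k1` survive, a reader scanning the file sees every version that was ever written; resolving duplicates is left to whatever structure is built on top.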
A SequenceFile can be in one of three compression states: uncompressed – no compression is applied; record-compressed – the value of each record is compressed individually (the file header records which compression codec is used); block-compressed – records are buffered until the accumulated data reaches a certain size, at which point writing pauses and all the key lengths, keys, value lengths, and values are compressed together as one block.
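The difference between record and block compression can be shown with a small experiment. This sketch uses Python's zlib as a stand-in codec (Hadoop would use a pluggable codec named in the header): compressing many similar values together in one block exploits redundancy across records, which per-record compression cannot.

```python
import zlib

# 200 records with a highly repetitive shared structure.
values = [b"row-%05d:the quick brown fox" % i for i in range(200)]

# Record compression: each value is compressed on its own.
record_sz = sum(len(zlib.compress(v)) for v in values)

# Block compression: all values are compressed together as one block.
block_sz = len(zlib.compress(b"".join(values)))

print(record_sz, block_sz)
assert block_sz < record_sz  # the block exploits cross-record redundancy
```

Per-record compression pays the codec's fixed overhead on every record and never sees repetition between records, which is why block compression generally yields a much smaller file for data like this.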
The file's compression state is recorded in the header at the beginning of the file. The header is followed by a metadata section: a set of simple attribute/value pairs that record additional information about the file. The metadata is written when the file is created and cannot be changed afterwards.
2. MapFile, SetFile, ArrayFile and BloomMapFile
SequenceFile is Hadoop's basic data file format; MapFile, SetFile, ArrayFile and BloomMapFile are all built on top of it. MapFile – a key-value lookup structure consisting of a data file (/data) and an index file (/index). The data file contains all the key-value pairs to be stored, sorted by key. The index file contains a subset of the keys, each pointing to that key's position in the data file. SetFile – based on MapFile; it stores only keys, with a fixed immutable value for every entry. ArrayFile – also based on MapFile; it behaves like an ordinary array, the key being the record's serial number. BloomMapFile – adds a /bloom file on top of MapFile, which contains a binary Bloom filter table that is updated every time a write completes.
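The MapFile lookup described above can be sketched in a few lines. This is a toy model under assumed simplifications (an in-memory list instead of a /data file, and a sparse index keeping every fourth key): a lookup binary-searches the index for the last indexed key not greater than the target, then scans forward in the sorted data from that position.

```python
import bisect

# Toy MapFile: sorted (key, value) records plus a sparse index that
# keeps every INTERVAL-th key and its position in the data list.
INTERVAL = 4
data = sorted((b"key-%03d" % i, b"val-%03d" % i) for i in range(20))
index = [(data[i][0], i) for i in range(0, len(data), INTERVAL)]
index_keys = [k for k, _ in index]

def lookup(key: bytes):
    # Find the last indexed key <= the target, then scan forward.
    pos = bisect.bisect_right(index_keys, key) - 1
    if pos < 0:
        return None            # target sorts before every indexed key
    start = index[pos][1]
    for k, v in data[start:start + INTERVAL]:
        if k == key:
            return v
    return None

print(lookup(b"key-007"))  # -> b'val-007'
print(lookup(b"key-999"))  # -> None
```

Keeping only a subset of keys in /index lets the whole index fit in memory even for a very large data file, at the cost of a short sequential scan after each index probe; Hadoop's `io.map.index.interval` plays the role of `INTERVAL` here.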
Original link: Hadoop I/O: Sequence, Map, Set, Array, BloomMap Files
Related links:
1. An illustrated look at the HBase data write and compression process
2. HBase file structure diagram
From: http://blog.nosqlfan.com/html/1217.html