By default, Elasticsearch stores the body of each document as a JSON string in the _source field. Like other stored fields, the _source is compressed before it is written to disk: Lucene stores it as a binary blob and compresses it with LZ4 (or DEFLATE). In fact, multiple _source values are merged into a single chunk for LZ4 compression.
For Solr: the fdt and fdx format used in Solr 4.8.0 is the Lucene 4.1 format. To improve the compression ratio, the StoredFieldsFormat compresses documents in 16 KB chunks. The compression algorithm used is LZ4; since it favors speed over compression ratio, it can compress and decompress quickly.
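The 16 KB chunking idea can be sketched as follows. The `lz4` method here is a hypothetical stand-in for the real LZ4 codec; the point is only how the buffered stored-field bytes are cut into independently compressed chunks.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Sketch of 16 KB chunking: buffered doc bytes are split into chunks of at
// most CHUNK_SIZE, and each chunk is compressed on its own, so reading one
// doc only requires decompressing the chunk that contains it.
public class ChunkingSketch {
    static final int CHUNK_SIZE = 1 << 14; // 16 KB, as in the Lucene 4.1 format

    // hypothetical stand-in for a real LZ4 compressor
    static byte[] lz4(byte[] data, int off, int len) {
        throw new UnsupportedOperationException("replace with a real LZ4 codec");
    }

    public static byte[] compressInChunks(byte[] buffered) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int off = 0; off < buffered.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, buffered.length - off);
            out.write(lz4(buffered, off, len));
        }
        return out.toByteArray();
    }
}
```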
The format of the fdx/fdt files
For the full specification, see Lucene41StoredFieldsFormat.html in the Lucene 4.2.0 docs.
FDT File Structure:
A chunk can be understood as a storage area for data; in memory it behaves as a cache. A chunk consists of five parts:
- DocBase: the starting docID of the current chunk.
- ChunkDocs: the number of docs in the current chunk.
- DocFieldCounts: an array giving the number of fields in each doc.
- DocLengths: an array giving the number of bytes each doc occupies, i.e. the doc's length.
- <CompressedDocs>: the contents of the docs, compressed with the LZ4 algorithm. Here FieldNumAndType merges FieldNumber and FieldType into a single VLong, and the whole <CompressedDocs> section is an alternating sequence of FieldNumAndType and Value.
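As a hedged sketch of that layout, using Lucene's DataOutput but spelling the five parts out literally (the real writer also bit-packs the two arrays, as discussed further below):

```java
import java.io.IOException;
import org.apache.lucene.store.DataOutput;

// Sketch of the fdt chunk layout described above. Field names follow the
// format docs, not Lucene's actual writer code.
public class FdtChunkSketch {
    static void writeChunk(DataOutput fdt, int docBase, int chunkDocs,
                           int[] docFieldCounts, int[] docLengths,
                           byte[] compressedDocs) throws IOException {
        fdt.writeVInt(docBase);                    // first docID in this chunk
        fdt.writeVInt(chunkDocs);                  // how many docs the chunk holds
        for (int i = 0; i < chunkDocs; i++) {
            fdt.writeVInt(docFieldCounts[i]);      // fields per doc
        }
        for (int i = 0; i < chunkDocs; i++) {
            fdt.writeVInt(docLengths[i]);          // bytes per doc
        }
        // <CompressedDocs>: alternating FieldNumAndType / Value pairs,
        // already serialized and LZ4-compressed by the caller
        fdt.writeBytes(compressedDocs, 0, compressedDocs.length);
    }
}
```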
FDX File Structure:
The FDX file centers on <Block>. A block consists of three parts:
- BlockChunks: the number of chunks in the current block.
- <DocBases>: the starting docID of each chunk in the current block; it can be thought of as an array.
- <StartPointers>: the starting position of each chunk of the current block within the fdt file, with the same structure as <DocBases>.
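Again as a sketch only (the real CompressingStoredFieldsIndexWriter stores averages plus packed deltas rather than raw values), the three parts map to writes like these:

```java
import java.io.IOException;
import org.apache.lucene.store.DataOutput;

// Sketch of the fdx block layout: a chunk count, then the first docID of
// each chunk, then each chunk's byte offset in the fdt file.
public class FdxBlockSketch {
    static void writeBlock(DataOutput fdx, int blockChunks,
                           int[] docBases, long[] startPointers) throws IOException {
        fdx.writeVInt(blockChunks);                // chunks in this block
        for (int i = 0; i < blockChunks; i++) {
            fdx.writeVInt(docBases[i]);            // first docID of each chunk
        }
        for (int i = 0; i < blockChunks; i++) {
            fdx.writeVLong(startPointers[i]);      // chunk's offset in the fdt file
        }
    }
}
```

With these pointers, looking up a doc is a matter of finding the chunk whose DocBase range covers the docID, seeking to its StartPointer in the fdt file, and decompressing just that chunk.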
Although the fdx/fdt files are only Lucene's forward (stored-fields) files rather than the core of Lucene, there is still real substance here. Lucene 4 introduced the LZ4 algorithm to compress and decompress the docs in the fdt file on the fly, and the architecture was refactored around SPI (Service Provider Interface) technology.
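The SPI refactoring means a codec is looked up by name through META-INF/services, so the stored-fields implementation can be swapped without touching the core. A small sketch, assuming a Lucene 4.2.x classpath (where "Lucene42" is the default codec name):

```java
import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.StoredFieldsFormat;

// Sketch of the SPI lookup: the codec, and with it the stored-fields
// format that writes fdt/fdx, is resolved by name at runtime.
public class CodecSpiSketch {
    public static void main(String[] args) {
        Codec codec = Codec.forName("Lucene42");
        StoredFieldsFormat fields = codec.storedFieldsFormat();
        System.out.println(codec.getName() + " -> " + fields.getClass().getSimpleName());
    }
}
```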
1.3 Writing to the fdx/fdt files
The write path for the fdx/fdt files is quite clear. Logically it is handled by the CompressingStoredFieldsWriter class, with CompressingStoredFieldsIndexWriter as one of its member variables. The order in which they write matches the format above, apart from some differences in naming. While docs are being written, a GrowableByteArrayDataOutput is used as the cache; only when the cache is full is it flushed to disk. LZ4 compression is applied during flush. (The LZ4 algorithm will be described in another blog post.)
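A simplified sketch of that buffer-then-flush logic (the real class uses GrowableByteArrayDataOutput and also caps the number of buffered docs; ByteArrayOutputStream stands in for it here):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Sketch of the buffering in CompressingStoredFieldsWriter: serialized docs
// accumulate in memory, and a chunk is emitted once the buffer is full.
public class BufferedChunkWriter {
    static final int CHUNK_SIZE = 1 << 14;           // 16 KB chunk target
    private final ByteArrayOutputStream bufferedDocs = new ByteArrayOutputStream();
    private int numBufferedDocs;

    void addDocument(byte[] serializedDoc) throws IOException {
        bufferedDocs.write(serializedDoc);           // cache the doc in memory
        numBufferedDocs++;
        if (bufferedDocs.size() >= CHUNK_SIZE) {     // buffer full: emit a chunk
            flush();
        }
    }

    void flush() throws IOException {
        // write the chunk header, then LZ4-compress bufferedDocs
        // (see the flush sketch in the next section)
        bufferedDocs.reset();
        numBufferedDocs = 0;
    }
}
```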
Writing to the fdt file:
The basic unit of the fdt file is the chunk, which is worth keeping in mind. The code that writes a chunk to the file is as follows:
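This is an abridged sketch modeled on CompressingStoredFieldsWriter.flush() in Lucene 4.x; the fields it touches (docBase, numBufferedDocs, numStoredFields, lengths, bufferedDocs, fieldsStream, compressor, indexWriter) are members of that writer class:

```java
// Abridged sketch of the chunk write (Lucene 4.x, simplified)
private void flush() throws IOException {
    // record this chunk's position in the fdx index file
    indexWriter.writeIndex(numBufferedDocs, fieldsStream.getFilePointer());
    // line 1: chunk header -- docBase, doc count, fields per doc, doc lengths
    writeHeader(docBase, numBufferedDocs, numStoredFields, lengths);
    // line 2: the buffered doc bytes, LZ4-compressed into the fdt file
    compressor.compress(bufferedDocs.bytes, 0, bufferedDocs.length, fieldsStream);
    // reset the buffer for the next chunk
    docBase += numBufferedDocs;
    numBufferedDocs = 0;
    bufferedDocs.length = 0;
}
```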
Looking at the flush function above, we find that writing the fdt file is very simple and boils down to two lines of code.
The first line records, for the entire chunk, the DocBase (minimum docID), NumBufferedDocs (doc count), NumStoredFields (fields per doc), and Lengths (length of each doc), four kinds of information in all. When recording NumStoredFields and Lengths, the values are compressed with PackedInts and similar techniques. The second line records the full contents of all the docs in the chunk, compressed with the LZ4 algorithm.
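That PackedInts trick can be sketched like this (modeled on the writer's saveInts() helper): if every doc in the chunk has the same value, only the value itself is written; otherwise the array is bit-packed at the minimum bit width.

```java
import java.io.IOException;
import org.apache.lucene.store.DataOutput;
import org.apache.lucene.util.packed.PackedInts;

// Sketch of the header compression for NumStoredFields / Lengths.
public class HeaderIntsSketch {
    static void saveInts(int[] values, int length, DataOutput out) throws IOException {
        boolean allEqual = true;
        for (int i = 1; i < length; i++) {
            if (values[i] != values[0]) { allEqual = false; break; }
        }
        if (allEqual) {
            out.writeVInt(0);                // 0 bits per value: all docs equal
            out.writeVInt(values[0]);
        } else {
            long max = 0;
            for (int i = 0; i < length; i++) max |= values[i];
            int bitsRequired = PackedInts.bitsRequired(max);
            out.writeVInt(bitsRequired);     // bit width, then the packed values
            PackedInts.Writer w = PackedInts.getWriterNoHeader(
                out, PackedInts.Format.PACKED, length, bitsRequired, 1);
            for (int i = 0; i < length; i++) w.add(values[i]);
            w.finish();
        }
    }
}
```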
To sum up: Lucene gathers docs into a chunk and compresses the whole chunk with LZ4, and that is exactly how the Elasticsearch _source field ends up compressed.