This article is mainly translated from the DataStax Cassandra 1.2 documentation.
Compaction
Objective: reduce the number of sstables by merging many sstables into one, using sequential I/O.
In Cassandra, sstables are immutable: new columns are always written to new sstables. Compaction is the process that merges multiple sstables into one.
Figure 1: adding sstables with size tiered compaction
Over time, then, many versions of a row accumulate across different sstables, and each version may contain a different set of columns. As sstables pile up, reading a single row can require seeking into several files, so merging is required. The merge itself is efficient and needs no random I/O, because rows are stored within each sstable in sorted order (by primary key).
Figure 2: sstables under size-tiered compaction after creating inserts
Cassandra's size-tiered compaction strategy is similar to the one in Bigtable: compact whenever enough sstables of similar size have accumulated (4 by default).
In Figure 2, each green box represents an sstable, and each row represents a compaction merge. Once four sstables of a tier exist, they are merged. The figure shows the hierarchy after some time has passed: first-tier sstables merge into the second tier, second-tier sstables into the third, and so on.
Size-tiered compaction has three problems for frequently updated workloads:
1. Read performance is inconsistent, because there is no bound on how many sstables a row may span. In the worst case, every sstable could hold some columns of a given row.
2. A large amount of space may be wasted, because it is hard to predict when the fragments holding an obsolete column will all be merged away; this is especially costly when there are many deletes.
3. Space is also a problem as sstables grow larger through repeated compactions, since an obsolete sstable cannot be removed until the merged sstable is completely written. In the worst case, a single set of large sstables with no obsolete rows to remove, Cassandra would need 100% as much free space as is used by the sstables being compacted, into which to write the merged one.
Cassandra 1.0 introduced the leveled compaction strategy, based on the Chromium team's LevelDB.
Leveled compaction creates fixed-size sstables (5 MB by default) grouped into "levels". Within each level, sstables are guaranteed not to overlap in key range, and each level is ten times larger than the previous one.
Figure 3: adding sstables under leveled compaction
In Figure 3, a new sstable first enters the top level, L0, and is immediately merged with the sstables of L1 (blue). When L1 fills up, extra sstables are merged into L2 (purple). Subsequent sstables generated in L1 are compacted with the sstables in L2 whose key ranges they overlap. As more data is added, leveled compaction results in a situation like the one shown in Figure 4.
Figure 4: sstables under leveled compaction after creating inserts
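The key property is the non-overlap guarantee within a level. A toy model (all numbers and names here are illustrative, assuming an integer key space split evenly): because the sstables of one level cover disjoint key ranges, at most one sstable per level can contain any given key.

```python
# Toy model of leveled compaction's layout: level n holds 10**n fixed-size
# sstables whose key ranges are disjoint, so a read probes at most one
# sstable per level.

GROWTH = 10  # each level holds 10x as many sstables as the previous one

def level_ranges(level, key_space=10_000):
    """The non-overlapping (start, end) key ranges of the sstables in a level."""
    n = GROWTH ** level
    width = key_space // n
    return [(i * width, (i + 1) * width - 1) for i in range(n)]

def sstable_for_key(level, key, key_space=10_000):
    """Index of the single sstable in `level` that could hold `key`."""
    width = key_space // (GROWTH ** level)
    return key // width

print(len(level_ranges(1)))      # L1 holds 10 sstables
print(len(level_ranges(2)))      # L2 holds 100 sstables
print(sstable_for_key(2, 4321))  # exactly one candidate sstable in L2
```

An L1-to-L2 compaction therefore only rewrites the handful of L2 sstables whose ranges overlap the L1 sstable being promoted, which keeps each compaction small.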
This approach solves the problems above:
1. Leveled compaction guarantees that 90% of reads are satisfied from a single sstable (assuming uniform row size). The worst case is one read per level: for example, 10 TB of data requires at most 7 reads.
2. At most 10% of space is wasted on obsolete rows.
3. Compaction only needs about 10x the sstable size as temporary space.
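The 7-level figure can be sanity-checked with back-of-envelope arithmetic, under the defaults stated above (5 MB sstables, 10x growth per level). As a simplification this counts only the capacity of the deepest level, which dominates the total:

```python
# Sanity check: how many levels does 10 TB of data need, given 5 MB
# sstables and 10x growth per level? (Simplification: the deepest level
# dominates, so we only check its capacity.)
import math

SSTABLE_MB = 5
GROWTH = 10

def levels_needed(total_mb):
    sstables = math.ceil(total_mb / SSTABLE_MB)
    n = 1
    while GROWTH ** n < sstables:  # level n holds GROWTH**n sstables
        n += 1
    return n

ten_tb_in_mb = 10 * 1024 * 1024
print(levels_needed(ten_tb_in_mb))  # 7, matching the worst case above
```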
Usage: set the compaction_strategy option to LeveledCompactionStrategy when creating or altering the table schema. (The conversion of existing sstables also happens in the background, so changing the compaction strategy on an existing table does not affect reads or writes.)
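As a hedged illustration, in CQL 3 syntax (Cassandra 1.2 era) this is expressed through the compaction map; the table and column names here are made up, and sstable_size_in_mb is the LCS sub-option matching the 5 MB default discussed above:

```sql
-- Create a table that uses leveled compaction (CQL 3 syntax).
-- 'users' and its columns are illustrative names.
CREATE TABLE users (
  id   uuid PRIMARY KEY,
  name text
) WITH compaction = { 'class' : 'LeveledCompactionStrategy' };

-- Switch an existing table; old sstables are rewritten in the background.
ALTER TABLE users
  WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                      'sstable_size_in_mb' : 5 };
```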
Because leveled compaction must maintain these guarantees, it performs roughly twice the I/O of size-tiered compaction. For write-dominated workloads this extra I/O brings little benefit, since few old row versions are involved.
Settings: leveled compaction ignores the concurrent_compactors setting. Concurrent compaction is designed to avoid tiered compaction's problem of a backlog of small compaction sets becoming blocked temporarily while the compaction system is busy with a large set; leveled compaction does not have this problem, since all compaction sets are roughly the same size. Leveled compaction does honor the multithreaded_compaction setting, which allows using one thread per sstable to speed up compaction. However, most compaction tuning will still involve using compaction_throughput_mb_per_sec (default: 16) to throttle compaction back.
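Both settings named above live in cassandra.yaml. A hedged excerpt, with the throttle at the default the text describes (the multithreaded_compaction value shown is an illustrative choice, not a recommendation):

```yaml
# cassandra.yaml excerpt -- the compaction settings discussed above.
# 16 MB/s is the default throttle; raise it if compaction falls behind.
compaction_throughput_mb_per_sec: 16
# Set true to use one thread per sstable during compaction.
multithreaded_compaction: false
```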
So when should leveled compaction be used? In short, when reads dominate or rows are updated frequently; for write-heavy workloads, size-tiered compaction is usually sufficient.
Data Management
To manage and access data, you must understand how Cassandra reads and writes data, its hinted handoff feature, and how its notion of consistency differs from ACID. In Cassandra, consistency refers to how updates to a row are propagated and synchronized across all of its replicas.
To be continued...