When creating an InfluxDB database, we are presented with several options; this article explains what each of them means.
InfluxDB can use different storage engines for its internal data. The current version, 0.8.7, supports LevelDB, RocksDB, HyperLevelDB, and LMDB.
These are all key-value (KV) databases:
LevelDB is a very efficient KV database implemented by Google; the current version, 1.2, can handle data at the billion-record level.
LevelDB runs as a single process and performs very well: on a 4-core Q6600 machine it writes more than 400,000 entries per second, while random reads exceed 100,000 per second.
The random-read figure assumes the data is fully cached in memory; on cache misses the speed drops sharply.
LevelDB is only a C/C++ library: it includes no network layer, so it cannot serve clients directly the way a general-purpose storage server (such as MySQL) can. LevelDB's own documentation states that users should wrap it in their own network server.
RocksDB is an embedded key-value storage system from Facebook. It can back a database used in client/server mode, but its main purpose is embedded use. RocksDB is built on top of LevelDB.
HyperLevelDB is a storage engine developed by the HyperDex team; it improves Google's LevelDB to meet HyperDex's needs.
HyperLevelDB mainly improves on LevelDB in two areas:
1. Improved parallelism: finer-grained internal locking gives higher throughput with multiple writer threads
2. Improved compaction (background merging of data files)
LMDB is a fast, small key-value store developed by Symas for the OpenLDAP project. It uses memory-mapped files, so read performance is on par with an in-memory database; database size is limited by the size of the virtual address space.
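LMDB's read speed comes from serving pages straight out of a memory-mapped file. As a rough stdlib-only illustration of the idea (this is not LMDB's actual on-disk format, which is a B+tree inside the map), the sketch below memory-maps a small fixed-width file and reads a record as plain memory access:

```python
import mmap
import os
import tempfile

# Build a toy fixed-width "database" file: each record is
# an 8-byte key followed by an 8-byte value.
path = os.path.join(tempfile.mkdtemp(), "toy.db")
with open(path, "wb") as f:
    f.write(b"user0001" + b"value_01")
    f.write(b"user0002" + b"value_02")

# Map the file into the address space; a lookup is then just slicing
# memory, which is why memory-mapped reads behave like in-memory reads.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        record = m[16:32]                # second record, offset 16
        key, value = record[:8], record[8:]

print(key.decode(), value.decode())  # user0002 value_02
```

As the note above says, the addressable database size is bounded by the virtual address space, since the whole file must fit in the map.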
InfluxDB officially benchmarked these engines and found that RocksDB performed well, so RocksDB is InfluxDB's default storage engine.
InfluxDB storage supports multiple shards, each backed by a storage engine; a single database can have multiple shards.
Each shard space has the following properties, corresponding to the contents of the diagram above:
{
  "name": "high_precision",
  "database": "pauls_db",
  "retentionPolicy": "7d",
  "shardDuration": "1d",
  "regex": "/^[a-z].*/",
  "replicationFactor": 1,
  "split": 1
}
Among these parameters, "database": "pauls_db" shows that each shard space belongs to exactly one database, while one database may have more than one shard space.
"retentionPolicy": "7d" is the minimum time the data is kept; it is the Retention field shown in the admin interface. Setting it to "inf" keeps data permanently.
"shardDuration": "1d" is the time span covered by each shard, i.e. the granularity at which expired data is cleaned up.
The value of shardDuration should be less than retentionPolicy, and greater than the interval we use in group by time() when querying.
With "retentionPolicy": "7d" and "shardDuration": "1d" as above, we keep between 7 and 8 days of data: one shard is dropped each day, removing the data older than 7 days.
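The 7-to-8-day window can be made concrete with a little arithmetic. A minimal sketch, assuming whole-day shards (the helper function is hypothetical, not part of InfluxDB):

```python
def live_shards(retention_days: int, shard_days: int) -> int:
    """Number of shards alive at once: a shard is only dropped when
    *all* of its data has aged past the retention period, so up to one
    extra shard's worth of data survives at any moment."""
    full = -(-retention_days // shard_days)  # ceil(retention / duration)
    return full + 1                          # plus the shard being written

# "retentionPolicy": "7d", "shardDuration": "1d" -> 8 daily shards alive,
# i.e. between 7 and 8 days of data on disk at any given time.
print(live_shards(7, 1))  # 8
```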
"replicationFactor": 1 sets how many servers each shard is replicated to.
"split": 1 sets how many shards are created within a given time interval.
Note the implicit relationship here: replicationFactor * split == number of servers.
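This implicit relationship can be checked mechanically. A minimal sketch, where the validation function is hypothetical (not part of InfluxDB):

```python
def check_shard_space(space: dict, server_count: int) -> bool:
    """True when replicationFactor * split matches the cluster size,
    so each interval's shard copies exactly cover the servers."""
    return space["replicationFactor"] * space["split"] == server_count

# The example config: 1 copy of 1 shard per interval -> a 1-server cluster.
space = {"replicationFactor": 1, "split": 1}
print(check_shard_space(space, 1))  # True
```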
Data is assigned to a shard space based on the following algorithm:
- Look up the shard spaces for the InfluxDB database
- Loop through the spaces and use the first one whose regex matches the series name
- Look up the shards for the given time interval
- If no shards exist, create N shards for the interval based on split
- Assign the data to a shard in the interval using hash(series_name) % N
The best practice for shard spaces is to write high-precision, high-volume data into its own shard space per time period, then combine the results at query time.
Resources:
- InfluxDB Storage Engines
- InfluxDB's storage engine: http://influxdb.com/docs/v0.8/advanced_topics/sharding_and_storage.html