ttserver deployment methods
ttserver (the server component of Tokyo Tyrant, the network interface to Tokyo Cabinet) can be thought of as a key-value database server. When deploying ttserver, you can choose among several deployment strategies depending on traffic and data volume.
For detailed startup parameters, see: http://blog.csdn.net/xifeijian/article/details/37744131
1. Single machine: small data volume and low traffic
ttserver -host 192.168.1.110 -port 1978 -thnum 128 -dmn -ulim 1024m -ulog /home/openpf/tmp/test_data/ulog_01 -log /home/openpf/tmp/test_data/log/data_01.log -pid /home/openpf/tmp/test_data/log/data_01.pid -sid 1 "/home/openpf/tmp/test_data/data_01.tch#bnum=10000000#rcnum=100000#xmsiz=256m"
To improve query performance, set bnum (the number of hash buckets) to a larger value and raise rcnum to cache more records. In addition, adding the -uas parameter writes the update log asynchronously, which improves write performance; however, a crash may then lose log entries, reducing data safety.
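For a quick sanity check after startup, here is a minimal client sketch using Tokyo Tyrant's C client API (tcrdb.h); the host and port reuse the command above, and the key and value are arbitrary. It typically builds with: gcc demo.c -ltokyotyrant -ltokyocabinet -o demo

#include <tcrdb.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
  TCRDB *rdb = tcrdbnew();                      /* create the remote database object */
  if (!tcrdbopen(rdb, "192.168.1.110", 1978)) { /* connect to the single node */
    fprintf(stderr, "open error: %s\n", tcrdberrmsg(tcrdbecode(rdb)));
    tcrdbdel(rdb);
    return 1;
  }
  tcrdbput2(rdb, "hello", "world");             /* store one record */
  char *value = tcrdbget2(rdb, "hello");        /* read it back (malloc'd, may be NULL) */
  if (value) {
    printf("hello => %s\n", value);
    free(value);
  }
  tcrdbclose(rdb);
  tcrdbdel(rdb);
  return 0;
}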
2. One master, one slave: small data volume and low traffic, but with data-safety requirements
Master server (same command as above):
ttserver -host 192.168.1.110 -port 1978 -thnum 128 -dmn -ulim 1024m -ulog /home/openpf/tmp/test_data/ulog_01 -log /home/openpf/tmp/test_data/log/data_01.log -pid /home/openpf/tmp/test_data/log/data_01.pid -sid 1 "/home/openpf/tmp/test_data/data_01.tch#bnum=10000000#rcnum=100000#xmsiz=256m"
Backup server: started on another machine, with caching disabled:
ttserver -host 192.168.1.111 -port 1979 -mhost 192.168.1.110 -mport 1978 -rcc -rts /home/openpf/tmp/test_data/data_01.rts -thnum 5 -dmn -ulim 1024m -ulog /home/openpf/tmp/test_data/ulog_02 -log /home/openpf/tmp/test_data/log/data_02.log -pid /home/openpf/tmp/test_data/log/data_02.pid -sid 2 "/home/openpf/tmp/test_data/data_02.tch#bnum=10000000#rcnum=0#xmsiz=0m"
The backup server can be an older machine with lower performance; this setup exists only to ensure data safety.
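To confirm that replication is flowing, you can write a key to the master and read it back from the slave. A rough sketch, assuming both servers above are running (the one-second sleep merely gives the slave time to replay the update log):

#include <tcrdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
  TCRDB *master = tcrdbnew();
  TCRDB *slave = tcrdbnew();
  if (!tcrdbopen(master, "192.168.1.110", 1978) ||
      !tcrdbopen(slave, "192.168.1.111", 1979)) {
    fprintf(stderr, "open error\n");
    return 1;
  }
  tcrdbput2(master, "repl-check", "ok");   /* writes go to the master only */
  sleep(1);                                /* wait for the slave to replay the ulog */
  char *v = tcrdbget2(slave, "repl-check");
  printf("slave sees: %s\n", v ? v : "(not yet replicated)");
  free(v);                                 /* free(NULL) is harmless */
  tcrdbclose(master); tcrdbdel(master);
  tcrdbclose(slave);  tcrdbdel(slave);
  return 0;
}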
3. Mutual master/standby: as data volume and traffic grow, data safety is required and a single point of failure (SPOF) must be avoided
First Server:
ttserver -host 192.168.1.110 -port 1978 -mhost 192.168.1.111 -mport 1979 -rcc -rts /home/openpf/tmp/test_data/data_01.rts -thnum 128 -dmn -ulim 1024m -ulog /home/openpf/tmp/test_data/ulog_01 -log /home/openpf/tmp/test_data/log/data_01.log -pid /home/openpf/tmp/test_data/log/data_01.pid -sid 1 "/home/openpf/tmp/test_data/data_01.tch#bnum=10000000#rcnum=100000#xmsiz=256m"
Second Server:
ttserver -host 192.168.1.111 -port 1979 -mhost 192.168.1.110 -mport 1978 -rcc -rts /home/openpf/tmp/test_data/data_02.rts -thnum 128 -dmn -ulim 1024m -ulog /home/openpf/tmp/test_data/ulog_02 -log /home/openpf/tmp/test_data/log/data_02.log -pid /home/openpf/tmp/test_data/log/data_02.pid -sid 2 "/home/openpf/tmp/test_data/data_02.tch#bnum=10000000#rcnum=100000#xmsiz=256m"
Applications can read and write against either server. Once an application finds that one server is unreachable, it can immediately switch to the other.
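Such a failover client might look like the sketch below: it tries the two nodes in order and uses the first one that accepts a connection (addresses taken from the commands above).

#include <tcrdb.h>
#include <stdio.h>

/* The two mutual master/standby nodes configured above. */
static const char *HOSTS[] = {"192.168.1.110", "192.168.1.111"};
static const int   PORTS[] = {1978, 1979};

/* Return a connection to the first reachable node, or NULL if both are down. */
static TCRDB *connect_any(void) {
  for (int i = 0; i < 2; i++) {
    TCRDB *rdb = tcrdbnew();
    if (tcrdbopen(rdb, HOSTS[i], PORTS[i])) return rdb;
    tcrdbdel(rdb);  /* unreachable: discard and try the next node */
  }
  return NULL;
}

int main(void) {
  TCRDB *rdb = connect_any();
  if (!rdb) {
    fprintf(stderr, "both nodes are down\n");
    return 1;
  }
  tcrdbput2(rdb, "k", "v");  /* either node accepts writes; replication syncs them */
  tcrdbclose(rdb);
  tcrdbdel(rdb);
  return 0;
}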
4. Read/write splitting: both the write volume and the read volume surge
Tokyo Cabinet supports six database engines; among them, the on-memory hash database and the on-memory B+ tree database keep all data in memory, with no persistence. (ttserver selects the engine by the database name: "*" means on-memory hash, "+" means on-memory B+ tree, and the suffixes .tch, .tcb, .tcf, and .tct select the on-disk hash, B+ tree, fixed-length, and table databases.)
You can therefore run a ttserver with the on-memory hash database engine as the write server, replicate its data to several ttservers backed by the on-disk hash database, and use those as read servers. In effect, this trades consistency for performance: reads may briefly lag behind writes.
1) On-memory hash database configuration (the write node): the record cap (capnum) and memory cap (capsiz) are set very small:
ttserver -host 192.168.0.99 -port 20000 -thnum 128 -dmn -ulim 1024m -ulog /data/home/GAME/temp/test_data/ulog_01 -log /data/home/GAME/temp/test_data/log/data_01.log -pid /data/home/GAME/temp/test_data/log/data_01.pid -sid 0 "*#bnum=10000000#capnum=100#capsiz=10m"
2) Hash database ttserver 1 configuration (a read node): replicates data from the on-memory hash database:
ttserver -host 192.168.1.110 -port 1978 -mhost 192.168.0.99 -mport 20000 -rcc -rts /home/openpf/tmp/test_data/data_01.rts -thnum 128 -dmn -ulim 1024m -ulog /home/openpf/tmp/test_data/ulog_01 -log /home/openpf/tmp/test_data/log/data_01.log -pid /home/openpf/tmp/test_data/log/data_01.pid -sid 1 "/home/openpf/tmp/test_data/log/data_01.tch#bnum=10000000#rcnum=100000#xmsiz=256m"
3) Hash database ttserver 2 configuration (a read node): also replicates from the on-memory hash database:
ttserver -host 192.168.1.111 -port 1979 -mhost 192.168.0.99 -mport 20000 -rcc -rts /home/openpf/tmp/test_data/data_02.rts -thnum 128 -dmn -ulim 1024m -ulog /home/openpf/tmp/test_data/ulog_02 -log /home/openpf/tmp/test_data/log/data_02.log -pid /home/openpf/tmp/test_data/log/data_02.pid -sid 2 "/home/openpf/tmp/test_data/log/data_02.tch#bnum=10000000#rcnum=100000#xmsiz=256m"
Once read/write splitting is configured, clients connect to the on-memory hash database for writes and to the other ttservers for reads.
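A client under this layout keeps separate connections for writing and reading. A sketch, with a naive pick between the two read replicas configured above (as noted, a read may briefly lag behind the write it follows):

#include <tcrdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
  /* Write connection: the on-memory hash database node. */
  TCRDB *writer = tcrdbnew();
  if (!tcrdbopen(writer, "192.168.0.99", 20000)) {
    fprintf(stderr, "write node down\n");
    return 1;
  }

  /* Read connection: pick one of the replicated hash database nodes. */
  const char *rhosts[] = {"192.168.1.110", "192.168.1.111"};
  const int   rports[] = {1978, 1979};
  int pick = (int)(time(NULL) % 2);  /* naive load balancing across replicas */
  TCRDB *reader = tcrdbnew();
  if (!tcrdbopen(reader, rhosts[pick], rports[pick])) {
    fprintf(stderr, "read node down\n");
    return 1;
  }

  tcrdbput2(writer, "user:42", "alice");   /* all writes go to the memory node */
  char *v = tcrdbget2(reader, "user:42");  /* may lag until replication catches up */
  printf("read: %s\n", v ? v : "(null)");
  free(v);

  tcrdbclose(writer); tcrdbdel(writer);
  tcrdbclose(reader); tcrdbdel(reader);
  return 0;
}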
5. Client-side sharding: massive data volume
When the data volume grows so large that a handful of servers can no longer hold it, sharding becomes the unavoidable last resort: split the data by business domain or by some numeric rule and store it across multiple ttserver groups.
When writing or reading, the client program routes each request to the corresponding group according to the sharding rule.
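The routing rule can be as simple as a stable hash of the key modulo the number of groups. A sketch with a hypothetical two-group shard map (the addresses merely reuse earlier examples, and FNV-1a stands in for whatever hash or business rule you actually choose):

#include <tcrdb.h>
#include <stdio.h>

/* Hypothetical shard map: each entry is the entry point of one ttserver group. */
static const char *SHARD_HOSTS[] = {"192.168.1.110", "192.168.1.111"};
static const int   SHARD_PORTS[] = {1978, 1979};
enum { NSHARDS = 2 };

/* FNV-1a: any stable hash works, as long as every client uses the same one. */
static unsigned int key_hash(const char *key) {
  unsigned int h = 2166136261u;
  for (const unsigned char *p = (const unsigned char *)key; *p; p++) {
    h ^= *p;
    h *= 16777619u;
  }
  return h;
}

int main(void) {
  const char *key = "order:10086";
  unsigned int shard = key_hash(key) % NSHARDS;  /* route by key, same for reads and writes */
  TCRDB *rdb = tcrdbnew();
  if (!tcrdbopen(rdb, SHARD_HOSTS[shard], SHARD_PORTS[shard])) {
    fprintf(stderr, "shard %u unreachable\n", shard);
    return 1;
  }
  tcrdbput2(rdb, key, "paid");
  tcrdbclose(rdb);
  tcrdbdel(rdb);
  return 0;
}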