ttserver: A Note on Data Safety

Source: Internet
Author: User
Today a friend told me that traversing a copied .tch file returned far fewer records than expected.
Reading the source of tchdb.c shows that traversal starts from the first record in the file and walks forward record block by record block. Such a traversal is therefore not an atomic snapshot of all records: if the server is busy inserting and deleting at the same time, the traversal cannot accurately return every record.

This friend had copied the .tch file on Linux with cp while the server was still online. When backing up a database, directly copying the .tch file cannot guarantee a complete, consistent read. The correct approach is to use the backup tool that ships with Tokyo Tyrant: while the tool runs, it holds an exclusive lock on the database, which guarantees data integrity.

A related point: when shutting down ttserver, do not use kill -9 on the ttserver process. Use kill -15 (SIGTERM) to ask ttserver to shut down cleanly. On receiving the signal, ttserver flushes all in-memory data to disk and then exits. If the process is killed instantly instead, a record block may be left in an inconsistent state, permanently corrupting the .tch file.
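A minimal sketch of the graceful-shutdown sequence described above. The real target would be the ttserver PID; since ttserver may not be installed, the example uses a stand-in process that traps SIGTERM the way ttserver does (flush, then exit):

```shell
# Graceful shutdown sketch. SIGTERM (kill -15) lets the server flush
# in-memory data to disk before exiting; SIGKILL (kill -9) gives it no
# such chance and can corrupt the .tch file.
# Stand-in for ttserver: a process that traps TERM, "flushes", and exits 0.
sh -c 'trap "echo data flushed; exit 0" TERM; while :; do sleep 1; done' &
TT_PID=$!
sleep 1                  # give the stand-in time to install its trap
kill -TERM "$TT_PID"     # same as kill -15; never kill -9 a live ttserver
wait "$TT_PID"
STATUS=$?
echo "clean exit: $STATUS"
```

With kill -9 instead, the trap would never run and the "flush" step would be skipped, which is exactly the failure mode the article warns about.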

Therefore, when using ttserver, remember that it is far more fragile than MySQL. For important data, pay close attention to data safety.
Note:
1. If the ttserver process is killed at the moment of a write, some record blocks in the .tch file may be damaged, or some records lost. For example, when a record is updated and its original block is not large enough, the old record is deleted first and the new one inserted at a new position; if the process is killed between those two steps, the record is lost.
2. If the ttserver process is killed after a write has completed, the data is not lost. Although it may not yet be fully written to disk, the data already sits in the kernel buffer; after the process dies, the buffered data is retained and the operating system writes it back to disk.
3. A sudden power failure, however, can still cause file damage, record-block damage, and data loss.
4. Running a master node plus a backup node in a dual-machine setup is the recommended practice for important data. For even higher data safety, consider solutions such as flare or lightcloud, which improve safety through clustering.
5. The ulog (update log) can be used for data recovery.
6. Do not copy the data file while the server is running; the copy is very likely to be corrupt. There are three safe methods:
· Use the ulog for data recovery, replaying it onto a running server: tcrmgr restore -port 20000 192.168.0.100 $(pwd)/old_ulog/
· Start a new ttserver and point it, as a slave, at the server to be copied; this way it replicates the data off the old server.
· Use the tcrmgr tool to copy the database to an offline file (do not cp it directly): tcrmgr copy -port 20000 192.168.0.100 $(pwd)/temp/test2.tch. Note that this command locks the entire database, so do not run it when access is heavy.
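The second method above (replicating from the old server) gives no command, so here is a dry-run sketch. The master 192.168.0.100:20000 comes from the article; the slave port, sid values, ulog directory, and file name are hypothetical, and the command is printed rather than executed since ttserver may not be installed:

```shell
# Dry-run sketch: bring up a new ttserver as a slave of the old server,
# so it pulls a full copy of the data via replication.
MASTER_HOST=192.168.0.100   # the old server, from the article
MASTER_PORT=20000
# -sid must differ from the master's server ID; -ulog enables the update
# log the slave needs; slave.tch is the new database file (hypothetical).
CMD="ttserver -port 20001 -sid 2 -ulog ./ulog -mhost $MASTER_HOST -mport $MASTER_PORT slave.tch"
echo "$CMD"   # print instead of run; execute this on the new machine
```

Once the slave has caught up, it can be promoted or its .tch file used as the backup copy.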
