One-click creation of a blackhole slave Database

Source: Internet
Author: User
Tags: percona

Driven by business

Some time ago, the split of Weibo's mutual-follow service multiplied the number of database instances several times over. Correspondingly, we needed to build a blackhole slave database for each of these instances so that our binlog analysis program could parse the binlog and import the results into redis. The whole process was tedious, and setting up the blackhole slaves in particular consumed a great deal of energy, which gave me the idea of writing a tool.

Step by step

Previously, when we set up a blackhole slave, we first dumped the table structure of the online master (or a standby), then copied the entire grants database, monitoring database, and so on from the master, and finally ran CHANGE MASTER. This works well under statement-format replication. Under row-format replication, however, the blackhole slave often got stuck. Tracing the cause, I found the problem occurred while copying the grants and monitoring databases (all MyISAM tables): no lock was taken during the copy, and the binlog pos was only recorded via SHOW MASTER STATUS on the master after the copy finished, with that pos used for CHANGE MASTER. This leaves a hidden risk, since writes can slip in between the copy and the recorded pos, and in the end the slave stalls.
Having understood the requirements and the drawbacks of the old method, it was clear that building a blackhole slave for an active master is not trivial. After some discussion with colleagues, I borrowed from the way percona xtrabackup builds a hot standby and implemented this tool. If you are interested, you can read the code directly; the article also provides a detailed description below. Feedback and criticism are welcome.

Related reading:

  • MySQL management: using XtraBackup for hot backup

  • MySQL open-source backup tool Xtrabackup: backup deployment

  • MySQL Xtrabackup backup and recovery

  • Using XtraBackup to implement MySQL master-slave replication and quick deployment (no table locks on the master)

  • Installing and using Xtrabackup from Percona to back up MySQL

For source code download, see Description

This tool requires the user to enter two parameters: the IP address of the master database and its port. Together these two parameters uniquely identify an instance.

Description

Usage example: blackhole_slave.py -f 10.66.10.10 -P 3307

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -f IP, --master-ip=IP
                        master ip address here
  -P PORT, --master-port=PORT
                        master port here
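The options above follow the optparse help layout, so the tool's argument handling plausibly looks like the following minimal sketch. The function and attribute names here are assumptions for illustration, not the tool's actual code.

```python
# Hypothetical sketch of the option parsing implied by the usage text above.
# optparse matches the era and help format of the original tool.
from optparse import OptionParser

def parse_args(argv):
    parser = OptionParser(
        usage="usage: blackhole_slave.py -f MASTER_IP -P MASTER_PORT",
        version="%prog 1.0")
    # -f/--master-ip and -P/--master-port uniquely identify one instance
    parser.add_option("-f", "--master-ip", dest="ip",
                      help="master ip address here")
    parser.add_option("-P", "--master-port", dest="port", type="int",
                      help="master port here")
    options, _ = parser.parse_args(argv)
    return options

opts = parse_args(["-f", "10.66.10.10", "-P", "3307"])
print(opts.ip, opts.port)
```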

Detailed ideas
  1. Install MySQL following the standard installation process (in practice this means copying binaries from the company's internal repository, adduser, chown, and so on, all of which is automated). Our online MySQL instances are installed under /data1/mysql${port} by default; for example, port 3306 maps to the directory mysql3306. This layout exists mainly so that multiple instances can run on one machine and make full use of a multi-core CPU.
  2. Modify the configuration file. We recommend adding skip-innodb to the blackhole instance's configuration and changing the default engine to myisam. The benefit is that when innodb tables are modified on the master, the statements are synchronized to the slave; with innodb disabled there, the statements are automatically converted to blackhole, which is exactly what we need. Note that the binlog formats must match: if the master logs in statement format, the blackhole slave can only use statement format; if the master uses row format, the slave can only use row. The blackhole slave's configuration file is included with the code; see it for details.
  3. Start a MySQL instance. We call our own startup script.
  4. Fetch the online database and table structure. For safety, our online root account has no permission to drop databases or tables; dropping a database is deliberately awkward here, so removing the library directory with rm is used instead. In this step all required database and table structures are fetched from the online instance, and the engine of the relevant tables is rewritten to blackhole as needed. Tables in the monitoring and grants databases, of course, must not be changed to blackhole.
  5. Fetch the data of the grants and monitoring databases. This step first determines all the myisam tables the blackhole slave needs, then issues LOCK TABLES t1 READ, t2 READ, t3 READ, ... to lock the related tables on the master. The data of these locked tables is exported and imported into the blackhole slave; after the import, the master's binlog pos is recorded and the locks are released. mysqldump --master-data cannot be used for the export, because it locks all tables and would affect online service; FLUSH TABLES WITH READ LOCK has the same problem.
  6. Run CHANGE MASTER TO on the blackhole slave using the recorded binlog pos of the master, then check the replication status.
    At this point the blackhole slave has been built. Finally, let's summarize the uses of a blackhole slave; new discoveries are welcome.
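Steps 5 and 6 above boil down to generating a handful of SQL statements around the locked copy. The sketch below builds those statements in Python; the helper names, table names, and sample pos values are illustrative assumptions, and a real implementation would execute them through a MySQL client connection.

```python
# Hypothetical sketch of the SQL generated in steps 5-6. Helper and table
# names are assumptions; execution against MySQL is omitted.

def lock_tables_sql(tables):
    # Step 5: lock only the MyISAM tables being copied (grants/monitoring),
    # instead of FLUSH TABLES WITH READ LOCK, which would block all writes.
    return "LOCK TABLES " + ", ".join("%s READ" % t for t in tables) + ";"

def change_master_sql(ip, port, log_file, log_pos):
    # Step 6: point the blackhole slave at the pos captured via
    # SHOW MASTER STATUS while the read locks were still held.
    return ("CHANGE MASTER TO MASTER_HOST='%s', MASTER_PORT=%d, "
            "MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d;"
            % (ip, port, log_file, log_pos))

print(lock_tables_sql(["grants.user_priv", "monitor.heartbeat"]))
print(change_master_sql("10.66.10.10", 3307, "mysql-bin.000042", 107))
```

The key design point is the ordering: lock the few MyISAM tables, export them, record the pos, and only then UNLOCK TABLES, so the recorded pos is guaranteed to be consistent with the exported data.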
Usage of BlackHole
  • Binlog backup

Online MySQL binlogs are generally retained for 3-5 days, but for important services they may need to be kept for a month or even half a year, and online servers do not have that much space; even binlogs kept for 10 days eventually get purged. This is where blackhole is useful: a blackhole slave + regular xtrabackup hot backups + @plinux's binlog flashback tool can restore your database to any point in time.

  • Serves as the binlog API to provide data sources for other programs

We have many services online hanging behind blackhole slaves. Our self-developed binlog analysis program parses the binlog promptly and inserts the corresponding data into redis. Here the blackhole slave acts as a natural binlog API.

  • Saves bandwidth in cross-IDC Deployment Scenarios

If one master is dragging 20 slaves, the master may be overwhelmed. In that case, consider inserting a blackhole slave under the master as a relay. There are reliability concerns, but if the master is in Beijing and the 20 slaves are in Guangzhou, this scheme makes sense: add a blackhole slave in Guangzhou and hang the 20 slaves under it, saving a great deal of Beijing-to-Guangzhou bandwidth. The money saved should be more than enough to strengthen the blackhole node's HA.

  • Test the effect of enabling binlog on server IO

    Use blackhole to test the impact of enabling binlog on server IO. According to official MySQL figures, enabling binlog reduces IO performance by 3-5%. Isn't that figure a little "conservative"?

