Back up MySQL databases using Python

Source: Internet
Author: User
Work needs: back up the company's MySQL database. I happened to be just starting to learn Python, so after watching a simple Python tutorial video I wrote this backup script. Personally, I mainly want to record how to use Python classes, functions, built-in functions, private variables, global variables, and so on. Passing Pythoners are welcome to offer advice.

Known issues:

1. This script requires that every line in the MySQL configuration file follow the key = value format, with no extra comments; otherwise the ConfigParser module raises errors while parsing the file. I have not studied whether ConfigParser offers fault-tolerant options or how to write a fault-tolerant wrapper; an ini-style configuration file meets ConfigParser's requirements. The my.cnf I used is attached below.
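To illustrate the issue above, here is a small sketch of ConfigParser's behaviour on my.cnf-style input. The option names are examples of my own, and the fault-tolerant path relies on the `allow_no_value` flag (available since Python 2.7), which accepts bare keys without an `= value` part:

```python
try:
    import configparser                  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2

import io

GOOD = u"[mysqld]\nlog-bin = /mysql/var/log/binlog/bin-log\n"
BAD  = u"[mysqld]\nskip-external-locking\n"   # bare key, no "= value"

def read_option(text, item, tolerant=False):
    # allow_no_value=True lets the parser accept bare keys such as the one in BAD
    parser = configparser.ConfigParser(allow_no_value=tolerant)
    if hasattr(parser, 'read_string'):   # Python 3
        parser.read_string(text)
    else:                                # Python 2
        parser.readfp(io.StringIO(text))
    return parser.get('mysqld', item)

print(read_option(GOOD, 'log-bin'))  # /mysql/var/log/binlog/bin-log
# a bare key parses only in tolerant mode and comes back as None:
print(read_option(BAD, 'skip-external-locking', tolerant=True))  # None
```

Without `tolerant=True`, parsing `BAD` raises `ParsingError`, which is exactly the failure mode described in this known issue.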

2. A large number of private member variables are used, because I am not yet clear about the differences between Python variables, class methods, and so on. I only know that a class's private member variables are invisible when the class is imported or inherited in another script.
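The "invisible when inherited" behaviour mentioned above comes from Python's name mangling of double-underscore attributes. A minimal sketch (the class names here are hypothetical, not from the script):

```python
class Base(object):
    __SECRET = 'hidden'       # stored on the class as _Base__SECRET

    def reveal(self):
        return self.__SECRET  # inside Base, mangled to self._Base__SECRET


class Child(Base):
    def peek(self):
        try:
            return self.__SECRET  # mangled to self._Child__SECRET -> missing
        except AttributeError:
            return 'not visible from the subclass'


print(Base().reveal())      # hidden
print(Child().peek())       # not visible from the subclass
print(Base._Base__SECRET)   # hidden -- the mangled name is still reachable
```

So the attributes are not truly private; they are merely renamed per class, which is why the script's `self.__MYSQL_BASE_DIR` style works inside `DatabaseArgs` but not from importing or subclassing code.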

3. There are many file operations and value-passing operations. Currently values must be passed in the correct format; there is no extra error tolerance.

4. A large number of strings are concatenated. This is the first O&M script I have written, and although I call system commands with many parameters, I did not use the subprocess module. I don't know how others write O&M scripts, so I simply spliced the strings together.
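For reference, the concatenated mysqldump call could instead be expressed with subprocess and an argument list, which avoids shell quoting entirely. This is a hedged sketch, not the script's code: the host, user, password and database values are placeholders, and the command is only built here, not executed, since mysqldump may not be installed:

```python
import subprocess  # used only in the commented execution example below

def build_dump_argv(dump_cmd, host, port, user, passwd, engine_args, databases):
    # each argument is a separate list element, so no shell quoting is needed
    argv = [dump_cmd, '-h' + host, '-P' + str(port), '-u' + user,
            '--password=' + passwd]
    argv += engine_args.split()
    argv += databases
    return argv

argv = build_dump_argv(
    '/usr/local/mysql/bin/mysqldump', 'localhost', 3306, 'root', 'xxxxxxx',
    '-v -E -e -R --triggers -F -n --opt --master-data=2 --hex-blob -B',
    ['zabbix'])
print(argv[0])  # /usr/local/mysql/bin/mysqldump

# executing it would look like this, redirecting stdout without a shell:
# with open('/backup/Monitor/2013-03-14-Monitor.sql', 'w') as out:
#     subprocess.check_call(argv, stdout=out)
```

Passing a list to `subprocess` also means a password containing shell metacharacters cannot break the command line, which the string-splicing approach cannot guarantee.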

5. Other unknown bugs, undiscovered logical errors, and so on.

Environment:

- Server: Dell PowerEdge T110

- OS: CentOS 6.3 x86_64

- Python version: 2.7.3

- MySQL version: 5.5.28 Linux x86_64

MysqlBackupScript.py

#!/usr/bin/env python
# coding: utf8
# script  MysqlBackupScript
# by      Becareful
# version v1.0
"""This script provides auto backup for mysql (version = 5.5.x) databases."""

import os
import sys
import datetime      # used to build the dates in backup file names
import linecache     # used to read a specific line of a file
import ConfigParser  # used to parse the mysql configuration file


class DatabaseArgs(object):

    __MYSQL_BASE_DIR = r'/usr/local/mysql'            # mysql installation directory
    __MYSQL_BIN_DIR = __MYSQL_BASE_DIR + '/bin'       # mysql binary directory
    __MYSQL_CONFIG_FILE = r'/usr/local/mysql/my.cnf'  # mysql configuration file
    __ONEDAY = datetime.timedelta(days=1)             # one day, used to compute the previous/next day
    __TODAY = datetime.date.today()                   # today's date in YYYY-MM-DD format
    __YESTERDAY = __TODAY - __ONEDAY                  # yesterday's date
    __TOMORROW = __TODAY + __ONEDAY                   # tomorrow's date
    __WEEKDAY = __TODAY.strftime('%W')                # week of the year
    # dictionary holding the mysqldump arguments used to back up each engine
    __MYSQL_DUMP_ARGS = {
        'myisam': ' -v -E -e -R --triggers -F -n --opt --master-data=2 --hex-blob -B ',
        'innodb': ' -v -E -e -R --triggers -F --single-transaction -n --opt --master-data=2 --hex-blob -B '
    }
    __DUMP_COMMAND = __MYSQL_BIN_DIR + '/mysqldump'        # mysqldump path, dumps the mysql data
    __FLUSH_LOG_COMMAND = __MYSQL_BIN_DIR + '/mysqladmin'  # mysqladmin path; flush-logs rotates the daily incremental binlog
    __BACKUP_DIR = r'/backup/'  # directory that stores the backup files
    __PROJECTNAME = 'example'   # project name of the database to back up; files are named projectname-YYYY-MM-DD.sql
    __DATABASE_LIST = []        # databases to back up; the list may hold several names
    __HOST = 'localhost'
    __PORT = 3306
    __USERNAME = 'root'
    __PASSWORD = ''
    __LOGINARGS = ''            # login arguments; a password is required even when logging on to localhost
    __LOGFILE = __BACKUP_DIR + '/backup.logs'

    def __init__(self, baseDir=__MYSQL_BASE_DIR, backDir=__BACKUP_DIR,
                 engine='myisam', projectName=__PROJECTNAME,
                 dbList=__DATABASE_LIST, host=__HOST, port=__PORT,
                 user=__USERNAME, passwd=__PASSWORD):
        """Parameters passed in when the object is instantiated; the private
        class members above serve as defaults when a value is not supplied.
        :param baseDir:
        :param backDir:
        :param engine:
        :param projectName:
        :param dbList:
        :param host:
        :param port:
        :param user:
        :param passwd:
        """
        self.__MYSQL_BASE_DIR = baseDir
        self.__BACKUP_DIR = backDir
        self.__PROJECTNAME = projectName
        self.__DATABASE_LIST = dbList
        self.__HOST = host
        self.__PORT = port
        self.__USERNAME = user
        self.__PASSWORD = passwd
        self.__ENGINE = self.__MYSQL_DUMP_ARGS[engine]
        # build login arguments of the form "-hlocalhost -uroot --password='xxxx'"
        self.__LOGINARGS = (" -h" + self.__HOST + " -P" + str(self.__PORT) +
                            " -u" + self.__USERNAME +
                            " --password='" + self.__PASSWORD + "'")
        self.checkDatabaseArgs()  # run the sanity checks

    def __getconfig(self, cnf=__MYSQL_CONFIG_FILE, item=None):
        # small helper that parses the mysql configuration file: pass a my.cnf
        # key and the corresponding value is returned
        _mycnf = ConfigParser.ConfigParser()
        _mycnf.read(cnf)
        try:
            return _mycnf.get("mysqld", item)
        except BaseException, e:
            sys.stderr.write(str(e))
            sys.exit(1)

    def __getBinlogPath(self):
        # obtain the absolute path of the binlog to back up incrementally each
        # day: take the second-to-last line of mysql's binlog index file
        _BINLOG_INDEX = self.__getconfig(item='log-bin') + '.index'
        if not os.path.isfile(_BINLOG_INDEX):
            sys.stderr.write('Binlog index file: [' + _BINLOG_INDEX + '] not found!\n')
            sys.exit(1)
        else:
            try:
                _BINLOG_PATH = linecache.getline(
                    _BINLOG_INDEX, len(open(_BINLOG_INDEX, 'r').readlines()) - 1)
                linecache.clearcache()
            except BaseException, e:
                sys.stderr.write(str(e))
                sys.exit(1)
            return _BINLOG_PATH.strip()

    def flushDatabaseBinlog(self):
        # run mysqladmin flush-logs to rotate the binlog
        return os.popen(self.__FLUSH_LOG_COMMAND + self.__LOGINARGS + ' flush-logs')

    def dumpDatabaseSQL(self):
        # use mysqldump to take a full backup of the specified databases
        if not os.path.isfile(self.__BACKUP_DIR + '/' + self.__PROJECTNAME + '/' +
                              str(self.__YESTERDAY) + '-' + self.__PROJECTNAME + '.sql'):
            return os.popen(self.__DUMP_COMMAND + self.__LOGINARGS + self.__ENGINE +
                            ' '.join(self.__DATABASE_LIST) + ' > ' +
                            self.__BACKUP_DIR + '/' + self.__PROJECTNAME + '/' +
                            str(self.__YESTERDAY) + '-' + self.__PROJECTNAME + '.sql')
        else:
            sys.stderr.write('Backup file [' + str(self.__YESTERDAY) + '-' +
                             self.__PROJECTNAME + '.sql] already exists.\n')

    def dumpDatabaseBinlog(self):
        # copy the binlog that needs backing up into the backup directory via copy2
        if not os.path.isfile(self.__BACKUP_DIR + '/' + self.__PROJECTNAME + '/' +
                              str(self.__YESTERDAY) + '-' +
                              os.path.split(self.__getBinlogPath())[1]):
            from shutil import copy2
            try:
                copy2(self.__getBinlogPath(),
                      self.__BACKUP_DIR + '/' + self.__PROJECTNAME + '/' +
                      str(self.__YESTERDAY) + '-' +
                      os.path.split(self.__getBinlogPath())[1])
            except BaseException, e:
                sys.stderr.write(str(e))
        else:
            sys.stderr.write('Binlog file [' + str(self.__YESTERDAY) + '-' +
                             os.path.split(self.__getBinlogPath())[1] +
                             '] already exists\n')

    def checkDatabaseArgs(self):
        # verify the preconditions; every failed check adds 1 to _rv, and the
        # script exits if _rv is non-zero at the end
        _rv = 0
        if not os.path.isdir(self.__MYSQL_BASE_DIR):
            # check that the mysql installation directory exists
            sys.stderr.write('Mysql base dir: [' + self.__MYSQL_BASE_DIR + '] not found\n')
            _rv += 1
        if not os.path.isdir(self.__BACKUP_DIR):
            # check that the backup directory exists; create it automatically if not
            sys.stderr.write('Backup dir: [' + self.__BACKUP_DIR + '/' +
                             self.__PROJECTNAME + '] not found, auto created\n')
            os.makedirs(self.__BACKUP_DIR + '/' + self.__PROJECTNAME)
        if not os.path.isfile(self.__MYSQL_CONFIG_FILE):
            # check that the mysql configuration file exists
            sys.stderr.write('Mysql config file: [' + self.__MYSQL_CONFIG_FILE + '] not found\n')
            _rv += 1
        if not os.path.isfile(self.__DUMP_COMMAND):
            # check that the mysqldump command used for backups exists
            sys.stderr.write('Mysql dump command: [' + self.__DUMP_COMMAND + '] not found\n')
            _rv += 1
        if not os.path.isfile(self.__FLUSH_LOG_COMMAND):
            # check that the mysqladmin command used to flush the binlog exists
            sys.stderr.write('Mysql flush log command: [' + self.__FLUSH_LOG_COMMAND + '] not found\n')
            _rv += 1
        if not self.__DATABASE_LIST:
            # check that the list of databases to back up is not empty
            sys.stderr.write('Database list is None\n')
            _rv += 1
        if _rv:
            sys.exit(1)


def crontab():
    # pass the parameters in a dictionary, instantiate the object, and call
    # the backup methods
    zabbix = {'baseDir': '/usr/local/mysql/', 'backDir': '/backup/',
              'projectName': 'Monitor', 'dbList': ['zabbix'],
              'host': 'localhost', 'port': 3306,
              'user': 'root', 'passwd': 'xxxxxxx'}
    monitor = DatabaseArgs(**zabbix)
    monitor.dumpDatabaseSQL()
    monitor.dumpDatabaseBinlog()
    monitor.flushDatabaseBinlog()


if __name__ == '__main__':
    crontab()
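The incremental step in `__getBinlogPath` above reads the second-to-last line of mysql's bin-log.index: after flush-logs, the last line is the freshly opened binlog and the line before it is the one that was just closed and is safe to copy. A minimal sketch of that linecache trick, using a temporary index file with made-up binlog names:

```python
import linecache
import os
import tempfile

# build a throwaway stand-in for mysql's bin-log.index file
tmp = tempfile.NamedTemporaryFile(mode='w', suffix='.index', delete=False)
tmp.write('/mysql/var/log/binlog/bin-log.000006\n'
          '/mysql/var/log/binlog/bin-log.000007\n'
          '/mysql/var/log/binlog/bin-log.000008\n')
tmp.close()

with open(tmp.name) as f:
    count = len(f.readlines())                   # 3 lines in the index

closed = linecache.getline(tmp.name, count - 1)  # second-to-last line
linecache.clearcache()                           # drop linecache's file cache
print(closed.strip())  # /mysql/var/log/binlog/bin-log.000007

os.unlink(tmp.name)    # clean up the temporary file
```

Note that `linecache.getline` is 1-indexed and returns an empty string rather than raising when the line number is out of range, so the `count - 1` arithmetic is what keeps this pointed at the right binlog.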

my.cnf

[client]
port                            = 3306
socket                          = /mysql/var/db.socket

[mysqld]
socket                          = /mysql/var/db.socket
datadir                         = /mysql/db/
skip-external-locking           = 1
skip-innodb                     = 0
key_buffer_size                 = 256M
max_allowed_packet              = 10M
table_open_cache                = 2048
sort_buffer_size                = 4M
read_buffer_size                = 4M
read_rnd_buffer_size            = 8M
myisam_sort_buffer_size         = 64M
myisam_max_sort_file_size       = 1G
myisam_repair_threads           = 1
myisam_recover                  = DEFAULT
thread_cache_size               = 32
query_cache_size                = 32M
query_cache_min_res_unit        = 2k
bulk_insert_buffer_size         = 64M
tmp_table_size                  = 128M
thread_stack                    = 192K
skip-name-resolve               = 1
max_connections                 = 65500
default-storage-engine          = myisam
federated                       = 0
server-id                       = 1
slave-skip-errors               = all
#log                            = /var/log/sql_query.log
slow-query-log                  = 1
slow-query-log-file             = /mysql/log/sql_query_slow.log
long-query-time                 = 5
log-queries-not-using-indexes   = 1
log-slow-admin-statements       = 1
log-bin                         = /mysql/var/log/binlog/bin-log
log-error                       = /mysql/var/log/mysql.err
master-info-file                = /mysql/var/log/master.info
relay-log                       = /mysql/var/log/relay-bin/relay-bin
relay-log-index                 = /mysql/var/log/relay-bin/relay-bin.index
relay-log-info-file             = /mysql/var/log/relay-bin/relay-bin.info
binlog_cache_size               = 8M
binlog_format                   = MIXED
max_binlog_cache_size           = 20M
max_binlog_size                 = 1G
binlog-ignore-db                = mysql
binlog-ignore-db                = performance_schema
binlog-ignore-db                = information_schema
replicate-ignore-db             = mysql
replicate-ignore-db             = performance_schema
replicate-ignore-db             = information_schema

innodb_data_home_dir            = /mysql/ibdata/
innodb_data_file_path           = ibdata:156M:autoextend
innodb_log_group_home_dir       = /mysql/ibdata/
log-slave-updates               = 0
back_log                        = 512
transaction_isolation           = READ-COMMITTED
max_heap_table_size             = 246M
interactive_timeout             = 120
wait_timeout                    = 120
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size         = 512M
innodb_file_io_threads          = 4
innodb_thread_concurrency       = 8
innodb_flush_log_at_trx_commit  = 2
innodb_log_buffer_size          = 16M
innodb_log_file_size            = 128M
innodb_log_files_in_group       = 3
innodb_max_dirty_pages_pct      = 90
innodb_lock_wait_timeout        = 120
innodb_file_per_table           = 1
innodb_open_file                = 327500
open_files_limit                = 327500

[mysqldump]
quick                           = 1
max_allowed_packet              = 50M

[mysql]
auto-rehash                     = 1
socket                          = /mysql/var/db.socket
safe-updates                    = 0

[myisamchk]
key_buffer_size                 = 256M
sort_buffer_size                = 256M
read_buffer                     = 2M
write_buffer                    = 2M

[mysqlhotcopy]
interactive-timeout             = 100

The final backup directory structure is as follows:

[root@zabbix backup]# find .
.
./Monitor
./Monitor/2013-03-16-bin-log.000008
./Monitor/2013-03-14-bin-log.000006
./Monitor/2013-03-16-Monitor.sql
./Monitor/2013-03-15-Monitor.sql
./Monitor/2013-03-15-bin-log.000007
./Monitor/2013-03-14-Monitor.sql
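The dated file names in the listing above come from simple `datetime.date` arithmetic with a one-day timedelta, as in the script. A quick sketch pinned to a fixed date for illustration:

```python
import datetime

ONEDAY = datetime.timedelta(days=1)
today = datetime.date(2013, 3, 15)       # fixed date, for illustration only
yesterday = today - ONEDAY               # date arithmetic handles month edges

# str() on a date gives the ISO YYYY-MM-DD form used in the backup file names
print(str(yesterday) + '-Monitor.sql')   # 2013-03-14-Monitor.sql
```

Because `str()` of a `date` is already ISO formatted, no explicit `strftime` call is needed to build these names.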
