Preliminary study on Percona XtraDB Cluster

Source: Internet
Author: User
Tags: percona

Percona XtraDB Cluster (hereinafter referred to as the PXC cluster) provides a way to implement MySQL high availability. A PXC cluster is composed of nodes (at least 3 nodes are recommended; two-node setups are discussed later), each of which is based on a regular MySQL/Percona Server. This means you can either join an existing server to the cluster or split a node off from the cluster and use it on its own. Each node in the cluster contains the complete data set.

The PXC cluster consists of two main parts: Percona Server with XtraDB, and the Write Set Replication patches (using the Galera library, a generic synchronous, multi-master replication plugin for transactional applications).

PXC features and benefits:
1. Synchronous replication.
2. Multi-master replication support.
3. Parallel replication support.
4. As a high-availability solution, it is relatively simple and straightforward in structure and implementation compared to other schemes.

PXC limitations and disadvantages:
1. The current version (5.6.20) only supports replication for the InnoDB engine; changes to other storage engines are not replicated. However, DDL (data definition language) statements are replicated at the statement level, and changes to the mysql.* tables are replicated on that basis. For example, a CREATE USER ... statement is replicated, but an INSERT INTO mysql.user ... statement is not. (Replication of the MyISAM engine can be enabled with the wsrep_replicate_myisam parameter, but it is experimental.)
2. Because of the cluster's internal consistency control, a transaction may be terminated: the cluster allows two nodes to execute transactions against the same row at the same time, but only one can succeed; the other is aborted and the cluster returns a deadlock error (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
3. Write efficiency depends on the weakest node, because the PXC cluster uses strong consistency: a change operation must succeed on all nodes.

The PXC journey below covers three aspects: installation and deployment, functional testing, and performance testing.

Installation and deployment

Lab environment: three servers, with the following information:
node #1    hostname: percona1    IP: 192.168.1.35
node #2    hostname: percona2    IP: 192.168.1.36
node #3    hostname: percona3    IP: 192.168.1.37

Note: the firewall has been set up to allow connections to ports 3306, 4444, 4567 and 4568, and SELinux is disabled. If you do not disable SELinux and then start the other (non-Node1) nodes, "[ERROR] WSREP: Permission denied" is recorded in the error log.
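For reference only (not part of the original deployment notes): on a CentOS 6-style system that uses iptables, opening these four ports and disabling SELinux might look like the following sketch; adjust it to whatever firewall tooling your servers actually use.

    # Open the ports PXC needs: 3306 (MySQL), 4444 (SST), 4567 (Galera group communication), 4568 (IST)
    iptables -I INPUT -p tcp -m multiport --dports 3306,4444,4567,4568 -j ACCEPT
    service iptables save
    # Disable SELinux now and after reboot
    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config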
One. Install Percona-XtraDB-Cluster-56 on all three servers

1. Install the EPEL source:
   yum install http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
   (EPEL contains relatively new software sources; the key point is that it includes the socat package that Percona XtraDB Cluster depends on.)
   Edit /etc/yum.repos.d/epel.repo so that the EPEL source is actually used: uncomment the baseurl lines and comment out the mirrorlist lines.

2. Install the build and library packages required by Percona XtraDB Cluster, to avoid configuration errors later:
   shell> yum install -y cmake gcc gcc-c++ libaio libaio-devel automake autoconf bzr bison libtool ncurses5-devel boost

3. Install and configure the official Percona yum source:
   yum install http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm

4. Install socat. "socat is a relay for bidirectional data transfer between two independent data channels." After the EPEL source is configured, you can simply execute yum install socat*. If socat cannot be installed with yum, compile and install it as follows:
   wget http://www.dest-unreach.org/socat/download/socat-1.7.2.4.tar.gz
   tar zxvf socat-1.7.2.4.tar.gz
   cd socat-1.7.2.4
   ./configure
   make && make install

5. Install the Perl components required by xtrabackup:
   yum install perl-DBD-MySQL perl-DBI perl-Time-HiRes

6. Install percona-xtradb-cluster and its related components:
   yum install Percona-XtraDB-Cluster-56

Two. Initialize the Percona XtraDB Cluster

Perform the cluster initialization on any one node (typically Node1). Create /etc/my.cnf with the following content:

[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3.
# When Node1 is started for the first time (to initialize the cluster), do not list the node IPs here;
# afterwards, switch to the commented line below.
wsrep_cluster_address=gcomm://
#wsrep_cluster_address=gcomm://192.168.1.35,192.168.1.36,192.168.1.37
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address ---- the IP of this machine
wsrep_node_address=192.168.1.35
# SST method ---- how nodes synchronize data with each other
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method ---- the account and password used for data synchronization between nodes
wsrep_sst_auth="sstuser:s3cret"

Note: the first time you start Node1 (to initialize the cluster), the wsrep_cluster_address=gcomm:// line must not list the node IPs, otherwise the other nodes will fail to start. After initialization is complete, this line must be changed to include the IPs of all nodes.

On Node1, execute /etc/init.d/mysql bootstrap-pxc to initialize the cluster. Then edit my.cnf, change the line wsrep_cluster_address=gcomm:// to wsrep_cluster_address=gcomm://192.168.1.35,192.168.1.36,192.168.1.37, and execute service mysql restart. At this point the cluster initialization is complete.

On Node1, set the MySQL root password:
mysql> UPDATE mysql.user SET password=PASSWORD("Passw0rd") WHERE user='root';
mysql> FLUSH PRIVILEGES;

Set the account and password used for replication (SST) between nodes:
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost' IDENTIFIED BY 's3cret';
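The original article moves straight on to the other nodes, but a quick sanity check after the bootstrap (and again after each node joins) is to look at the Galera status variables from the mysql client. These are standard wsrep status variables; the values in the comments are what a healthy node reports.

    mysql> SHOW STATUS LIKE 'wsrep_cluster_size';          -- number of nodes currently in the cluster
    mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';   -- 'Synced' on a healthy node
    mysql> SHOW STATUS LIKE 'wsrep_ready';                  -- 'ON' when the node accepts queries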
Three. Configure Node2 and Node3 and start replication

Copy the content of Node1's /etc/my.cnf to Node2 and Node3, changing wsrep_node_address=192.168.1.35 to the IP of the local machine on each node. Then start Node2 and Node3 by executing service mysql start.

Note: Node2 and Node3 will synchronize the account settings (including the SST account) from Node1. After a successful configuration, if all of the node instances in the cluster have been shut down (or have crashed), you need to pick the node that holds the most recent data as the bootstrap node, execute /etc/init.d/mysql bootstrap-pxc on it to start it, and then start the remaining nodes (one way to identify that node is sketched below).

Cluster high-availability validation

1. Create a database on Node1, create a table in the new database on Node2, and insert a record into the new table on Node3. Then query on each node: if the configuration is correct, any node can see the newly inserted record (a minimal SQL walk-through of this check appears at the end of the article).
2. Stop the MySQL service (service mysql stop) on any two nodes (e.g. Node1 and Node3), then perform an insert on Node2, and then start Node1 and Node3 again. Under normal circumstances, all nodes can see the record inserted on Node2.

Write efficiency test

Test method:
1. Import a SQL file into each of the three nodes running as independent servers, execute three times and take the average:
   time mysql -uroot -pxxx system < /system.sql
   The average time is about 6m20s.
2. With the whole cluster up, perform the same import on the three nodes, again taking the average of three executions: the average time is about 7m50s.

According to this preliminary test, the write performance of the PXC cluster is about 12% lower than that of a single server.
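The article does not say how to decide which node holds the most recent data after a full shutdown. As a hedged aside: Galera-based nodes record their last committed state in grastate.dat under the data directory (/var/lib/mysql/grastate.dat with the datadir configured above), so comparing the seqno value across nodes and bootstrapping the one with the highest value is a common approach.

    # Run on every node while MySQL is stopped; bootstrap the node whose seqno is highest
    cat /var/lib/mysql/grastate.dat
    # typical contents:
    #   # GALERA saved state
    #   version: 2.1
    #   uuid:    <cluster state UUID>
    #   seqno:   <last committed transaction number; -1 usually means an unclean shutdown>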

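For completeness, the high-availability check in step 1 above could be run as the following SQL, using hypothetical database and table names (pxc_test and t1) that are not part of the original article.

    -- on Node1
    CREATE DATABASE pxc_test;
    -- on Node2
    CREATE TABLE pxc_test.t1 (id INT PRIMARY KEY, note VARCHAR(30)) ENGINE=InnoDB;
    -- on Node3
    INSERT INTO pxc_test.t1 VALUES (1, 'written on node3');
    -- on any node: the row should be visible everywhere
    SELECT * FROM pxc_test.t1;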