Contents
OpenStack-Mitaka High Availability Overview
OpenStack-Mitaka High Availability Environment Initialization
OpenStack-Mitaka High Availability MariaDB Galera Cluster Deployment
OpenStack-Mitaka High Availability Memcached
OpenStack-Mitaka High Availability pacemaker+corosync+pcs Cluster
OpenStack-Mitaka High Availability Identity Service (Keystone)
OpenStack-Mitaka High Availability Compute Service (Nova)
OpenStack-Mitaka High Availability Networking Service (Neutron)
OpenStack-Mitaka High Availability Dashboard
OpenStack-Mitaka High Availability Launching an Instance
OpenStack-Mitaka High Availability Testing
Introduction and Features
Main features of MariaDB Galera Cluster:
(1) Synchronous replication of data across multiple nodes
(2) Every node is a primary node, and every node holds a complete copy of the data
(3) Every node can serve both reads and writes
(4) Failed nodes are evicted automatically, and new nodes synchronize automatically when they join (caution: a joining node causes tables to be locked)
Advantages:
(1) Multi-master architecture with no replication delay (traditional master-slave replication copies data asynchronously; Galera replicates synchronously)
(2) No transaction loss (to be verified)
(3) Every node accepts reads and writes, and clients can connect to any node, improving load capacity
Disadvantages:
(1) Adding a new node locks tables while the data synchronizes
(2) Every write operation is applied on every node
(3) Each node stores a full copy of the data, so total storage grows with the number of nodes
(4) An unstable network can cause split-brain, during which the service is unavailable; not suitable for production environments holding critical data
(5) Only the InnoDB/XtraDB storage engines are supported (see the query after this list)
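Because only InnoDB/XtraDB tables are replicated, it is worth checking an existing database for tables that use other engines before moving it into the cluster. A minimal sketch (the excluded system schemas are an assumption; adjust as needed):
MariaDB [(none)]> SELECT table_schema, table_name, engine
    -> FROM information_schema.tables
    -> WHERE engine <> 'InnoDB'
    -> AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');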
Workflow Diagram
When a client issues a COMMIT, all changes made by the transaction are collected into a write-set before the transaction commits, and the contents of the write-set are sent to the other nodes.
On each node, the write-set then goes through a certification test based on the primary keys it touches, and the result determines whether that node applies the changes. If certification fails, the node discards the write-set and the transaction is rolled back; if it succeeds, the transaction commits.
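A practical consequence of certification worth knowing: if two nodes update the same row concurrently, the transaction that commits first wins, and the later one fails certification and is rolled back with a deadlock error. A minimal sketch, assuming a hypothetical table test.t1 with primary key id:
-- run on Controller1 and Controller2 at the same time:
MariaDB [(none)]> START TRANSACTION;
MariaDB [(none)]> UPDATE test.t1 SET val = val + 1 WHERE id = 1;
MariaDB [(none)]> COMMIT;
-- whichever node commits second fails certification and reports:
-- ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction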
Building the Galera Cluster
Perform the following operations on all three nodes:
# yum install mariadb-galera-server mariadb-client galera -y
The configuration files created by the installation:
# ll /etc/my.cnf.d/
total ...
-rw-r--r-- 1 root root  295 ... client.cnf
-rw-r--r-- 1 root root  232 ... mysql-clients.cnf
-rw-r--r-- 1 root root 1007 ... server.cnf
-rw-r--r-- 1 root root  285 ... tokudb.cnf
Start the database
# /etc/init.d/mysql start
Hardening the Database
# mysql_secure_installation
The password used here is 123456; yours does not need to match.
Authorize the cluster authentication user:
# mysql -p123456
Add the cluster authentication user:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'galera'@'%' IDENTIFIED BY 'galera' WITH GRANT OPTION;
MariaDB [(none)]> FLUSH PRIVILEGES;
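To confirm the account was created, an optional quick check on each node:
MariaDB [(none)]> SELECT User, Host FROM mysql.user WHERE User = 'galera';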
Stop the MySQL service on all nodes:
# /etc/init.d/mysql stop
Add the following to the [mariadb] section of /etc/my.cnf.d/server.cnf:
Controller1:
[mariadb]
query_cache_size=0                                        # disable the query cache
binlog_format=ROW                                         # binlog file format: row
default_storage_engine=InnoDB                             # MariaDB storage engine
innodb_autoinc_lock_mode=2                                # change primary-key auto-increment to interleaved mode
wsrep_provider=/usr/lib64/galera/libgalera_smm.so         # Galera library file
wsrep_cluster_address=gcomm://192.168.0.12,192.168.0.13   # Galera cluster URL
wsrep_cluster_name='OpenStack'                            # Galera cluster name
wsrep_node_address='192.168.0.11'                         # this node's address
wsrep_node_name='Controller1'                             # this node's host name
wsrep_sst_method=rsync                                    # replication (SST) method
wsrep_sst_auth=galera:galera                              # Galera cluster authentication user:password
Controller2:
[mariadb]
query_cache_size=0
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.0.11,192.168.0.13
wsrep_cluster_name='OpenStack'
wsrep_node_address='192.168.0.12'
wsrep_node_name='Controller2'
wsrep_sst_method=rsync
wsrep_sst_auth=galera:galera
Controller3:
[mariadb]
query_cache_size=0
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.0.11,192.168.0.12
wsrep_cluster_name='OpenStack'
wsrep_node_address='192.168.0.13'
wsrep_node_name='Controller3'
wsrep_sst_method=rsync
wsrep_sst_auth=galera:galera
Here, starting the first cluster node is a bit special:
Controller1: # /etc/init.d/mysql bootstrap
Controller2: # /etc/init.d/mysql start
Controller3: # /etc/init.d/mysql start
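Once all three nodes are up, each should report itself as synced; a quick per-node check using the standard Galera status variable:
# mysql -p123456
MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_local_state_comment';
The expected Value is Synced.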
Log on to any node for verification:
# mysql -p123456
MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size';
MariaDB [(none)]> SHOW STATUS LIKE 'ws%';
Comments:
wsrep_cluster_status is Primary, indicating that the node is part of the primary component and is serving normally.
wsrep_ready is ON, indicating that the cluster is functioning properly.
wsrep_cluster_size is 3, indicating that the cluster has three nodes.
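For reference, the cluster-size query should return output along these lines (reconstructed, not captured from a live environment):
MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+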
To create a database for testing:
MariaDB [(none)]> CREATE DATABASE abcd;
Then log on to the database on another node to confirm that it exists:
MariaDB [(none)]> SHOW DATABASES;
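To exercise replication in both directions, you can also create a table on one node, insert from a second, and read from the third; the table and values below are made up for the test (Galera works best when every replicated table has an explicit primary key):
MariaDB [(none)]> CREATE TABLE abcd.t1 (id INT PRIMARY KEY, note VARCHAR(32)) ENGINE=InnoDB;
-- on a second node:
MariaDB [(none)]> INSERT INTO abcd.t1 VALUES (1, 'hello');
-- on the third node:
MariaDB [(none)]> SELECT * FROM abcd.t1;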
The MariaDB Galera cluster is now complete.
After the cluster has been built, fill in the rest of the configuration file:
# vim server.cnf
[mariadb-10.0]
port=3306
bind_address=192.168.0.11
tmpdir=/tmp
skip-external-locking
skip-name-resolve
max_connections=3600
innodb_flush_log_at_trx_commit=2
innodb_log_file_size=100M
innodb_log_files_in_group=5
thread_concurrency=...
innodb_thread_concurrency=...
innodb_commit_concurrency=...
character-set-server=utf8
collation-server=utf8_general_ci
event_scheduler=ON
max_allowed_packet=20M
All three controller nodes must listen on their own management addresses; set bind_address accordingly on each node.
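Concretely, using the management addresses from the Galera settings above:
Controller1: bind_address=192.168.0.11
Controller2: bind_address=192.168.0.12
Controller3: bind_address=192.168.0.13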
Note:
When all nodes have gone down, simply starting a node again will fail to bring the MariaDB Galera cluster up.
MariaDB Galera cluster startup is order-sensitive and follows one principle: the node that went down last must be started first, because the cluster considers that node's data to be the most recent. A recovery sketch follows.
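To tell which node stopped last, you can compare each node's saved Galera state; a sketch, assuming the default datadir /var/lib/mysql:
# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid:    <cluster uuid>
seqno:   <last committed transaction; the node with the highest value is the most recent>
If a node crashed rather than shut down cleanly, seqno may show -1; it can then be recovered with mysqld --wsrep-recover. Bootstrap the most recent node first and start the others normally:
# /etc/init.d/mysql bootstrap     (on the most recent node)
# /etc/init.d/mysql start         (on the remaining nodes)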