Background knowledge
As a distributed database, TiDB requires services to be installed and configured on multiple nodes, which is cumbersome to do by hand. Using an automation tool for batch deployment is a good way to simplify operations and ease management.
Ansible is a Python-based automation tool for operations and maintenance. It combines the strengths of many older tools to provide batch system configuration, batch program deployment, batch command execution, and more. It is also simple to use: you only need to install Ansible on the control machine and configure the IP addresses of the managed hosts; the managed hosts need no client. For these reasons, we use Ansible to install, configure, and deploy TiDB in batches.
The following describes how to deploy TiDB with Ansible.
The TiDB installation environment is as follows.
The operating system should be CentOS 7.2 or later, and the file system should be ext4.
Note: older operating systems (such as CentOS 6.6) and the XFS file system suffer from kernel bugs that can affect performance, so we do not recommend them.
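Before installing, it is worth confirming that each node meets these requirements. A quick sketch using standard commands; the device name /dev/sdb1 and mount point /home/tidb below are hypothetical, so adjust them to your environment:
# Check the operating system version (should be CentOS 7.2 or later)
cat /etc/redhat-release
# Check which filesystem backs the deployment directory (should be ext4)
df -T /home/tidb
# To prepare a fresh data disk as ext4 (destructive; hypothetical device name):
# mkfs.ext4 /dev/sdb1
# mount /dev/sdb1 /home/tidb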
We chose 3 PD nodes, 2 TiDB nodes, and 3 TiKV nodes. Here is a brief look at why we deploy it this way (a minimal test topology is sketched after this list).
- PD. PD is itself a distributed system: multiple nodes form a whole, and only one leader node serves requests at any time. The nodes determine the leader through an election algorithm, which requires an odd number of nodes (2n+1). A single node is high risk, so we use 3 nodes.
- TiKV. TiDB uses distributed storage underneath. We recommend an odd number (2n+1) of replicas, so that the data remains available after n replicas fail. With 1 or 2 replicas, a single failed node can make part of the data unavailable, so we use 3 nodes and keep the default of 3 replicas.
- TiDB. The TiDB server is stateless; if the existing TiDB services in the cluster are under pressure, you can simply add TiDB services on other nodes without extra configuration. We use 2 TiDB nodes for HA and load balancing.
- Of course, if you are only testing the cluster, you can use 1 PD, 1 TiDB, and 3 TiKV (with fewer than 3 TiKV nodes, reduce the replica count).
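For such a test cluster, the inventory.ini groups might look like the following sketch (the IP addresses are placeholders; the full production inventory we actually use appears later in this article):
[pd_servers]
192.168.1.101

[tidb_servers]
192.168.1.101

[tikv_servers]
192.168.1.104
192.168.1.105
192.168.1.106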
Download the TiDB installation package and unzip it
# Create a directory to hold the ansible package
mkdir /root/workspace
# Change into the directory
cd /root/workspace
# Download the package
wget https://github.com/pingcap/tidb-ansible/archive/master.zip
# Unzip the archive into the current directory
unzip master.zip
# Inspect the package layout; the main contents are described below
cd tidb-ansible-master && ls
The main contents are as follows:
- ansible.cfg: Ansible configuration file
- inventory.ini: group and host configuration
- conf: TiDB configuration templates
- group_vars: variable definitions
- scripts: Grafana monitoring JSON templates
- local_prepare.yml: downloads the required packages
- bootstrap.yml: initializes each node of the cluster
- deploy.yml: installs the TiDB services on each node
- roles: collection of Ansible tasks
- start.yml: starts all services
- stop.yml: stops all services
- unsafe_cleanup_data.yml: wipes the data
- unsafe_cleanup.yml: destroys the cluster
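If you want to see what any of these playbooks will do before running it, ansible-playbook can list its tasks without executing them, for example:
ansible-playbook -i inventory.ini deploy.yml --list-tasks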
Modify the configuration file
This file mainly configures how the cluster nodes are distributed, along with the installation path.
The TiDB service (others are similar) is installed on the machines in the tidb_servers group, and by default all services are installed under the path given by the deploy_dir variable.
# Nodes where the TiDB service will be installed
[tidb_servers]
192.168.1.102
192.168.1.103

# Nodes where the TiKV service will be installed
[tikv_servers]
192.168.1.104
192.168.1.105
192.168.1.106

# Nodes where the PD service will be installed
[pd_servers]
192.168.1.101
192.168.1.102
192.168.1.103

# Node where the Prometheus service will be installed (monitoring)
[monitoring_servers]
192.168.1.101

# Node where the Grafana service will be installed
[grafana_servers]
192.168.1.101

# Nodes where the node_exporter service will be installed
[monitored_servers:children]
tidb_servers
tikv_servers
pd_servers

[all:vars]
# Service installation path, the same on every node; configure it to suit your environment
deploy_dir = /home/tidb/deploy

## Connection
# Mode 1: install as the root user
# ssh via root:
# ansible_user = root
# ansible_become = true
# ansible_become_user = tidb
# Mode 2: install as a normal user (requires sudo permission)
# ssh via normal user
ansible_user = tidb

# Cluster name, can be customized
cluster_name = test-cluster

# misc
enable_elk = False
enable_firewalld = False
enable_ntpd = False

# binlog trigger
# Whether to start Pump; Pump generates the TiDB binlog.
# If you need to synchronize data out of this TiDB cluster, change this to True.
enable_binlog = False
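Before running any playbook, it is worth verifying that the control machine can actually reach every host in inventory.ini. A quick check using Ansible's built-in ping module (-k prompts for the SSH password if keys are not yet distributed):
ansible -i inventory.ini all -m ping -k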
The installation can be done in two ways: as the root user or as a normal user. Root is of course the easiest: modifying system parameters, creating directories, and so on will never run into insufficient permissions, so the installation can run straight through. However, some environments do not hand out root privileges directly, and in that case you need to install as a normal user. For ease of configuration, we recommend that all nodes use the same normal user, and that this user be given sudo permission to satisfy the privilege requirements (a sudoers sketch follows). The detailed procedures for the two installation methods are described below; you will need to start the services manually after the installation completes.
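For the normal-user mode, the sudo grant is usually made passwordless so the playbooks do not stall on password prompts. A minimal sketch, assuming the shared user is named tidb (edit /etc/sudoers with visudo on each managed node):
# /etc/sudoers entry, added via visudo
tidb ALL=(ALL) NOPASSWD: ALL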
1. Install using the root user
- Download the binary packages into the downloads directory; they are unzipped and copied to resources/bin, and the rest of the installation uses the binaries under resources/bin.
ansible-playbook -i inventory.ini local_prepare.yml
- Initialize each node of the cluster. This checks the inventory.ini configuration file, Python version, network status, operating system version, and so on, modifies some kernel parameters, and creates the required directories.
- Modify the configuration file as follows
## Connection
# ssh via root:
ansible_user = root
ansible_become = true
ansible_become_user = tidb
# ssh via normal user
# ansible_user = tidb
- Execute the initialization command
ansible-playbook -i inventory.ini bootstrap.yml -k  # see the appendix for a description of the ansible-playbook options
- Install the services. This step installs the appropriate services on each server and automatically sets up the configuration files and required scripts.
- Modify the configuration file as follows
## Connection
# ssh via root:
ansible_user = root
ansible_become = true
ansible_become_user = tidb
# ssh via normal user
# ansible_user = tidb
- Execute the install command
ansible-playbook -i inventory.ini deploy.yml -k
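After deploy.yml finishes, you can spot-check that the files landed under deploy_dir on every node with an ad-hoc Ansible command, for example (assuming deploy_dir = /home/tidb/deploy as configured above):
ansible -i inventory.ini all -m shell -a "ls /home/tidb/deploy" -k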
2. Install as a normal user
- Download the binary packages to the control machine
ansible-playbook -i inventory.ini local_prepare.yml
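- Initialize each node of the cluster, as in the root flow above; with a normal user, both the SSH password (-k) and the sudo password (-K) are needed. A sketch, assuming bootstrap.yml takes the same flags as deploy.yml below:
ansible-playbook -i inventory.ini bootstrap.yml -k -K
- Install the services; again -k prompts for the SSH password and -K for the sudo password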
ansible-playbook -i inventory.ini deploy.yml -k -K
Start and stop the services
# Start all services
ansible-playbook -i inventory.ini start.yml -k
# Stop all services
ansible-playbook -i inventory.ini stop.yml
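Once start.yml completes, a simple smoke test is to connect to one of the TiDB servers with a MySQL client; TiDB speaks the MySQL protocol and listens on port 4000 by default. A sketch using one of the tidb_servers addresses from the inventory above (the root password is empty on a fresh cluster):
mysql -u root -h 192.168.1.102 -P 4000
# inside the client, confirm the version:
# SELECT tidb_version();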
Appendix