Install TiDB Cluster Online


    • Server Preparation

Description: TiDB8 (the control machine) must be able to reach the Internet to download the various installation packages.

TiDB4 is not strictly required, but it is useful to have: it is needed later for testing MySQL data synchronization and for performance comparisons.

TiKV works best on an ext4 filesystem, so mount a dedicated data disk for it (without a data disk, the deployment cannot be configured to install successfully).
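If the data disk has not yet been formatted as ext4, a minimal sketch is shown below; the device name /dev/vdb is an assumption, so confirm your actual device with fdisk -l first:

# fdisk -l
# mkfs.ext4 /dev/vdb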

Machine name | IP  | Operating system | Configuration                  | Use
TiDB1        | .62 | CentOS 7.4 x64   | 4C + 8G + 60G + 200G data disk | TiKV + TiSpark
TiDB2        | .63 | CentOS 7.4 x64   | 4C + 8G + 60G + 200G data disk | TiKV + TiSpark
TiDB3        | .64 | CentOS 7.4 x64   | 4C + 8G + 60G + 200G data disk | TiKV + TiSpark
TiDB4        | .65 | CentOS 7.4 x64   | 4C + 8G + 260G                 | MySQL 5.7 + test tools
TiDB5        | .66 | CentOS 7.4 x64   | 4C + 8G + 60G                  | TiDB + PD
TiDB6        | .67 | CentOS 7.4 x64   | 4C + 8G + 60G                  | TiDB + PD
TiDB7        | .68 | CentOS 7.4 x64   | 4C + 8G + 60G                  | TiDB + PD
TiDB8        | .69 | CentOS 7.4 x64   | 4C + 8G + 60G                  | Control machine: Ansible + monitoring

    • TiKV Data Disk Mount

These operations are required on TiDB1, TiDB2, and TiDB3.

Edit /etc/fstab:

# vi /etc/fstab

Add the mount entry:

/dev/mapper/centos-home /home ext4 defaults,nodelalloc,noatime 0 0

Parameter explanation:

noatime: does not update inode access times on the filesystem, which improves performance.

nodelalloc: the filesystem commits its journal and triggers write-back about every 5 seconds; with delalloc (the default), write-back is triggered only about every 30 seconds. For frequent reads and writes, nodelalloc speeds up writes.

The last 0 is the fsck field, which determines the order in which filesystems are checked at boot: the root filesystem (/) should have the value 1, other filesystems 2, and 0 means the filesystem is not checked at startup.

Unmount the directory and remount it:

# umount /home
# mount -a

Confirm that the change took effect; if it did, the mount options now include nodelalloc:

# mount -t ext4
/dev/mapper/centos-home on /home type ext4 (rw,noatime,seclabel,nodelalloc,data=ordered)
    • Add the tidb user and set passwordless sudo on all servers

Add the user:

# useradd tidb
# passwd tidb

If the password is shorter than 8 characters, passwd prints a warning but accepts it anyway. If the password cannot be set at all, adjust the PASS_MIN_LEN parameter in /etc/login.defs.
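For reference, the relevant line in /etc/login.defs looks like the following; the value shown here is only an illustration, not from the original setup:

PASS_MIN_LEN    8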

Set up passwordless sudo:

# visudo

Add tidb ALL=(ALL) NOPASSWD: ALL as the last line, then save.
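As an optional check, you can switch to the tidb user and list its sudo rights; the NOPASSWD rule should appear in the output:

$ su - tidb
$ sudo -l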

    • Configure the user, Ansible, and passwordless SSH on the control machine

Log in to the control machine as the tidb user.

Download tidb-ansible

$ cd /home/tidb
$ sudo yum -y install git
$ git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git

Available versions can be checked on GitHub. To use the master branch instead, run git clone https://github.com/pingcap/tidb-ansible.git
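To see which release branches exist without cloning, one option (not part of the original steps) is:

$ git ls-remote --heads https://github.com/pingcap/tidb-ansible.git | grep release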

Installing Ansible

$ sudo yum -y install epel-release
$ sudo yum -y install python-pip curl
$ sudo yum -y install sshpass
$ cd tidb-ansible
$ sudo pip install -r ./requirements.txt
$ ansible --version
ansible 2.5.0

Configure passwordless SSH access to the other servers

$ ssh-keygen -t rsa

Generate the key, pressing Enter at each prompt.
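The key pair is written to the tidb user's ~/.ssh directory; a quick check:

$ ls ~/.ssh
id_rsa  id_rsa.pub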

Configure the list of target servers for the passwordless setup; note that the masked octets must be replaced with real IPs.

$ vi hosts.ini
[servers]
***.***.***.62
***.***.***.63
***.***.***.64
***.***.***.65
***.***.***.66
***.***.***.67
***.***.***.68
***.***.***.69

[all:vars]
username = tidb
ntp_server = pool.ntp.org

Use Ansible to perform the passwordless setup:

$ ansible-playbook -i hosts.ini create_users.yml -k

Verification

$ ssh ***.***.***.68
$ sudo su root

If you can log in from the control machine without a password, and then switch to root without a password after logging in, the passwordless setup succeeded.

Pay special attention to the control machine: it must also be able to reach itself without a password, so that later steps that install components on the control machine need no special settings.
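If the control machine's own address was not already covered by create_users.yml, one way to set it up, as a sketch using the masked IP from above, is:

$ ssh-copy-id tidb@***.***.***.69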

    • Shut down the firewall on all servers

Log in as the tidb user, stop the firewall, and disable it at boot:

$ cd /home/tidb/tidb-ansible
$ ansible -i hosts.ini all -m shell -a "firewall-cmd --state" -b
$ sudo systemctl stop firewalld.service
$ sudo systemctl disable firewalld.service
$ ansible -i hosts.ini all -m shell -a "systemctl stop firewalld.service" -b
$ ansible -i hosts.ini all -m shell -a "systemctl disable firewalld.service" -b

    • Set NTP

Log in to the control machine as the tidb user and install the NTP service on all machines:

$ cd /home/tidb/tidb-ansible
$ ansible -i hosts.ini all -m shell -a "yum install -y ntp" -b

Configure the control machine as the time server

$ sudo vi /etc/ntp.conf

Make the following changes:

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
restrict ***.**.*.0 mask 255.255.255.0

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 127.127.1.0
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

Note: you need to add a restrict line for your current server IP segment; the segment above is masked for confidentiality, so fill in your own.

server 127.127.1.0 points NTP at the machine's local clock, so the control machine can keep serving time even when the upstream servers are unreachable.
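For reference, the local clock entry is often paired with a fudge line that assigns it a high stratum so it is only used as a fallback; adding it is an assumption, not part of the original steps:

server 127.127.1.0
fudge 127.127.1.0 stratum 10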

Configure NTP on all the other machines

$ sudo vi /etc/ntp.conf

Make the following modifications:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server ***.***.***.69 iburst
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst

iburst: when the server is unreachable, packets are sent at 8 times the default rate. The IP above is masked for confidentiality; use the actual IP of the control machine.

Start the NTP service

Log in to the control machine as tidb:

$ cd /home/tidb/tidb-ansible
$ ansible -i hosts.ini all -m shell -a "systemctl disable chronyd.service" -b
$ ansible -i hosts.ini all -m shell -a "systemctl enable ntpd.service" -b
$ ansible -i hosts.ini all -m shell -a "systemctl start ntpd.service" -b

Verifying NTP Services

$ ansible -i hosts.ini all -m shell -a "ntpstat" -b
$ ansible -i hosts.ini all -m shell -a "ntpq -p" -b

ntpstat shows each server's NTP synchronization status; ntpq -p shows the time servers each machine is currently using.
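As a rough guide, a synchronized node's ntpstat output looks something like the following; the values are illustrative only:

synchronised to NTP server (***.***.***.69) at stratum 3
   time correct to within 42 ms
   polling server every 64 s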

    • Configuring the cluster topology

Here each server runs a single TiKV instance with the standard directory layout. To run multiple TiKV instances on one server or change the data directories, refer to the official documentation.

The topology follows the server plan from the table in the Server Preparation section.

Log in to the control machine as tidb and edit the topology configuration:

$ cd /home/tidb/tidb-ansible
$ vi inventory.ini

Configure the server IPs under each group:

# TiDB Cluster part
[tidb_servers]
***.***.**.66
***.***.**.67
***.***.**.68

[tikv_servers]
***.***.**.62
***.***.**.63
***.***.**.64

[pd_servers]
***.***.**.66
***.***.**.67
***.***.**.68

[spark_master]
***.***.**.62

[spark_slaves]
***.***.**.63
***.***.**.64

## Monitoring part
# Prometheus and pushgateway servers
[monitoring_servers]
***.***.**.69

[grafana_servers]
***.***.**.69

# node_exporter and blackbox_exporter servers
[monitored_servers]
***.***.**.62
***.***.**.63
***.***.**.64
***.***.**.65
***.***.**.66
***.***.**.67
***.***.**.68
***.***.**.69

Replace *** with the actual IPs.

    • Installing the TiDB Cluster

Pre-installation checks

Execute the following command; if every server returns tidb, the SSH trust configuration succeeded.

$ ansible -i inventory.ini all -m shell -a 'whoami'

Execute the following command; if every server returns root, the tidb user's passwordless sudo configuration succeeded.

$ ansible -i inventory.ini all -m shell -a 'whoami' -b

Download the installation package

$ ansible-playbook local_prepare.yml

After it finishes, the latest TiDB packages will have been downloaded to the downloads directory.
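A quick way to confirm the download is to list that directory:

$ ls /home/tidb/tidb-ansible/downloads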

Adjust the system environment and kernel parameters:

$ ansible-playbook bootstrap.yml

This playbook also runs some checks and may report errors, for example that the CPU has too few cores; if the issue is not critical, you can continue directly.

Deploy the cluster according to inventory.ini:

$ ansible-playbook deploy.yml

Deployment takes a while, so watch the log output. When output like the following appears, the deployment succeeded. If failed is not 0, you can re-run the deployment; if it fails repeatedly, investigate the cause of the error.

***.***.**.62              : ok=59   changed=28   unreachable=0   failed=0
***.***.**.63              : ok=60   changed=29   unreachable=0   failed=0
***.***.**.64              : ok=60   changed=29   unreachable=0   failed=0
***.***.**.65              : ok=32   changed=15   unreachable=0   failed=0
***.***.**.66              : ok=66   changed=28   unreachable=0   failed=0
***.***.**.67              : ok=66   changed=     unreachable=0   failed=0
***.***.**.68              : ok=66   changed=28   unreachable=0   failed=0
***.***.**.69              : ok=85   changed=48   unreachable=0   failed=0
localhost                  : ok=1    changed=0    unreachable=0   failed=0
Congrats! All goes well. :-)

Configuring the JDK for TiSpark

If you use TiSpark, you need to configure the JDK on the corresponding servers. First put the JDK package on the control machine, then transfer it to the servers where TiSpark runs.

If you can transfer the package to those servers directly, this intermediate step is unnecessary.

$ cd /home/tidb
$ mkdir software

Transfer the JDK archive, jdk-8u91-linux-x64.tar.gz, into this directory first.

Create the /opt/jdk directory on each TiKV server, then go back to the control machine and transfer the archive with scp:

$ scp jdk-8u91-linux-x64.tar.gz root@***.**.**.62:/opt/jdk/
$ scp jdk-8u91-linux-x64.tar.gz root@***.**.**.63:/opt/jdk/
$ scp jdk-8u91-linux-x64.tar.gz root@***.**.**.64:/opt/jdk/

Replace *** with the actual IPs.

Switch to root on the 62, 63, and 64 servers, extract the JDK, and configure the environment variables. Take 62 as an example:

$ ssh 172.18.100.62
$ su -
# cd /opt/jdk/
# tar zxvf jdk-8u91-linux-x64.tar.gz
# vi /etc/profile

Add the JDK configuration at the end of the file (the done and unset lines are the existing end of the file):

done
export JAVA_HOME=/opt/jdk/jdk1.8.0_91
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
unset i
unset -f pathmunge

Verify that the JDK installation is successful

# source /etc/profile
# java -version
# su - tidb
$ java -version
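If the installation succeeded, both users should see version output similar to the following for the jdk-8u91 package; the build numbers are illustrative:

java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)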

    • Starting and stopping the TiDB cluster

Log in as tidb and start the cluster:

$ cd /home/tidb/tidb-ansible
$ ansible-playbook start.yml

Output like the following indicates a successful start:

***.***.**.62              : ok=18   changed=3    unreachable=0   failed=0
***.***.**.63              : ok=18   changed=3    unreachable=0   failed=0
***.***.**.64              : ok=18   changed=3    unreachable=0   failed=0
***.***.**.65              : ok=14   changed=2    unreachable=0   failed=0
***.***.**.66              : ok=     changed=4    unreachable=0   failed=0
***.***.**.67              : ok=18   changed=4    unreachable=0   failed=0
***.***.**.68              : ok=18   changed=4    unreachable=0   failed=0
***.***.**.69              : ok=31   changed=10   unreachable=0   failed=0
localhost                  : ok=1    changed=0    unreachable=0   failed=0
Congrats! All goes well. :-)

Log in as tidb and stop the cluster:

$ cd /home/tidb/tidb-ansible
$ ansible-playbook stop.yml

Output like the following indicates a successful stop:

***.***.**.62              : ok=18   changed=2    unreachable=0   failed=0
***.***.**.63              : ok=18   changed=2    unreachable=0   failed=0
***.***.**.64              : ok=18   changed=2    unreachable=0   failed=0
***.***.**.65              : ok=13   changed=1    unreachable=0   failed=0
***.***.**.66              : ok=     changed=3    unreachable=0   failed=0
***.***.**.67              : ok=17   changed=3    unreachable=0   failed=0
***.***.**.68              : ok=17   changed=3    unreachable=0   failed=0
***.***.**.69              : ok=20   changed=4    unreachable=0   failed=0
localhost                  : ok=1    changed=0    unreachable=0   failed=0
Congrats! All goes well. :-)

    • Check cluster status

Test with the MySQL client tool:

mysql -u root -h ***.***.**.66 -P 4000
mysql -u root -h ***.***.**.67 -P 4000
mysql -u root -h ***.***.**.68 -P 4000
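Once connected, a simple smoke test is to query the server version; tidb_version() is a built-in TiDB function:

mysql> SELECT tidb_version();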

Access the monitoring platform through the browser

Address: http://***.***.**.69:3000 The default account and password are admin/admin.

With this, the platform installation is complete. TiSpark has not yet been tested; that content will be added after testing.
