Installation of TiDB

TiDB is an open source distributed HTAP (Hybrid Transactional and Analytical Processing) database from PingCAP, designed along the lines of Google's Spanner and F1 papers, combining the best features of a traditional RDBMS and NoSQL. TiDB is compatible with MySQL and supports essentially unlimited scaling, with strong consistency and high availability (per the official website). You can use the MySQL client tools to connect to TiDB, and TiDB and MySQL command usage are the same, so there is no steep learning or development cost.

The official website currently strongly recommends deploying TiDB with Ansible: Ansible deployment is quick and easy. But to get a better understanding of the whole architecture, it is worth deploying it manually once.

TiDB architecture (image from the official website).

Downloading TiDB

Ansible mode:

git clone https://github.com/pingcap/tidb-ansible.git
cd tidb-ansible
ansible-playbook local_prepare.yml
cd downloads    # here you can see the downloaded TiDB installation packages, tools, etc.

Or download directly: http://download.pingcap.org/tidb-latest-linux-amd64-unportable.tar.gz
Or from the share link https://pan.baidu.com/s/1hkQ_Fsbxtzjzfxgx2sryga (password: fj16).

After manually downloading the TiDB installation package and extracting it, the TiDB directory layout looks as follows. Besides the servers, the official package also ships a number of tools (checker, dump_region, loader, syncer, etc.) that can be used to export data from MySQL and import it, replicate in real time from the binlog, and so on.

Installing TiDB

Three types of nodes need to be installed: PD (named tipd in this article's config files), TiKV, and TiDB. The boot sequence is: PD, then TiKV, then TiDB.

PD node: the management node. It manages the metadata and schedules the data evenly across the TiKV nodes.
TiKV node: the node that stores the data; multiple replicas can be set up for redundancy.
TiDB node: the client connection and compute node. It holds no data and no state.

1. Installation environment:
System: CentOS 6.6 (the official website recommends CentOS 7; CentOS 6 needs its glibc library upgraded to 2.17 or later).
Disk: TiDB is optimized for SSDs only; we recommend using SSDs.
Go environment: go1.10.2 linux/amd64 or above is suggested. (TiDB is written in Go, so compiling it requires a Go environment that you configure yourself; the binary package can simply be extracted and used without installing Go.)

2. Directory and file conventions: following my installation habits, plan the installation directory, data directories, and file names first.
Install under /usr/local; extracted directory /usr/local/tidb-2.0.4:

ln -s /usr/local/tidb-2.0.4 /usr/local/tidb

conf holds the configuration files, named tipd_<port>.conf, tikv_<port>.conf, tidb_<port>.conf.
tools holds TiDB's other additional tools.
Data directories: mkdir /data_db3/tidb/{db,kv,pd}/<port>

Node planning: 3 machines (192.168.100.73, 192.168.100.74, 192.168.100.75).
PD cluster: 192.168.100.73, 192.168.100.74, 192.168.100.75
TiKV data nodes: 192.168.100.73, 192.168.100.74, 192.168.100.75
TiDB node: 192.168.100.75

3. Configure the PD nodes (cluster mode): /usr/local/tidb-2.0.4/conf/tipd_4205.conf

client-urls = "http://192.168.100.75:4205"
name = "pd3"
data-dir = "/data_db3/tidb/pd/4205/"
peer-urls = "http://192.168.100.75:4206"
initial-cluster = "pd1=http://192.168.100.74:4202,pd2=http://192.168.100.73:4204,pd3=http://192.168.100.75:4206"
log-file = "/data_db3/tidb/pd/4205_run.log"

Description: the name specified must match the name used for this node in the initial-cluster entry ("pd3=").
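By symmetry with the initial-cluster line (and with the PD client URLs referenced later in the TiKV and TiDB configs), the configs for the other two PD nodes can be sketched; the data-dir and log-file paths below are my assumption, following the same naming pattern. For pd1 on 192.168.100.74, /usr/local/tidb-2.0.4/conf/tipd_4201.conf would look roughly like:

client-urls = "http://192.168.100.74:4201"
name = "pd1"
# data-dir/log-file paths assumed, following this article's naming pattern
data-dir = "/data_db3/tidb/pd/4201/"
peer-urls = "http://192.168.100.74:4202"
initial-cluster = "pd1=http://192.168.100.74:4202,pd2=http://192.168.100.73:4204,pd3=http://192.168.100.75:4206"
log-file = "/data_db3/tidb/pd/4201_run.log"

pd2 on 192.168.100.73 is analogous, with client port 4203 and peer port 4204.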
A PD node's config has three settings worth explaining. peer-urls is the port used for communication between the members of the PD cluster (health checks and so on). client-urls is the port that clients (TiKV, TiDB, pd-ctl) use to talk to PD. log-file specifies the log file; one rather strange rule is that the log file directory cannot be placed below the data-dir directory.

Start with the prepared config file:

/usr/local/tidb-2.0.4/bin/pd-server --config=/usr/local/tidb-2.0.4/conf/tipd_4205.conf

The first PD started initializes the cluster. If a PD node errors out and stops working, re-initializing it requires deleting the contents of that node's data-dir first. (The pd1 and pd2 nodes are installed the same way as pd3.) After the PD nodes are installed, log in to one of them with pd-ctl (via a client URL) to view cluster information:

./pd-ctl -u http://192.168.100.75:4205

`help` lists the available commands, and `help <command>` shows a command's options; see `help member` for more actions, for example: you can delete PD members, set leader priority, and so on. `config show` shows the cluster configuration, and `health` shows the health checks between the nodes.
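As an illustration, a short pd-ctl session using the commands named above might look like this (interactive mode; output omitted):

./pd-ctl -u http://192.168.100.75:4205
» member        # list PD members and the current leader
» config show   # current cluster configuration
» health        # health status of the PD nodes
» help member   # options of the member command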
4. Install the TiKV nodes. The TiKV nodes are the ones that actually hold the data, and most of the parameter tuning happens on TiKV. The main configuration, /usr/local/tidb-2.0.4/conf/tikv_4402.conf:

log-level = "info"
log-file = "/data_db3/tidb/kv/4402/run.log"

[server]
addr = "192.168.100.74:4402"
#labels = {zone = "zone1", host = "10074"}

[storage]
data-dir = "/data_db3/tidb/kv/4402"
scheduler-concurrency = 1024000
scheduler-worker-pool-size = 100

[pd]
# Specify the PD nodes; what is listed here are the PD client-urls.
endpoints = ["192.168.100.73:4203", "192.168.100.74:4201", "192.168.100.75:4205"]

[metric]
interval = "15s"
address = ""
job = "tikv"

[raftstore]
sync-log = false
region-max-size = "384MB"
region-split-size = "256MB"

[rocksdb]
max-background-jobs = 28
max-open-files = 409600
max-manifest-file-size = "20MB"
compaction-readahead-size = "20MB"

[rocksdb.defaultcf]
block-size = "64KB"
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 10
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"

[rocksdb.writecf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"

[raftdb]
max-open-files = 409600
compaction-readahead-size = "20MB"

[raftdb.defaultcf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"
block-cache-size = "10G"

[import]
import-dir = "/data_db3/tidb/kv/4402/import"
num-threads = 8
stream-channel-window = 128

(These parameter values are my personal choices, not the result of systematic tuning.)
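One practical note before starting anything: the data and log directories from the plan in section 2 have to exist on each machine. Whether the servers create missing directories themselves varies, so creating them up front is the safe route; a minimal sketch following this article's layout (run each line on the machine that hosts the instance):

# PD, e.g. pd3 on 192.168.100.75
mkdir -p /data_db3/tidb/pd/4205
# TiKV, e.g. the 4402 instance on 192.168.100.74
mkdir -p /data_db3/tidb/kv/4402
# TiDB, on 192.168.100.75
mkdir -p /data_db3/tidb/db/4001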
Note: when multiple TiKV instances are installed on a single machine, labels can be set so that replicas are not stored on the same machine:

tikv-server --labels zone=<zone>,rack=<rack>,host=<host>
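For example, if a second TiKV instance (hypothetical port 4403) were run on 192.168.100.74, both instances could carry the same host label, so PD treats them as one failure domain and avoids placing two replicas of the same Region on that machine. A sketch, reusing the label syntax from the config above:

# tikv_4402.conf
[server]
addr = "192.168.100.74:4402"
labels = {zone = "zone1", host = "10074"}

# tikv_4403.conf (hypothetical second instance on the same machine)
[server]
addr = "192.168.100.74:4403"
labels = {zone = "zone1", host = "10074"}

For the labels to influence scheduling, PD also has to be told the label hierarchy via its replication settings (location-labels, e.g. ["zone", "host"]).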
Start TiKV:

/usr/local/tidb-2.0.4/bin/tikv-server --config=/usr/local/tidb-2.0.4/conf/tikv_4402.conf

Once it boots without errors, you can go to the pd-ctl cluster management tool and check whether the TiKV instance has joined the cluster as a store (KV node):

./pd-ctl -u http://192.168.100.75:4205
» store
{
  "store": {
    "id": 30,
    "address": "192.168.100.74:4402",
    "state_name": "Up"
  },
  "status": {
    "capacity": "446 GiB",
    "available": "... GiB",
    "leader_count": 1301,
    "leader_weight": 1,
    "leader_score": 307618,
    "leader_size": 307618,
    "region_count": 2638,
    "region_weight": 1,
    "region_score": 1073677587.6132812,
    "region_size": 615726,
    "start_ts": "2018-06-26T10:33:17+08:00",
    "last_heartbeat_ts": "2018-07-17T11:27:17.074373767+08:00",
    "uptime": "504h54m0.074373767s"
  }
}
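To run the same check non-interactively (for example from a monitoring script), pd-ctl can execute a single command with the -d flag; a sketch, assuming this pd-ctl build supports -d and that jq is installed:

# Print "address state_name" for every store registered in PD.
/usr/local/tidb-2.0.4/bin/pd-ctl -u http://192.168.100.75:4205 -d store \
  | jq -r '.stores[] | "\(.store.address) \(.store.state_name)"'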
5. Configure the TiDB node: the TiDB node handles client connections and computation. It is generally started after the PD and TiKV nodes; otherwise it cannot start. The main parameters of the configuration file /usr/local/tidb-2.0.4/conf/tidb_4001.conf:

host = "0.0.0.0"
port = 4001
# Storage type: tikv.
store = "tikv"
# Specify the PD nodes; what is listed here are the PD client-urls.
path = "192.168.100.74:4201,192.168.100.73:4203,192.168.100.75:4205"
socket = ""
run-ddl = true
lease = "45s"
split-table = true
token-limit = 1000
oom-action = "log"
enable-streaming = false
lower-case-table-names = 2

[log]
level = "info"
log-file = "/data_db3/tidb/db/4001/tidb.log"
format = "text"
disable-timestamp = false
slow-query-file = ""
slow-threshold = 300
expensive-threshold = 10000
query-log-max-len = 2048

[log.file]
filename = ""
max-size = 300
max-days = 0
max-backups = 0
log-rotate = true

[security]
ssl-ca = ""
ssl-cert = ""
ssl-key = ""
cluster-ssl-ca = ""
cluster-ssl-cert = ""
cluster-ssl-key = ""

[status]
report-status = true
# Communication port for reporting TiDB status.
status-port = 10080
metrics-addr = ""
metrics-interval = 15

[performance]
max-procs = 0
stmt-count-limit = 5000
tcp-keep-alive = true
cross-join = true
stats-lease = "3s"
run-auto-analyze = true
feedback-probability = 0.05
query-feedback-limit = 1024
pseudo-estimate-ratio = 0.8

[proxy-protocol]
networks = ""
header-timeout = 5

[plan-cache]
enabled = false
capacity = 2560
shards = 256

[prepared-plan-cache]
enabled = false
capacity = 100

[opentracing]
enable = false
rpc-metrics = false

[opentracing.sampler]
type = "const"
param = 1.0
sampling-server-url = ""
max-operations = 0
sampling-refresh-interval = 0

[opentracing.reporter]
queue-size = 0
buffer-flush-interval = 0
log-spans = false
local-agent-host-port = ""

[tikv-client]
grpc-connection-count = 16
commit-timeout = "41s"

[txn-local-latches]
enabled = false
capacity = 1024000

[binlog]
binlog-socket = ""

Start TiDB:

/usr/local/tidb-2.0.4/bin/tidb-server --config=/usr/local/tidb-2.0.4/conf/tidb_4001.conf

One problem found at startup: the log-file parameter in the config does not take effect (at the moment). You can instead pass the log file on the command line:

/usr/local/tidb-2.0.4/bin/tidb-server --config=/usr/local/tidb-2.0.4/conf/tidb_4001.conf --log-file=/data_db3/tidb/db/4001/tidb.log

With TiDB installed, you can now look inside TiDB through the MySQL client tools. The commands are basically the same as MySQL's, the built-in views are similar to MySQL's, and the protocol is MySQL-compatible. In a word: use TiDB the way you use MySQL. A freshly initialized TiDB has a root account with no password:

mysql -h 192.168.100.75 -uroot -P 4001

The TiDB installation is complete. How do you synchronize MySQL data from various places into TiDB? Three tools: mydumper + loader + syncer.

For example, to replicate the databases "test1", "test2", "test3", "mytab1" in real time from MySQL (192.168.100.56:3345):

1. Use mydumper + loader to import the existing data and obtain the binlog position (see the sketch at the end of this section).

2. Write the syncer configuration file, 100.56_3345.toml, with the basic information: synchronization rules, filter rules, and so on:

log-level = "info"
server-id = 101
# Points to the file recording the binlog position to synchronize from.
meta = "/usr/local/tidb-2.0.4/tools/syncer/100.56_3345.meta"
worker-count = 16
batch = 10
status-addr = "127.0.0.1:10097"
skip-ddls = ["^drop\\s"]
replicate-do-db = ["test1", "test2", "test3", "mytab1"]

# Source MySQL connection
[from]
host = "192.168.100.56"
user = "tidbrepl"
password = "xxxxxx"
port = 3345

# TiDB connection
[to]
host = "192.168.100.75"
user = "root"
password = ""
port = 4001

The /usr/local/tidb-2.0.4/tools/syncer/100.56_3345.meta file:

binlog-name = "mysql-bin.000089"
binlog-pos = 1070520171
# The GTID can be left empty at first; syncer refreshes this file continuously while replicating and fills in the GTID information.
binlog-gtid = ""

Start the synchronization:

/usr/local/tidb-2.0.4/bin/syncer -config /usr/local/tidb-2.0.4/tools/syncer/100.56_3345.toml >> /tmp/logfilexxxxx

It is recommended to save the log output to a file (/tmp/logfilexxxxx here) at startup; the binlog positions can be retrieved from it if replication fails.
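Step 1 above is only named in this article, so here is a sketch of what it might look like, using the mydumper and loader binaries shipped in the tools directory; the thread counts and dump directory are illustrative values I chose, and only one database is shown:

# Export from the source MySQL. mydumper records the binlog position
# in the "metadata" file it writes into the dump directory; that is
# where the binlog-name/binlog-pos for 100.56_3345.meta come from.
./mydumper -h 192.168.100.56 -P 3345 -u tidbrepl -p xxxxxx \
    -B test1 -t 16 -o /data_db3/dump_test1

# Import the dump into TiDB.
./loader -h 192.168.100.75 -P 4001 -u root -d /data_db3/dump_test1

Repeat the export/import for the remaining databases, then start syncer from the recorded position as shown above.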