MongoDB Cluster Solution: Shard Technology

MongoDB is a NoSQL database built on distributed file storage and written in C++. It is mainly designed to solve the problem of access efficiency for massive data, and provides a scalable, high-performance storage solution for web applications.
How MongoDB clusters are implemented:
1. Replica set: simply put, the servers in the cluster hold multiple copies of the data, so that if the primary node goes down, a standby node can continue to provide service. The prerequisite is that the standby's data stays consistent with the primary's. For example:
[Figure: replica set topology with MongoDB (M), MongoDB (S), and MongoDB (A) nodes]
MongoDB (M) is the primary node, MongoDB (S) is the secondary node, and MongoDB (A) is the arbiter node.
The M node stores the data and handles all write operations. By default the S node does not serve requests, but it can be configured to serve queries, which takes read pressure off the primary. The A node is a special node: it stores no data and serves no requests; its only role is in elections. When the primary goes down, the A node helps elect an S node to be promoted to primary. A sketch of enabling secondary reads follows below.
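As a minimal illustration (my own sketch, assuming the shard1 replica set built later in this article), reads can be enabled on a secondary from the 3.2 mongo shell:

./mongo --port 11001                        # connect to a secondary member of shard1
shard1:SECONDARY> rs.slaveOk()              // allow reads on this secondary for this connection
shard1:SECONDARY> db.table1.find().count()  // queries now succeed instead of failing with "not master"

Applications achieve the same effect by setting a secondary-capable read preference in their driver.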
2. Master-slave: similar to MySQL master-slave replication; configuration is very simple, so I will not say more about it here.
3. Sharding: like a replica set, it also uses arbiter nodes, but sharding additionally requires config servers and routing nodes. For example:
[Figure: sharded cluster topology with routing (mongos), config server, and shard nodes]
MongoDB (R) is the routing node (mongos), the entry point for all cluster requests. Every request is coordinated by mongos: it acts as a request dispatcher, forwarding each data request to the appropriate shard server.
MongoDB (C1) is the config server. It stores the cluster's metadata, which records the state and organization of the cluster, including which data chunks each shard holds and the key range of each chunk. mongos caches this information and uses it to route reads and writes.
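For illustration (a sketch against the cluster built below, not a command from the original article), this chunk metadata can be inspected through mongos once sharding is running:

./mongo --port 10000      # connect to mongos
mongos> use config
mongos> db.chunks.find({}, { ns: 1, min: 1, max: 1, shard: 1 }).limit(3)
// each document in config.chunks records a chunk's namespace, key range, and owning shard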
This article mainly implements the third mode, sharding. The deployment architecture is as follows:
[Figure: deployment diagram of the three-node sharded cluster]
Deployment environment:
Host Name    IP
Node1        192.168.1.109
Node2        192.168.1.107
Node3        192.168.1.110
First, plan the port for each service:
config server: 11000    route (mongos): 10000    shard1: 11001    shard2: 11002    shard3: 11003
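Before starting any processes, it may help to confirm these ports are unused on every node (my own quick check, not part of the original write-up):

netstat -tnlp 2>/dev/null | grep -E '10000|11000|11001|11002|11003'   # no output means all planned ports are free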
Second, create the corresponding directories on Node1, Node2, and Node3 (the following operations are performed as the mongodb user).
[root@node1 ~]# cat /etc/passwd | grep mongodb
mongodb:x:10001:10001::/data/mongodb:/bin/bash
[root@node1 ~]# su - mongodb
[mongodb@node1 ~]$ pwd
/data/mongodb
[mongodb@node1 ~]$ mkdir -p config/{data,log}   ## config server data and log paths
[mongodb@node1 ~]$ mkdir -p mongos/log          ## router log path
[mongodb@node1 ~]$ mkdir -p shard1/{data,log}   ## data and log paths for replica set shard1
[mongodb@node1 ~]$ mkdir -p shard2/{data,log}
[mongodb@node1 ~]$ mkdir -p shard3/{data,log}
[mongodb@node1 ~]$ tar -xf mongodb-linux-x86_64-rhel62-3.2.7.tgz   ## version 3.2.7 is used here; the official site currently offers 3.2.9
[mongodb@node1 ~]$ ll
drwxr-xr-x 4 mongodb dev     4096 Sep 30 20:50 config
drwxr-xr-x 3 mongodb dev     4096 Sep 30 20:55 mongodb-linux-x86_64-rhel62-3.2.7
-rw-r--r-- 1 mongodb dev 74938432 Sep 30 20:40 mongodb-linux-x86_64-rhel62-3.2.7.tgz
drwxr-xr-x 3 mongodb dev     4096 Sep 30 20:50 mongos
drwxr-xr-x 4 mongodb dev     4096 Sep 30 20:50 shard1
drwxr-xr-x 4 mongodb dev     4096 Sep 30 20:51 shard2
drwxr-xr-x 4 mongodb dev     4096 Sep 30 20:51 shard3
### Node2 and Node3 are prepared the same way
Third, start the config server on Node1, Node2, and Node3.
[mongodb@node1 bin]$ pwd
/data/mongodb/mongodb-linux-x86_64-rhel62-3.2.7/bin
[mongodb@node1 bin]$ ./mongod --configsvr --port 11000 --dbpath /data/mongodb/config/data/ --logpath /data/mongodb/config/log/config.log --fork   ## "--fork" runs the process in the background; execute on node1
[mongodb@node2 bin]$ ./mongod --configsvr --port 11000 --dbpath /data/mongodb/config/data/ --logpath /data/mongodb/config/log/config.log --fork   ## execute on node2
[mongodb@node3 bin]$ ./mongod --configsvr --port 11000 --dbpath /data/mongodb/config/data/ --logpath /data/mongodb/config/log/config.log --fork   ## execute on node3
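To confirm each config server is answering (my own check, not one of the original steps), you can ping it from the shell:

./mongo --port 11000 --eval 'db.adminCommand({ ping: 1 })'   # prints { "ok" : 1 } when the config server is up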
Fourth, start the mongos router on Node1, Node2, and Node3.
[mongodb@node1 bin]$ ./mongos --configdb 192.168.1.109:11000,192.168.1.107:11000,192.168.1.110:11000 --port 10000 --logpath /data/mongodb/mongos/log/mongos.log --fork   ## start the router on node1
[mongodb@node2 bin]$ ./mongos --configdb 192.168.1.109:11000,192.168.1.107:11000,192.168.1.110:11000 --port 10000 --logpath /data/mongodb/mongos/log/mongos.log --fork   ## start the router on node2
[mongodb@node3 bin]$ ./mongos --configdb 192.168.1.109:11000,192.168.1.107:11000,192.168.1.110:11000 --port 10000 --logpath /data/mongodb/mongos/log/mongos.log --fork   ## start the router on node3
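A quick way to verify you are talking to a mongos rather than a plain mongod (again my own check, not from the original) is the isMaster command, which on a router reports the marker "isdbgrid":

./mongo --port 10000 --eval 'db.isMaster().msg'   # prints "isdbgrid" when the target is a mongos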
Fifth, set up the shards and start them on Node1, Node2, and Node3.
./mongod --shardsvr --replSet shard1 --port 11001 --dbpath /data/mongodb/shard1/data --logpath /data/mongodb/shard1/log/shard1.log --fork --oplogSize 10240 --logappend   ## set up shard1
./mongod --shardsvr --replSet shard2 --port 11002 --dbpath /data/mongodb/shard2/data --logpath /data/mongodb/shard2/log/shard2.log --fork --oplogSize 10240 --logappend   ## set up shard2
./mongod --shardsvr --replSet shard3 --port 11003 --dbpath /data/mongodb/shard3/data --logpath /data/mongodb/shard3/log/shard3.log --fork --oplogSize 10240 --logappend   ## set up shard3
### run the same three commands on node2 and node3
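All of these flags can equally be kept in a YAML config file and passed with --config, which is easier to maintain across three nodes. A sketch for shard1, equivalent to the command line above (the file path is my own choice, not from the original):

# /data/mongodb/shard1.conf
systemLog:
  destination: file
  path: /data/mongodb/shard1/log/shard1.log
  logAppend: true
processManagement:
  fork: true
net:
  port: 11001
storage:
  dbPath: /data/mongodb/shard1/data
replication:
  replSetName: shard1
  oplogSizeMB: 10240
sharding:
  clusterRole: shardsvr

It would then be started with ./mongod --config /data/mongodb/shard1.conf.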
Sixth, log on to any one of the machines and configure each shard's replica set through the corresponding port.
1. Configure shard1
[mongodb@node1 bin]$ ./mongo --port 11001   ## connect to replica set shard1
> use admin
> config = { _id: "shard1", members: [
      { _id: 0, host: "192.168.1.109:11001" },
      { _id: 1, host: "192.168.1.107:11001" },
      { _id: 2, host: "192.168.1.110:11001", arbiterOnly: true } ] }   ## "arbiterOnly" marks the arbiter (quorum) node
> rs.initiate(config);   ## initialize shard1
2. Configure shard2
[mongodb@node1 bin]$ ./mongo --port 11002   ## connect to replica set shard2
> use admin
> config2 = { _id: "shard2", members: [
      { _id: 0, host: "192.168.1.109:11002" },
      { _id: 1, host: "192.168.1.107:11002", arbiterOnly: true },
      { _id: 2, host: "192.168.1.110:11002" } ] }
> rs.initiate(config2);   ## initialize shard2
3. Configure shard3
[mongodb@node2 bin]$ ./mongo --port 11003   ## connect to replica set shard3
> use admin
> config3 = { _id: "shard3", members: [
      { _id: 0, host: "192.168.1.109:11003", arbiterOnly: true },
      { _id: 1, host: "192.168.1.107:11003" },
      { _id: 2, host: "192.168.1.110:11003" } ] }
> rs.initiate(config3);   ## initialize shard3
### Note: a shard cannot be configured from the machine that serves as its arbiter, or rs.initiate() will return an error; the initialization must be run from one of the other nodes.
### shard1 and shard2 were configured on Node1 because their arbiters are Node3 and Node2 respectively. Since Node1 is shard3's arbiter, shard3 must be configured from Node2 or Node3.
Seventh, after the replica sets are configured, they must also be registered with the routers, because all requests go through mongos, which then consults the config servers.
[mongodb@node1 bin]$ ./mongo --port 10000   ## connect to mongos
mongos> use admin
mongos> db.runCommand({ addshard: "shard1/192.168.1.109:11001,192.168.1.107:11001,192.168.1.110:11001" });   ## register replica set shard1 with the router
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({ addshard: "shard2/192.168.1.109:11002,192.168.1.107:11002,192.168.1.110:11002" });   ## register replica set shard2 with the router
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> db.runCommand({ addshard: "shard3/192.168.1.109:11003,192.168.1.107:11003,192.168.1.110:11003" });   ## register replica set shard3 with the router
{ "shardAdded" : "shard3", "ok" : 1 }
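The shell also offers the sh.addShard() helper, which wraps the same command; for a replica-set shard it is enough to name one member and mongos discovers the rest. An equivalent alternative to the runCommand form used above, not what the original used:

mongos> sh.addShard("shard1/192.168.1.109:11001")   // same effect as the addshard runCommand form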
Eighth, verify the configuration.
1. Connect to mongos and check the sharding status:
[mongodb@node1 bin]$ ./mongo --port 10000
mongos> use admin
mongos> sh.status()
  shards:
    { "_id" : "shard1", "host" : "shard1/192.168.1.107:11001,192.168.1.109:11001" }
    { "_id" : "shard2", "host" : "shard2/192.168.1.109:11002,192.168.1.110:11002" }
    { "_id" : "shard3", "host" : "shard3/192.168.1.107:11003,192.168.1.110:11003" }
2. Connect to each shard's port and check the replica set status:
[mongodb@node1 bin]$ ./mongo --port 11001
shard1:PRIMARY> rs.status()
    ...
    "name" : "192.168.1.109:11001",
    "health" : 1,
    "state" : 1,
    "stateStr" : "PRIMARY",
    ...
    "name" : "192.168.1.107:11001",
    "health" : 1,
    "state" : 2,
    "stateStr" : "SECONDARY",
    ...
    "name" : "192.168.1.110:11001",
    "health" : 1,
    "state" : 7,
    "stateStr" : "ARBITER",
    ...
## the other shards can be checked the same way
Ninth, insert data and test that it is automatically sharded.
[mongodb@node1 bin]$ ./mongo --port 10000
mongos> use admin
mongos> db.runCommand({ enablesharding: "testdb" })   ## create the database and enable sharding for "testdb"
{ "ok" : 1 }
mongos> db.runCommand({ shardcollection: "testdb.table1", key: { id: 1, _id: 1 } })   ## MongoDB supports many kinds of shard keys; here a key on the id field is created, so data in the "table1" collection of "testdb" is partitioned by id
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> use testdb;
switched to db testdb
mongos> for (var i = 1; i < 100000; i++) db.table1.save({ id: i, "test1": "testval1" })   ## insert ~100,000 documents into "table1" of "testdb"
[mongodb@node2 bin]$ ./mongo --port 11003   ## on node2, connect to shard3
shard3:PRIMARY> use testdb
switched to db testdb
shard3:PRIMARY> db.table1.find()
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1ce"), "id" : 1, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1cf"), "id" : 2, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d1"), "id" : 4, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d2"), "id" : 5, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d6"), "id" : 9, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d9"), "id" : 12, "test1" : "testval1" }
...
[mongodb@node3 bin]$ ./mongo --port 11002   ## on node3, connect to shard2
shard2:PRIMARY> use testdb
switched to db testdb
shard2:PRIMARY> db.table1.find()
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d4"), "id" : 7, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d8"), "id" : 11, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1da"), "id" : 13, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1df"), "id" : 18, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1e2"), "id" : 21, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1e9"), "id" : 28, "test1" : "testval1" }
...
### Node1 can be checked the same way; each shard holds a different subset of the ids
### Note: query the data on the primary node of each shard
Summary of issues:
1. The clocks of all servers in the cluster must be synchronized, otherwise starting mongos fails with "error number 5".
2. Each replica set here is configured with exactly three nodes (primary, secondary, arbiter), the minimal fault-tolerant topology described in the official documentation; replica sets do support more members, but this deployment does not use them.
3. When deploying the cluster, if you do not want to use an arbiter, you can control which member becomes primary and which remain standby by setting member priorities in the replica set, as sketched below.
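A minimal sketch of such a priority-based configuration (my own illustration, reusing shard1's hosts; the priority values are arbitrary):

> config = { _id: "shard1", members: [
      { _id: 0, host: "192.168.1.109:11001", priority: 2 },     // preferred primary
      { _id: 1, host: "192.168.1.107:11001", priority: 1 },
      { _id: 2, host: "192.168.1.110:11001", priority: 0.5 } ] }  // becomes primary only as a last resort
> rs.initiate(config)

All three members then hold data and vote; the highest-priority reachable member is elected primary.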
This article is from the "WTC" blog; please keep the original source: http://wangtianci.blog.51cto.com/11265133/1858287