I use MongoDB at work and have been studying it on my own, so I've summed up my notes here. I still run into problems along the way, and I'd welcome pointers from more experienced readers. Just a few notes.
A replica set keeps multiple copies of the data to provide fault tolerance: even if one replica goes down, the other replicas still hold the data.
If the primary node goes down, the whole cluster fails over automatically.
When the replica nodes detect, via the heartbeat mechanism, that the primary has gone down, an election is triggered inside the cluster and a new primary server is chosen automatically.
Let's take a look at the architecture diagram of a MongoDB replica set (note: this picture is taken from the web; it's better than what I could draw):
After the primary server goes down, the architecture looks like this:
Here are the steps of my own deployment.
I'm using VMware 11 virtual machines.
The OS is CentOS 6.3 x86_64.
The database version is mongodb-linux-x86_64-2.0.4.
Three virtual machines are used,
with IPs 192.168.216.128, 192.168.216.129, and 192.168.216.130,
and ports 10001, 10002, and 10003 respectively.
192.168.216.128 acts as the replica set primary node; the other two act as replica (secondary) nodes.
Start deployment
1. On each system, use su to switch to the appropriate user identity, for example:
2. Be sure to turn off the firewall on each system. During replica set configuration the three systems need to reach each other; if the firewall is left on, the configuration will fail. This is a problem I ran into myself.
Command: service iptables stop
3. Upload the MongoDB package you downloaded to each system and extract it.
4. Create the replica set data folder on each system.
If this command fails, check your permissions:
mkdir -p replset/data
5. Start mongod on each system, and use the ps aux | grep mongod command to confirm it is running.
mongodb-linux-x86_64-2.0.4/bin/mongod --replSet application --dbpath replset/data/ --port 10001 --logpath replset/replset.log --fork
mongodb-linux-x86_64-2.0.4/bin/mongod --replSet application --dbpath replset/data/ --port 10002 --logpath replset/replset.log --fork
mongodb-linux-x86_64-2.0.4/bin/mongod --replSet application --dbpath replset/data/ --port 10003 --logpath replset/replset.log --fork
Here is a screenshot from one of the systems:
6. Initialize the replica set.
Log in to the mongo shell on one of the systems.
Command: mongodb-linux-x86_64-2.0.4/bin/mongo 192.168.216.128:10001
Switch to the admin database (use admin), for example:
Define the replica set configuration variable. Note that _id: "application" here must match the --replSet application parameter used when starting mongod.
config = {_id: "application", members: [{_id: 0, host: "192.168.216.128:10001"}, {_id: 1, host: "192.168.216.129:10002"}, {_id: 2, host: "192.168.216.130:10003"}]}
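A typo in this document (a mismatched _id, a duplicate member _id, a malformed host) is easy to miss by eye. Here is a small Node.js sketch of some sanity checks; the function name and the validation rules are my own assumptions for illustration, not MongoDB's actual validation:

```javascript
// Sketch: minimal sanity checks for a replica set config document.
// These rules are assumptions for illustration, not MongoDB's own checks.
function checkReplSetConfig(config, replSetName) {
  if (config._id !== replSetName) {
    throw new Error("_id must match the --replSet name: " + replSetName);
  }
  const ids = config.members.map(m => m._id);
  if (new Set(ids).size !== ids.length) {
    throw new Error("member _id values must be unique");
  }
  config.members.forEach(m => {
    if (!/^[\d.]+:\d+$/.test(m.host)) {
      throw new Error("host must look like ip:port, got " + m.host);
    }
  });
  return true;
}

const config = {
  _id: "application",
  members: [
    { _id: 0, host: "192.168.216.128:10001" },
    { _id: 1, host: "192.168.216.129:10002" },
    { _id: 2, host: "192.168.216.130:10003" }
  ]
};

console.log(checkReplSetConfig(config, "application")); // prints true
```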
Initialize the replica set with this configuration.
Command: rs.initiate(config);
Output:
{
	"info" : "Config now saved locally.  Should come online in about a minute.",
	"ok" : 1
}
If the output looks like the following instead, it means 192.168.216.129:10002 cannot be reached; check whether mongod on port 10002 of that system has been started.
rs.initiate(config);
{
	"errmsg" : "couldn't initiate : need all members up to initiate, not ok : 192.168.216.129:10002",
	"ok" : 0
}
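The two result documents above differ only in the ok field and the message key. A tiny Node.js sketch that classifies such a result (the helper name and return strings are my own; the sample documents mirror the outputs shown above):

```javascript
// Sketch: classify the document returned by rs.initiate(config).
function initiateStatus(result) {
  if (result.ok === 1) {
    return "initiated";
  }
  return "failed: " + (result.errmsg || "unknown error");
}

// Sample documents mirroring the success and failure outputs above.
const success = {
  info: "Config now saved locally.  Should come online in about a minute.",
  ok: 1
};
const failure = {
  errmsg: "couldn't initiate : need all members up to initiate, not ok : 192.168.216.129:10002",
  ok: 0
};

console.log(initiateStatus(success)); // prints "initiated"
console.log(initiateStatus(failure)); // prints the errmsg prefixed with "failed: "
```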
To view the status of the cluster nodes:
Command:
rs.status();
The output looks like the following.
At this point the whole replica set has been built successfully.
7. Start testing
Connect to the database on the primary node.
Command: mongodb-linux-x86_64-2.0.4/bin/mongo 192.168.216.128:10001
Create the test database.
Command: use test;
Insert a document into the testdb collection.
Command: db.testdb.insert({"mongodb":"mongodbtest"})
Then connect to mongodb on the replica nodes 192.168.216.129 and 192.168.216.130 to check whether the data has been replicated.
For example, on 130 I used the following command:
mongodb-linux-x86_64-2.0.4/bin/mongo 192.168.216.130:10003
After connecting, enter the following command to switch to the test database:
use test
Then run db.testdb.find() to query the collection.
It reports the following error:
error: { "$err" : "not master and slaveok=false", "code" : 13435 }
The fix is as follows:
enter rs.slaveOk();
then run the same query again.
The output is as follows:
{ "_id" : ObjectId("55bb3495ce19b28f7ea77e81"), "mongodb" : "mongodbtest" }
A screenshot of the operations:
8. Test the replica set failover, i.e. after the primary node goes down, a secondary node becomes the new primary.
First stop the primary node, 128.
I simply killed the process.
On the primary node's system:
Command: ps aux | grep mongod — find the PID of the running mongod, then kill it.
The PID I found was 25305.
Command: kill -9 25305
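The step above — scan the ps aux output for the mongod line and take the second column as the PID — can be sketched as follows. This is a Node.js illustration only (the sample line is invented; on the box itself the plain ps aux | grep mongod shown above is all you need):

```javascript
// Sketch: extract the PID (second whitespace-separated column) from the
// first "ps aux" line that mentions mongod and is not the grep itself.
function mongodPid(psOutput) {
  for (const line of psOutput.split("\n")) {
    if (line.includes("mongod") && !line.includes("grep")) {
      return parseInt(line.trim().split(/\s+/)[1], 10);
    }
  }
  return null;
}

// Invented sample line; 25305 is the PID from the walkthrough above.
const sample =
  "root     25305  0.3  1.2 123456 7890 ?  Sl  09:00  0:01 " +
  "mongodb-linux-x86_64-2.0.4/bin/mongod --replSet application --port 10001";

console.log(mongodPid(sample)); // prints 25305
```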
Connect to MongoDB on the other two systems; one of them will have become the primary node. You can tell which one it is because its shell prompt shows PRIMARY.
See the example below.
Viewing the status of the whole cluster, you can see that 128 is unreachable.
mongodb-linux-x86_64-2.0.4/bin/mongo 192.168.216.129:10002
rs.status();
The output is as follows
{
	"set" : "application",
	"date" : ISODate("2015-07-31T09:04:00Z"),
	"myState" : 1,
	"syncingTo" : "192.168.216.128:10001",
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.216.128:10001",
			"health" : 0,
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",
			"uptime" : 0,
			"optime" : {
				"t" : 1438332053000,
				"i" : 1
			},
			"optimeDate" : ISODate("2015-07-31T08:40:53Z"),
			"lastHeartbeat" : ISODate("2015-07-31T09:00:17Z"),
			"pingMs" : 0,
			"errmsg" : "socket exception"
		},
		{
			"_id" : 1,
			"name" : "192.168.216.129:10002",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"optime" : {
				"t" : 1438332053000,
				"i" : 1
			},
			"optimeDate" : ISODate("2015-07-31T08:40:53Z"),
			"self" : true
		},
		{
			"_id" : 2,
			"name" : "192.168.216.130:10003",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 2044,
			"optime" : {
				"t" : 1438332053000,
				"i" : 1
			},
			"optimeDate" : ISODate("2015-07-31T08:40:53Z"),
			"lastHeartbeat" : ISODate("2015-07-31T09:03:59Z"),
			"pingMs" : 0
		}
	],
	"ok" : 1
}
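Reading the current primary out of a document shaped like the rs.status() output above can be automated. A Node.js sketch (the helper name is my own; the sample document is a trimmed version of the output shown):

```javascript
// Sketch: find the current primary in an rs.status()-shaped document
// by looking for the member whose stateStr is "PRIMARY".
function currentPrimary(status) {
  const primary = (status.members || []).find(m => m.stateStr === "PRIMARY");
  return primary ? primary.name : null;
}

// Trimmed-down version of the rs.status() output shown above.
const status = {
  set: "application",
  members: [
    { _id: 0, name: "192.168.216.128:10001", health: 0, stateStr: "(not reachable/healthy)" },
    { _id: 1, name: "192.168.216.129:10002", health: 1, stateStr: "PRIMARY" },
    { _id: 2, name: "192.168.216.130:10003", health: 1, stateStr: "SECONDARY" }
  ],
  ok: 1
};

console.log(currentPrimary(status)); // prints 192.168.216.129:10002
```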
If you restart the mongod on 128, you will find that it rejoins as a secondary.
I won't paste the output for this; you can try it yourself.
The replica set is now complete.
Copyright notice: this is an original post by the author; please do not reproduce it without permission.
Build a highly available MongoDB cluster: replica sets