Reference: http://wengzhijuan12.blog.163.com/blog/static/3622414520137104257376/
http://blog.csdn.net/shmnh/article/details/41976451
A replica set is an upgraded form of master-slave replication: it adds automatic failover, and the secondary nodes can serve reads.
One, node types:
a) Primary node: supports reads and writes
b) Secondary node: supports reads (must be enabled explicitly)
c) Arbiter node: participates in elections only (it stores no data)
Second, the experiment
Primary node: 192.168.129.47
Secondary node: 192.168.129.48
Arbiter node: 192.168.129.49
1. The primary node is configured as follows:
vi /etc/rc.local
rm /usr/mongodb/log/mongodb.log
/usr/mongodb/bin/mongod --dbpath=/usr/mongodb/data/ --logpath=/usr/mongodb/log/mongodb.log --port 27017 --replSet test/192.168.129.48:27017 --maxConns=2000 --fork --logappend
The secondary node is configured as follows:
vi /etc/rc.local
rm /usr/mongodb/log/mongodb.log
/usr/mongodb/bin/mongod --dbpath=/usr/mongodb/data/ --logpath=/usr/mongodb/log/mongodb.log --port 27017 --replSet test/192.168.129.47:27017 --maxConns=2000 --fork --logappend
The arbiter node is configured as follows:
vi /etc/rc.local
rm /usr/mongodb/log/mongodb.log
/usr/mongodb/bin/mongod --dbpath=/usr/mongodb/data/ --logpath=/usr/mongodb/log/mongodb.log --port 27017 --replSet test/192.168.129.47:27017,192.168.129.48:27017 --fork --logappend
Start the mongod service after the configuration is complete.
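The three rc.local entries above differ only in the peer list passed to --replSet. A small sketch (a hypothetical helper, not part of MongoDB) that builds such a command line from a node's peer list:

```javascript
// Build a mongod start-up command like the rc.local entries above.
// buildMongodCmd is a hypothetical helper for illustration only.
function buildMongodCmd(peers) {
    return [
        '/usr/mongodb/bin/mongod',
        '--dbpath=/usr/mongodb/data/',
        '--logpath=/usr/mongodb/log/mongodb.log',
        '--port 27017',
        '--replSet test/' + peers.join(','),
        '--fork',
        '--logappend'
    ].join(' ');
}

// The arbiter lists both data-bearing nodes as its peers:
var cmd = buildMongodCmd(['192.168.129.47:27017', '192.168.129.48:27017']);
console.log(cmd);
```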
2. After startup, execute on the primary node (192.168.129.47):
use admin
db.runCommand({"replSetInitiate": {
    "_id": "test",
    "members": [
        { "_id": 0, "host": "192.168.129.47:27017" },
        { "_id": 1, "host": "192.168.129.48:27017" },
        { "_id": 2, "host": "192.168.129.49:27017" }
    ]}})
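The initiation document must give every member a unique _id and a host:port string, or the command is rejected. A quick sanity-check sketch (validateReplSetConfig is a hypothetical helper, not a mongo shell command):

```javascript
// Hypothetical validator for a replSetInitiate-style config document.
function validateReplSetConfig(cfg) {
    if (!cfg._id || !Array.isArray(cfg.members)) return false;
    var seen = {};
    return cfg.members.every(function (m) {
        if (seen[m._id]) return false;    // member _id values must be unique
        seen[m._id] = true;
        return /^.+:\d+$/.test(m.host);   // host must be "address:port"
    });
}

var ok = validateReplSetConfig({
    _id: 'test',
    members: [
        { _id: 0, host: '192.168.129.47:27017' },
        { _id: 1, host: '192.168.129.48:27017' },
        { _id: 2, host: '192.168.129.49:27017' }
    ]
});
console.log(ok); // true for the config used above
```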
# Check the replica set status
rs.status()
rs.isMaster()
rs.conf()
# Check the secondary status
db.printSlaveReplicationInfo()
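db.printSlaveReplicationInfo() reports how far each secondary's optime trails the primary's; the lag it prints is essentially the difference between the two timestamps. A toy sketch of that calculation (illustrative only, not driver code):

```javascript
// Replication lag in seconds: primary optime minus secondary optime.
// A secondary can never be "ahead", so clamp at zero.
function replicationLagSecs(primaryOptimeSecs, secondaryOptimeSecs) {
    return Math.max(0, primaryOptimeSecs - secondaryOptimeSecs);
}

var lag = replicationLagSecs(1700000060, 1700000055);
console.log(lag + ' secs behind the primary'); // 5 secs behind
```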
# Allow queries on the secondary
db.getMongo().setSlaveOk()
rs.slaveOk()
# Add a replica set node
1. Lock an existing secondary and flush its cached data to disk:
use admin
db.runCommand({"fsync": 1, "lock": 1})
2. Copy the locked secondary's data files to the new secondary's data directory.
3. Unlock the secondary:
db.$cmd.sys.unlock.findOne()
db.currentOp()
4. Start the new secondary:
./mongod --replSet rs1 --keyFile /data/set/key/r4 --fork --port 28014 --dbpath /data/set/r4 --logpath=/data/set/log/r4.log --logappend --fastsync
5. rs.add("localhost:28014")
6. Delete a node:
Run rs.remove("ip:port") on the primary.
By default, the secondary nodes in a replica set are not readable. In applications that write little and read a lot, a replica set can be used for read-write separation: by setting slaveOk on the connection, or by directing queries to the secondaries, the read load is shared and the primary handles only writes.
If you access MongoDB through the shell and try to query on a secondary, the following error appears:
SECONDARY> db.fs.files.find()
error: { "$err" : "not master and slaveok=false", "code" : 13435 }
There are two ways to enable queries on the secondary. The first method: db.getMongo().setSlaveOk(); the second method: rs.slaveOk(). The drawback of either approach is that the next time you enter the instance through the mongo shell, the query will fail again. This can be fixed as follows:
vi ~/.mongorc.js
Add the line rs.slaveOk(), so that every session started through the mongo command can query the secondary.
If the secondary is accessed through Java instead, the following exception is thrown:
com.mongodb.MongoException: not talking to master and retries used up
There are several ways to solve this problem.
The first method: in Java code, call dbFactory.getDb().slaveOk();
The second method: in Java code, call
dbFactory.getDb().setReadPreference(ReadPreference.secondaryPreferred()); // prefer reading from a secondary; if no secondary is reachable, read from the primary
or
dbFactory.getDb().setReadPreference(ReadPreference.secondary()); // read only from secondaries; queries fail if no secondary is reachable
The third method: add slave-ok="true" to the mongo configuration, which also allows reading directly from a secondary:
<mongo:mongo id="mongo" host="${mongodb.host}" port="${mongodb.port}">
    <mongo:options slave-ok="true"/>
</mongo:mongo>
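The difference between the two read preferences above boils down to a fallback rule: secondaryPreferred falls back to the primary when no secondary is reachable, while plain secondary simply fails. A minimal sketch of that selection rule (hypothetical, not the driver's actual implementation):

```javascript
// Pick a server for a read according to the preference.
// members: [{ host, state }] where state is 'PRIMARY' or 'SECONDARY'.
function pickForRead(members, preference) {
    var secondaries = members.filter(function (m) { return m.state === 'SECONDARY'; });
    if (secondaries.length > 0) return secondaries[0];
    // 'secondaryPreferred' falls back to the primary; plain 'secondary' fails.
    if (preference === 'secondaryPreferred') {
        return members.filter(function (m) { return m.state === 'PRIMARY'; })[0] || null;
    }
    return null;
}

var primaryOnly = [{ host: 'a:27017', state: 'PRIMARY' }];
console.log(pickForRead(primaryOnly, 'secondaryPreferred')); // falls back to the primary
console.log(pickForRead(primaryOnly, 'secondary'));          // null: no secondary reachable
```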
With the rise of Web 2.0, applications with high concurrency and large data volumes have become more and more common, and traditional relational databases are weak in this area. Every spear meets its shield: the appearance of in-memory and NoSQL databases makes up for the shortcomings of the traditional relational DB. The currently popular options include Redis, Memcached, and MongoDB. The first two store data as key-value pairs, while MongoDB keeps some characteristics of relational tables and supports indexes. For scenarios with large data volumes and some relationships between the data, MongoDB is a good choice.
The replica set is MongoDB's replication cluster scheme, and it is superior to the traditional master-slave mode of databases. In traditional master-slave replication, the master handles reads and writes while the slave only synchronizes data from the master; once the master goes down, the slaves are useless, which is poor for disaster recovery. MongoDB's replica set mechanism fixes this flaw.
Replica set roles:
primary (master node: serves writes and reads), secondary (standby node: serves reads only), arbiter (quorum node: stores no data and only takes part in elections).
Process: clients read and write through the primary, and the secondaries synchronize data from the primary. When the primary goes down, the arbiter helps elect a healthy secondary from the remaining nodes to replace it, typically within about 10 seconds, mitigating the disaster. The arbiter itself stores no data and only monitors the state of the primary and secondaries in the cluster (if the arbiter itself goes down, automatic failover is lost, which is the one weak point of this setup). Secondaries only serve reads, not writes, so a project's query requests can connect to the secondary nodes and greatly reduce the load on the primary.
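Failover only succeeds while a majority of voting members can still see each other, which is why the three-node setup above (primary + secondary + arbiter) survives the loss of any single node. A back-of-the-envelope sketch of the majority rule:

```javascript
// An election can succeed only if strictly more than half of the
// voting members (primary, secondary, and arbiter all vote) are reachable.
function canElectPrimary(votingMembers, reachableMembers) {
    return reachableMembers > votingMembers / 2;
}

console.log(canElectPrimary(3, 2)); // true: one node down, failover proceeds
console.log(canElectPrimary(3, 1)); // false: two nodes down, no new primary
```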
The following is a flowchart of the replica set:
Now that we understand the replica set principle, you may ask: when programming against so many nodes (primary and secondaries), we must write to the primary; if the primary goes down, how does the program detect it and find the new primary?
Don't worry, MongoDB has that covered. MongoDB provides driver support for all major languages; just call the replica set interface as shown in the instructions below. The example uses Node.js:
var Db = require('mongodb').Db,
    Server = require('mongodb').Server,
    ReplSet = require('mongodb').ReplSet;

// Cluster server addresses
var serverAddr = {
    9001: '192.168.1.100', // node 1
    9002: '192.168.1.100', // node 2
    9003: '192.168.1.100'  // node 3
};

// Collection of cluster Server objects
var servers = [];
for (var i in serverAddr) {
    servers.push(new Server(serverAddr[i], parseInt(i, 10)));
}

var replSet = new ReplSet(servers, {});
var db = new Db('blog', replSet);

// MongoDB operations
db.open(function (err, db) {
    var collection = db.collection('user');
    // Query a document
    collection.findOne({
        name: 'jerry'
    }, function (err, results) {
        console.info('query: ', results);
    });
    // Insert a document
    collection.insert({
        name: 'ok',
        age: 28
    }, function (err, results) {
        console.info('insert: ' + results);
    });
});
With the nodes 9001, 9002, and 9003 configured above, we do not have to care which is the primary, which the secondary, and which the arbiter; the driver automatically determines a healthy primary for Node.js, and we can concentrate on writing the database operation logic.
But there is a problem here: while the replica set is switching nodes, there is a brief suspended period. Node.js is asynchronous I/O, and if Node keeps issuing a large number of operations during this period, the call stack overflows and reports RangeError: Maximum call stack size exceeded. This is a system-level error that crashes the app; even if the exception is caught, or once the DB switch completes, the program still hangs. I have not found a full solution yet. I am studying the mongo driver API, trying to find an event that reflects the switching state: if such an event fires, we can stop DB operations and resume them after the switch completes, which should solve the problem.