2. Adding, deleting, and managing nodes in a multi-node MongoDB Master/Slave cluster on Windows 7
For reprints, please credit the source: http://blog.csdn.net/tianyijavaoracle/article/details/41744557
I. A three-node mongo Replica Sets deployment copies data between the master and slave nodes. Unlike sharding, when one node is lost, the remaining nodes can keep working.
II. File Configuration
For example:
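A minimal sketch of the three startup .bat files, one per node; the ports (4001-4003), directory names (mongodb_1 .. mongodb_3, data\r1 .. data\r3), and key file paths are assumptions modeled on the pai_5.bat line shown later:

mongodb_1\bin\mongod --replSet rs1 --keyFile data\key\r1 --port 4001 --dbpath data\r1 --logpath=data\log\r1.log --logappend
mongodb_2\bin\mongod --replSet rs1 --keyFile data\key\r2 --port 4002 --dbpath data\r2 --logpath=data\log\r2.log --logappend
mongodb_3\bin\mongod --replSet rs1 --keyFile data\key\r3 --port 4003 --dbpath data\r3 --logpath=data\log\r3.log --logappend

All three members use the same replica set name (rs1, matching the --replSet rs1 flag in pai_5.bat) and the same keyFile contents so they can authenticate to each other.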
III. Configuring the Service
1. Connect to the node on port 4001 and configure the members.
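A sketch of this step in the mongo shell; the member hosts and _id values are assumptions (ports 4001-4003 on localhost), and the set name rs1 matches the --replSet rs1 flag used in the startup files:

mongo 127.0.0.1:4001/admin
cfg = { _id: "rs1", members: [
    { _id: 0, host: "127.0.0.1:4001" },
    { _id: 1, host: "127.0.0.1:4002" },
    { _id: 2, host: "127.0.0.1:4003" }
] }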
2. Initialize the configuration.
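Still on 4001, pass the configuration document to rs.initiate(); after a few seconds the shell prompt changes to reflect the node's role:

rs.initiate(cfg)
rs.status()   // optional: confirm the members are coming up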
3. The cluster is now complete. Log on to port 4001 again and insert data.
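A sketch of inserting a test document; the database and collection names (test, user) and the document itself are made up for illustration:

mongo 127.0.0.1:4001/test
db.user.insert({ name: "tom", age: 25 })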
4. Checking the synchronized view, we can see that all three nodes now have the data that was just inserted.
5. View the cluster status. health: 1 indicates the node is normal, 0 indicates an exception; stateStr: PRIMARY marks the master database.
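This status most likely comes from rs.status(), whose output includes the health and stateStr fields described above; it can be run from any member:

rs.status()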
6. Another way to view the cluster status; the members' IP addresses and other information are displayed.
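A sketch assuming the command behind this step is rs.isMaster() (the original does not name it); its output lists the hosts in the set and which one is primary:

rs.isMaster()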
7. You can view the cluster's Master/Slave operation log (the oplog). ts is the timestamp, op is the operation type, ns is the namespace (database.collection), and o is the document data.
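The replica set oplog lives in the local database as the capped collection oplog.rs, so it can be inspected directly:

use local
db.oplog.rs.find()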
8. View the operation log summary information.
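A sketch assuming this step uses db.printReplicationInfo(), which prints the configured oplog size and the time range it currently covers:

db.printReplicationInfo()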
9. Check the slave database synchronization status; the last synced time for each slave is shown at the end.
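Assuming the helper used here is db.printSlaveReplicationInfo() (run on the primary), which reports each slave's syncedTo timestamp and how far it lags:

db.printSlaveReplicationInfo()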
10. View the node information of the entire cluster.
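A sketch assuming this step reads the stored replica set configuration; rs.conf() (equivalently, querying local.system.replset) returns every node's settings:

rs.conf()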
IV. Cluster Management
1. Add or delete nodes (demonstrated in steps 5 and 6 below).
2. Read/write splitting
Execute db.getMongo().setSlaveOk() on a slave to enable reads from it. With writes going to the master and reads served by the slaves, this achieves Master/Slave read/write splitting.
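A sketch of reading from a slave, assuming 4002 is currently a secondary and reusing the illustrative user collection from step 3:

mongo 127.0.0.1:4002/test
db.getMongo().setSlaveOk()
db.user.find()   // now succeeds on the slave instead of erroring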
3. Failover. When we stop the master database, querying the status shows that the node on port 4002 has become the master.
4. Restoring a node as a slave. When the mongo instance on 4001 is started again, the original master now comes back as a slave.
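A quick way to watch the roles change during these two steps (a sketch; run it from any surviving member):

rs.status().members.forEach(function (m) {
    print(m.name + "  " + m.stateStr)   // e.g. 127.0.0.1:4002  PRIMARY
})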
5. To add a node quickly, copy an existing node's data files and deploy them on the new node, start it with --fastsync, and then add it to the set with rs.add().
The content of the added pai_5.bat is as follows:
Mongodb_5\bin\mongod --replSet rs1 --keyFile data\key\r4 --port 4005 --dbpath data/r4 --logpath=data\log\r4.log --logappend --fastsync
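After the new node is up, add it on the primary with rs.add(); the host below follows the port in the .bat line above:

rs.add("127.0.0.1:4005")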
6. Use rs.remove("ip:port") to remove a node. Checking the cluster status again shows that the node has been removed.
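A sketch matching the node added above:

rs.remove("127.0.0.1:4005")
rs.status()   // the removed node no longer appears in the members list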