Cluster construction
With only three servers, I set up a MongoDB sharded cluster, following mainly http://www.lanceyan.com/tech/arch/mongodb_shard1.html. Port assignments: mongos on 20000, the config server on 21000, shard1 on 22001, shard2 on 22002, and shard3 on 22003. The general idea is:
Start the Config service on each server
Start the mongos service on each server, pointing each mongos at the three config services started in the previous step: https://docs.mongodb.com/manual/reference/program/mongos/#cmdoption--configdb
On each server, start a mongod instance for each shard (or its replica): https://docs.mongodb.com/manual/tutorial/deploy-replica-set/#start-each-member-of-the-replica-set-with-the-appropriate-options
Log on to any server and initiate the replica set that each shard comprises: https://docs.mongodb.com/manual/reference/method/rs.initiate/#example
Log in to mongos and add each started shard to the mongos service: https://docs.mongodb.com/manual/reference/command/addShard/#definition
Finally, enable sharding at the database level and the collection level.
After the steps above, the cluster is up. (Image from http://www.lanceyan.com/tech/arch/mongodb_shard1.html)
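The per-machine bring-up sequence described above can be sketched as a single script. This is only a sketch: the IPs, paths, ports, and replica-set names are the placeholders used throughout this post, not values verified on a real deployment.

```shell
# Sketch of the bring-up sequence on one machine; adjust paths/IPs
# to your environment before running.

# 1. config server (one per machine)
mongod --configsvr --port 21000 \
  --dbpath /data/dbmongo/config/data \
  --logpath /data/dbmongo/config/log/config.log --fork

# 2. mongos router, pointed at all three config servers
mongos --configdb ip1:21000,ip2:21000,ip3:21000 --port 20000 \
  --logpath /data/dbmongo/mongos/log/mongos.log --fork

# 3. one mongod per shard on each machine (ports 22001-22003)
for n in 1 2 3; do
  mongod --shardsvr --replSet "shard$n" --port "2200$n" \
    --dbpath "/data/dbmongo/shard$n/data" --oplogSize 10240 --fork
done
```

Replica-set initiation, addShard, and enabling sharding (the remaining steps) are interactive and shown in detail below.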
Principles, basic commands, and parameter descriptions
Basic commands
mongod --configsvr starts the mongod instance as a config server for the sharded cluster. In this mode, it appears you can only write to the admin and config databases.
mongod --fork runs the mongod instance as a background process
mongod --dbpath specifies the instance's data directory
mongod --logpath specifies the instance's log path
mongod --shardsvr starts the instance as a shard of the sharded cluster
mongod --replSet declares the instance a member of the named replica set
mongod --oplogSize sets the size, in MB, of the oplog.rs capped collection in the local database. During replication between instances, oplog.rs records the changes that secondaries pull and apply. Once created, its size is fixed; when it fills, new entries overwrite the oldest ones. In my case the value was set too small when loading the cluster, and a heavy stream of inserts made the secondaries fall behind faster than oplog.rs was being overwritten, producing a "too stale to catch up" error. A procedure for resizing oplog.rs is given later.
mongod --port specifies the port the instance listens on
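To get a feel for oplog sizing, the replication window is roughly the oplog size divided by the oplog write rate. A minimal arithmetic sketch (both numbers below are made-up examples, not measurements from this cluster):

```shell
# Rough replication-window estimate: how long a secondary can stay
# offline before hitting "too stale to catch up".
oplog_size_mb=10240            # the --oplogSize value (example)
oplog_churn_mb_per_hour=512    # observed oplog write rate (example)
window_hours=$((oplog_size_mb / oplog_churn_mb_per_hour))
echo "secondary can lag up to ~${window_hours} hours"
```

With these example numbers the window is about 20 hours; a sustained insert burst raises the churn rate and shrinks the window accordingly.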
mongos acts as the routing service for a sharded MongoDB cluster; put plainly, it takes an operation and decides which shard(s) should execute it.
mongos --configdb specifies the config servers of the cluster
mongos --dbpath same as above
mongos --logpath same as above
mongos --port same as above
mongos --fork same as above
Start the config service (one per machine; if one is damaged, the routing servers can still read the others):
mongod --configsvr --port 21000 --dbpath /data/dbmongo/config/data --logpath /data/dbmongo/config/log/config.log --fork
Start the routing service (one per machine; if one is damaged, the application can fail over to the other routers and keep working):
mongos --configdb ip1:21000,ip2:21000,ip3:21000 --port 20000 --logpath /data/dbmongo/mongos/log/mongos.log --fork
Start the mongod instance for each shard (3 db instances per machine):
mongod --shardsvr --replSet shard1 --port 22001 --dbpath /data/dbmongo/shard1/data --oplogSize 10240 --logpath /data/dbmongo/shard1/log/shard1.log --fork
mongod --shardsvr --replSet shard2 --port 22002 --dbpath /data/dbmongo/shard2/data --oplogSize 10240 --logpath /data/dbmongo/shard2/log/shard2.log --fork
mongod --shardsvr --replSet shard3 --port 22003 --dbpath /data/dbmongo/shard3/data --oplogSize 10240 --logpath /data/dbmongo/shard3/log/shard3.log --fork
Log in to one mongod data instance and initialize its replica set (repeat for each shard):
mongo 127.0.0.1:22001
use admin
config = {_id:"shard1", members:[        // _id must match the --replSet name
    {_id:0, host:"ip1:22001", arbiterOnly:true},
    {_id:1, host:"ip2:22001"},
    {_id:2, host:"ip3:22001"}]}
rs.initiate(config);
Log in to mongos and add each replica set as a shard:
mongo 127.0.0.1:20000
use admin
db.runCommand({addshard:"shard1/ip1:22001,ip2:22001,ip3:22001"});
db.runCommand({addshard:"shard2/ip1:22002,ip2:22002,ip3:22002"});
db.runCommand({addshard:"shard3/ip1:22003,ip2:22003,ip3:22003"});
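To confirm the shards were registered, you can query mongos with the listshards command. A sketch, assuming the mongos from this post is running on 127.0.0.1:20000 and the mongo shell is installed:

```shell
# List the shards currently registered with the cluster, via mongos.
mongo 127.0.0.1:20000/admin --eval 'printjson(db.runCommand({listshards:1}))'
```

Each of the three replica sets added above should appear in the output's shards array.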
Enable sharding on the database and shard the collection:
db.runCommand({enablesharding:"database name"});
db.runCommand({shardcollection:"database name.collection name", key:{"shard key field":1}});
Changing the oplogSize value
oplogSize is the size of the oplog.rs capped collection in the local database. If it is set too small at startup, it cannot be grown once created. When a secondary is down for a while, the primary's new changes may wrap around and overwrite the oplog.rs entries the secondary still needs; the secondary then stays in the RECOVERING state and its log reports "too stale to catch up". At that point you can:
1. Wipe the secondary's data and restart it, so it performs a full initial sync from the primary again.
2. Wipe the secondary, copy the data files over from the primary, and restart it. (Not tried.)
3. Resize the oplog online. The idea is to shut the member down, restart it in standalone mode, then drop oplog.rs and recreate it at a larger size. The official documentation describes this: Change the Size of the Oplog.
Steps:
- (Optional) If the member is the primary, step it down to a secondary first:
rs.stepDown()
- Shut down the member:
use admin
db.shutdownServer()
- Restart it as a standalone instance (on a different port, without --replSet):
mongod --port 22004 --dbpath /data/dbmongo/shard2/data --logpath /data/dbmongo/shard2/log/shard2.log --fork
- (Optional) Back up the oplog.rs collection with mongodump:
mongodump --db local --collection 'oplog.rs' --port 22004
- Save the last entry of oplog.rs, drop and recreate oplog.rs at the larger size, then restore the saved entry. (When members of a replica set sync, each finds its own latest applied entry - this one - in another member's oplog.rs and replays everything after it.)
use local
db = db.getSiblingDB('local')
db.temp.drop()
db.temp.save(db.oplog.rs.find({}, {ts:1, h:1}).sort({$natural:-1}).limit(1).next())
db.oplog.rs.drop()
db.runCommand({create:"oplog.rs", capped:true, size:(10 * 1024 * 1024 * 1024)})
db.oplog.rs.save(db.temp.findOne())
- Shut down the standalone instance, then restart the member with its original replica-set options so it rejoins the cluster as a shard:
use admin
db.shutdownServer()
mongod --shardsvr --replSet shard2 --port 22002 --dbpath /data/dbmongo/shard2/data --logpath /data/dbmongo/shard2/log/shard2.log --fork
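After the member rejoins, it is worth verifying the new oplog size and the replication window it buys you. A sketch, assuming the shard2 member from this post on port 22002 and an installed mongo shell:

```shell
# Print the configured oplog size and the time span it currently
# covers (the "log length start to end" line).
mongo 127.0.0.1:22002 --eval 'db.printReplicationInfo()'
```

The reported size should match the value passed to the create command above (10 GB in this example).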
Too many open files error
This is most likely due to the Unix system's resource limits on the process; see the MongoDB document: UNIX ulimit Settings.
To fix it, edit /etc/security/limits.d/99-mongodb-nproc.conf:
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*    soft    nproc     64000
*    hard    nproc     64000
*    soft    fsize     unlimited
*    hard    fsize     unlimited
*    soft    cpu       unlimited
*    hard    cpu       unlimited
*    soft    nofile    64000
*    hard    nofile    64000
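You can check the limits that actually apply to your shell with ulimit; the values printed depend on the host:

```shell
# Show the current per-process limits most relevant to mongod.
ulimit -n   # max open file descriptors
ulimit -u   # max user processes
```

If `ulimit -n` still shows a low value (e.g. 1024) after editing the limits file, log out and back in so the new limits take effect.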
Also, try not to run mongod instances as root. I have forgotten the exact changes, but it was roughly as above.
References:
http://www.lanceyan.com/tech/arch/mongodb_shard1.html
http://www.cnblogs.com/wilber2013/p/4154406.html (MongoDB cluster installation and some problems encountered)