Replica Sets: MongoDB supports failover and redundancy across multiple machines through asynchronous replication. At any given time only one machine, the primary, accepts write operations; this is how MongoDB guarantees data consistency. The primary can distribute read operations to the secondaries. A replica set is structured much like a cluster, and achieves the same effect as one: if a node fails, another node immediately takes over its work with no downtime.

The following describes the deployment process for such a cluster, along with the common points of attention and errors encountered along the way. Environment: Linux operating system, MongoDB version mongodb-linux-x86_64-2.6.1.tgz, a VMware virtual machine with IP 192.168.169.129; the cluster is simulated with three servers on one machine, each using a different port.

1. The cluster consists of three nodes: master (primary), slaver (standby), and arbiter (quorum). Create the data folders:
mkdir -p /home/mongodb/data/master
mkdir -p /home/mongodb/data/slaver
mkdir -p /home/mongodb/data/arbiter
mkdir -p /home/mongodb/log
PS: the three data directories correspond to the master, standby, and arbiter nodes; the log directory holds each node's log file.

2. Create the configuration files

1) master.conf. Open the editor:
vi /etc/master.conf
Press i to enter insert mode and type the following configuration:
dbpath=/home/mongodb/data/master
logpath=/home/mongodb/log/master.log
logappend=true
replSet=rep1
port=10000
fork=true
journal=true
When finished, press Esc, type :wq, and press Enter to save and quit.
2) slaver.conf. Open and save it with the same editor steps as above; only the contents differ:
dbpath=/home/mongodb/data/slaver
logpath=/home/mongodb/log/slaver.log
logappend=true
replSet=rep1
port=10001
fork=true
journal=true
3) arbiter.conf
dbpath=/home/mongodb/data/arbiter
logpath=/home/mongodb/log/arbiter.log
logappend=true
replSet=rep1
port=10002
fork=true
journal=true
smallfiles=true
Parameter explanation:

dbpath: data storage directory
logpath: log file path
logappend: append to the log file instead of overwriting it
replSet: name of the replica set
port: port number used by the mongod process; defaults to 27017
fork: run the process in the background (daemon mode)
journal: enable write-ahead journaling
smallfiles: use smaller, fewer preallocated data files; add this when you get an insufficient-space error

Other parameters:

pidfilepath: path of the PID file, which makes it easy to stop MongoDB
directoryperdb: store each database in its own folder, named after the database
bind_ip: IP address that MongoDB binds to
oplogSize: maximum size of the operation log (oplog) in MB; defaults to 5% of free disk space
noprealloc: disable preallocation of data files
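Since the three configuration files above differ only in node name, port, and the arbiter's smallfiles setting, they can also be generated in one loop. A minimal sketch, using the same paths and ports as above; it writes under /tmp so it is safe to try as-is (swap in /etc for a real deployment):

```shell
# Generate master.conf, slaver.conf, and arbiter.conf in one loop.
# CONFDIR is /tmp here so the sketch is harmless to run; use /etc for real.
CONFDIR=/tmp/mongo-conf
mkdir -p "$CONFDIR"
i=0
for node in master slaver arbiter; do
  port=$((10000 + i)); i=$((i + 1))
  {
    echo "dbpath=/home/mongodb/data/$node"
    echo "logpath=/home/mongodb/log/$node.log"
    echo "logappend=true"
    echo "replSet=rep1"
    echo "port=$port"
    echo "fork=true"
    echo "journal=true"
    # Only the arbiter gets smallfiles=true, matching the configs above.
    if [ "$node" = arbiter ]; then echo "smallfiles=true"; fi
  } > "$CONFDIR/$node.conf"
done
cat "$CONFDIR/arbiter.conf"
```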
3. Start MongoDB
cd /home/mongodb/bin
Start the service
./mongod -f /etc/master.conf
./mongod -f /etc/slaver.conf
./mongod -f /etc/arbiter.conf
If startup succeeds, mongod prints a message such as "child process started successfully, parent exiting". If startup fails, an error message is printed instead.
Startup can fail for many reasons. First check the configuration file:

cat /etc/master.conf

If it contains no errors, open the corresponding log file (e.g. /home/mongodb/log/master.log) to see the detailed error message.
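When the log is long, grepping for common fatal patterns narrows things down quickly. A sketch; it creates a simulated log under /tmp so it runs anywhere, but in practice you would point LOG at the real file, e.g. /home/mongodb/log/master.log:

```shell
# Point this at the real log, e.g. /home/mongodb/log/master.log
LOG=/tmp/master.log
# Simulated log contents for illustration (an example missing-dbpath error)
printf '%s\n' \
  '[initandlisten] MongoDB starting : pid=1234 port=10000' \
  '[initandlisten] exception in initAndListen: 10296 dbpath (/home/mongodb/data/master) does not exist' \
  > "$LOG"
# Common fatal patterns: exceptions, failed assertions, explicit errors
errors=$(grep -E 'exception|assertion|ERROR' "$LOG" || true)
echo "$errors"
```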
One of the most common errors is insufficient disk space.
MongoDB preallocates its data files in chunks that grow up to 2 GB, so it needs quite a lot of free space; if you hit this error, add smallfiles=true to the configuration file. Continue once all three services have started successfully.
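A quick pre-flight check avoids this error entirely. A sketch, assuming the dbpath parent directory from the configs above; the 3 GB threshold is an illustrative assumption, not an exact MongoDB requirement:

```shell
# Check free space under the data directory before starting mongod.
# DATADIR defaults to /tmp so the sketch runs anywhere; point it at
# /home/mongodb/data for a real deployment.
DATADIR="${DATADIR:-/tmp}"
# POSIX df -P: field 4 of the second line is available space in 1K blocks
avail_kb=$(df -P "$DATADIR" | awk 'NR==2 {print $4}')
# Illustrative threshold: ~3 GB free, since data files are preallocated
# in large chunks unless smallfiles=true is set
if [ "$avail_kb" -lt 3145728 ]; then
  msg="WARNING: only ${avail_kb} KB free under $DATADIR; set smallfiles=true or free space"
else
  msg="OK: ${avail_kb} KB free under $DATADIR"
fi
echo "$msg"
```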
4. Configure the primary (master), standby (slaver), and arbiter (quorum) nodes
You can connect to MongoDB with a client, or connect directly to any one of the three nodes:
./mongo 192.168.169.129:10000    # the IP and port of any one node
> use admin
> cfg = { _id: "rep1", members: [
    { _id: 0, host: "192.168.169.129:10000", priority: 2 },
    { _id: 1, host: "192.168.169.129:10001", priority: 1 },
    { _id: 2, host: "192.168.169.129:10002", arbiterOnly: true }
] };
> rs.initiate(cfg)    // make the configuration take effect
{
    "set" : "rep1",
    "date" : ISODate("2014-09-05T02:44:43Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.169.129:10000",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 200,
            "optime" : Timestamp(1357285565000, 1),
            "optimeDate" : ISODate("2013-01-04T07:46:05Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.169.129:10001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 200,
            "optime" : Timestamp(1357285565000, 1),
            "optimeDate" : ISODate("2013-01-04T07:46:05Z"),
            "lastHeartbeat" : ISODate("2013-01-05T02:44:42Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "192.168.169.129:10002",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 200,
            "lastHeartbeat" : ISODate("2013-01-05T02:44:42Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}

Some other errors may occur during configuration; they can be resolved by checking the corresponding log file.
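The same initiation step can be scripted rather than typed interactively, which is handy when rebuilding the environment. A sketch that writes the cfg document to a file (the path /tmp/init_rep1.js is an arbitrary choice) and shows how mongo would run it:

```shell
# Write the replica set initiation commands to a JS file.
cat > /tmp/init_rep1.js <<'EOF'
cfg = { _id: "rep1", members: [
  { _id: 0, host: "192.168.169.129:10000", priority: 2 },
  { _id: 1, host: "192.168.169.129:10001", priority: 1 },
  { _id: 2, host: "192.168.169.129:10002", arbiterOnly: true }
]};
printjson(rs.initiate(cfg));
EOF
# Then run it against any one node (requires the three mongod
# processes from step 3 to be up):
#   ./mongo 192.168.169.129:10000 /tmp/init_rep1.js
echo "wrote /tmp/init_rep1.js"
```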