MongoDB provides a replica-pairs mode for starting the database. When started in this mode, the two servers automatically negotiate which is the master and which is the slave. If the current master loses power, the other server automatically takes over and is the master from that moment on. Should that server fail later, the master role moves back to the first server.
(10.7.3.95 -> 10.7.3.97, plain replication without sharding)
Replica Sets
Step 1. Start mongod on both servers with the replSet = xxx option.
Step 2. Pick one server and configure the replica set on it.
var cfg = {_id:"dzhRepl", members:[
    {_id:0, host:"10.7.3.95:10000"},
    {_id:1, host:"10.7.3.97:10000"}
]}
rs.initiate(cfg)
Once initialization completes, the data is replicated to the secondary.
1. rs.slaveOk() allows the secondary to serve queries as well.
2. rs.stepDown() demotes the primary to a secondary.
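For reference, the replSet option from step 1 normally lives in the mongod config file. A minimal sketch, mirroring the set name dzhRepl, ports, and paths used above:

```
replSet = dzhRepl
bind_ip = 10.7.3.95
port = 10000
fork = true
journal = true
logappend = true
dbpath = ../data/
logpath = ../log/mongodb.log
directoryperdb = true
```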
Master-slave:
Introduction: http://www.mongodb.org/display/DOCS/Master+Slave
Note the following parameters:
Master config file:
---
bind_ip = 10.7.3.95
port = 10000
fork = true
master = true
logappend = true
journal = true
dbpath = ../data/
logpath = ../log/mongodb.log
directoryperdb = true
Slave config file:
----
bind_ip = 10.7.3.97
port = 10000
fork = true
logappend = true
slave = true # start as a slave
only = testdb # replicate only the testdb database
source = 10.7.3.95:10000 # master host and port
autoresync = true # automatically resync if the slave falls too far behind
slavedelay = 10 # keep the slave this many seconds behind the master
journal = true
dbpath = ../slave
logpath = ../log/mongodb.log
directoryperdb = true
After the master and slave are started, the master first creates a local database containing collections such as oplog.$main, slaves, system.indexes, and system.users.
Autosharding + Replica Sets
MongoDB includes an automatic sharding router, mongos, for building large, horizontally scalable database clusters: servers can be added dynamically, and the database's collections are partitioned into chunks that are stored across the shard nodes.
Here I use three servers for testing:
10.X.X.163
10.X.X.164
10.X.X.228
(If a service fails to start, it may be due to a leftover lock file in your data directory, or a mistyped name.)
Preparations:
Create the data directories on each machine.
Server 1
^_^[root@:/usr/local/mongodb]# mkdir -p data/shard11
^_^[root@:/usr/local/mongodb]# mkdir -p data/shard21
Server 2
^_^[root@:/usr/local/mongodb]# mkdir -p data/shard12
^_^[root@:/usr/local/mongodb]# mkdir -p data/shard22
Server 3
^_^[root@:/usr/local/mongodb]# mkdir -p data/shard13
^_^[root@:/usr/local/mongodb]# mkdir -p data/shard23
Then start the shard1 replica-set members on each server:
Server1:
./mongod --shardsvr --replSet shard1 --port 27017 --dbpath ../data/shard11 --oplogSize 100 --logpath ../data/shard11.log --logappend --fork
Server2:
./mongod --shardsvr --replSet shard1 --port 27017 --dbpath ../data/shard12 --oplogSize 100 --logpath ../data/shard12.log --logappend --fork
Server3:
./mongod --shardsvr --replSet shard1 --port 27017 --dbpath ../data/shard13 --oplogSize 100 --logpath ../data/shard13.log --logappend --fork
Initialize replica set:
> config = {_id:'shard1', members:[
...     {_id:0, host:'10.X.X.228:27017'},
...     {_id:1, host:'10.X.X.163:27017'},
...     {_id:2, host:'10.X.X.164:27017'}
... ]}
> rs.initiate(config);
Configure the replica set for shard2.
Server1:
./mongod --shardsvr --replSet shard2 --port 27018 --dbpath ../data/shard21 --oplogSize 100 --logpath ../data/shard21.log --logappend --fork
Server2:
./mongod --shardsvr --replSet shard2 --port 27018 --dbpath ../data/shard22 --oplogSize 100 --logpath ../data/shard22.log --logappend --fork
Server3:
./mongod --shardsvr --replSet shard2 --port 27018 --dbpath ../data/shard23 --oplogSize 100 --logpath ../data/shard23.log --logappend --fork
After the first initialization, the client fails to connect if you run it against the default port; to reach the members on 27018 you must specify the port explicitly:
./mongo 10.X.X.228:27018
Initialize the second replica set:
> config = {_id:'shard2', members:[
...     {_id:0, host:'10.X.X.228:27018'},
...     {_id:1, host:'10.X.X.163:27018'},
...     {_id:2, host:'10.X.X.164:27018'}
... ]}
> rs.initiate(config);
Now there are two replica sets, which will serve as the two shards.
Next, configure three config servers:
mkdir -p data/config
./mongod --configsvr --dbpath ../data/config --port 20000 --logpath ../data/config.log --logappend --fork
Run this once on each server (whew, that is a lot of configuration...).
Then configure mongos (this must also be run on each machine):
./mongos --configdb 10.X.X.228:20000,10.X.X.163:20000,10.X.X.164:20000 --port 30000 --chunkSize 5 --logpath ../data/mongos.log --logappend --fork
Now configure the shard cluster.
Connect to mongos and switch to the admin database:
./mongo 10.X.X.228:30000/admin
> db
admin
Then add the shards:
db.runCommand({addshard:"shard1/10.7.3.228:27017,10.10.21.163:27017,10.10.21.164:27017", name:"s1", maxsize:20480});
db.runCommand({addshard:"shard2/10.7.3.228:27018,10.10.21.163:27018,10.10.21.164:27018", name:"s2", maxsize:20480});
Enable sharding for a database:
db.runCommand({ enablesharding:"test" })
Partition the collection:
db.runCommand({shardcollection:"test.users", key:{_id:1}})
List the added shards:
> db.runCommand({listshards:1})
View sharding information:
printShardingStatus()
View shard storage information (run use test first):
db.users.stats()
> use test
switched to db test
> db.users.stats()
{
    "sharded" : true,
    "ns" : "test.users",
    "count" : 0,
    "size" : 0,
    "avgObjSize" : NaN,
    "storageSize" : 8192,
    "nindexes" : 1,
    "nchunks" : 1,
    "shards" : {
        "s1" : {
            "ns" : "test.users",
            "count" : 0,
            "size" : 0,
            "storageSize" : 8192,
            "numExtents" : 1,
            "nindexes" : 1,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "flags" : 1,
            "totalIndexSize" : 8192,
            "indexSizes" : { "_id_" : 8192 },
            "ok" : 1
        }
    },
    "ok" : 1
}
Next we test the cluster with a small client program:
#include <iostream>
#include <mongo/client/dbclient.h>
#include <sys/time.h>
#include <cstdlib>
#include <cstring>
#include <ctime>

using namespace std;
using namespace mongo;

#define INIT_TIME  struct timeval time1, time2;
#define START_TIME gettimeofday(&time1, NULL);
#define STOP_TIME  gettimeofday(&time2, NULL);
#define PRINT_TIME \
    cout << "Time:" << time2.tv_sec - time1.tv_sec << ":" \
         << time2.tv_usec - time1.tv_usec << endl;

int main() {
    srand(time(NULL));
    char ar[26 + 1];
    DBClientConnection conn;
    conn.connect("10.7.3.228:30000");
    cout << "MongoDB Connected OK!" << endl;

    int count = 10000000;
    INIT_TIME;
    START_TIME;

// insert
#if 1
    while (count--) {                       // loop: insert 10 million documents
        for (int i = 0; i < 26; i++) {
            ar[i] = rand() % 26 + 97;       // random lowercase letter
        }
        ar[26] = '\0';
        BSONObj p = BSON("NewsId" << ar);
        conn.insert("test.users", p);
    }
#endif

// query
#if 0
    cout << "Count:" << conn.count("News.News_key") << endl;
    BSONObj emptyObj;
    auto_ptr<DBClientCursor> cursor = conn.query("News.News_key", emptyObj);
    while (cursor->more()) {
        BSONObj p = cursor->next();
        cout << p.getStringField("NewsId") << endl;
        // compare C strings with strcmp, not ==
        if (strcmp(p.getStringField("NewsId"),
                   "bfatckiyxlougsxrffsnylsfuo") == 0) {
            cout << "find" << endl;
            break;
        }
    }
#endif

    STOP_TIME;
    PRINT_TIME;
    return 0;
}