The replica sets + sharding architecture is as follows:
1. Shard servers: use replica sets so that every data node has a backup and supports automatic failover and automatic recovery.
2. Config servers: use 3 config servers to guarantee the integrity of the metadata.
3. Route processes: use 3 route processes for load balancing and better client access performance. The architecture is as follows:
3 shard processes: shard11, shard12 and shard13 form a replica set that provides shard1 in the sharded cluster.
3 shard processes: shard21, shard22 and shard23 form a replica set that provides shard2 in the sharded cluster.
3 config server processes and 3 route (mongos) processes.
--------------------------------------------------------------------------------------------
Now we start building the whole architecture (since I do not have many machines, I use local directories to simulate separate machines):
Host: ServerA (IP: 127.0.0.1)
    Mongodb1 - mongod shard11, port 10000
    Mongodb2 - mongod shard21, port 20000
    Mongodb3 - mongod config1, port 30000
    Mongodb4 - mongos, port 40000
Host: ServerB (IP: 127.0.0.1)
    Mongodb5 - mongod shard12, port 10001
    Mongodb6 - mongod shard22, port 20001
    Mongodb7 - mongod config2, port 30001
    Mongodb4 - mongos, port 40000
Host: ServerC (IP: 127.0.0.1)
    Mongodb9 - mongod shard13, port 10002
    Mongodb8 - mongod shard23, port 20002
    Mongodb11 - mongod config3, port 30002
    Mongodb4 - mongos, port 40000
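The original does not show it, but each mongod instance needs its own data directory before it can be started. A minimal sketch, assuming the directories below (the c:\data\... paths are my own choice, not taken from the original):

rem create data directories for the two shards and the three config servers
mkdir c:\data\shard11 c:\data\shard21 c:\data\config1
mkdir c:\data\shard12 c:\data\shard22 c:\data\config2
mkdir c:\data\shard13 c:\data\shard23 c:\data\config3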
1. Start the shard1 processes and configure the replica set
Start the mongod shard11 process; the replica set name is shard1.
Start the mongod shard12 process; the replica set name is shard1.
Start the mongod shard13 process; the replica set name is shard1.
With all three processes running, open a new cmd window (this is the window used for all the non-startup commands), connect to any one of the three processes, and configure them as a replica set, as follows:
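The original shows these steps as screenshots; a sketch of what they amount to, assuming the data directories created above:

rem start the three shard1 members (one cmd window each)
mongod --shardsvr --replSet shard1 --port 10000 --dbpath c:\data\shard11 --logpath c:\data\shard11.log --logappend
mongod --shardsvr --replSet shard1 --port 10001 --dbpath c:\data\shard12 --logpath c:\data\shard12.log --logappend
mongod --shardsvr --replSet shard1 --port 10002 --dbpath c:\data\shard13 --logpath c:\data\shard13.log --logappend

rem in a new cmd window, connect to any member and initiate the replica set
mongo 127.0.0.1:10000/admin
config = { _id: "shard1", members: [
    { _id: 0, host: "127.0.0.1:10000" },
    { _id: 1, host: "127.0.0.1:10001" },
    { _id: 2, host: "127.0.0.1:10002" }
] }
rs.initiate(config)
rs.status()   // wait until one member reports PRIMARY and the others SECONDARY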
2. Start the shard2 processes and configure the replica set
Start the mongod shard21 process; the replica set name is shard2.
Start the mongod shard22 process; the replica set name is shard2.
Start the mongod shard23 process; the replica set name is shard2.
Configure these three processes as a replica set, as follows:
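Again as a sketch, the shard2 equivalent (same assumptions about paths):

rem start the three shard2 members
mongod --shardsvr --replSet shard2 --port 20000 --dbpath c:\data\shard21 --logpath c:\data\shard21.log --logappend
mongod --shardsvr --replSet shard2 --port 20001 --dbpath c:\data\shard22 --logpath c:\data\shard22.log --logappend
mongod --shardsvr --replSet shard2 --port 20002 --dbpath c:\data\shard23 --logpath c:\data\shard23.log --logappend

rem connect to any member and initiate the replica set
mongo 127.0.0.1:20000/admin
config = { _id: "shard2", members: [
    { _id: 0, host: "127.0.0.1:20000" },
    { _id: 1, host: "127.0.0.1:20001" },
    { _id: 2, host: "127.0.0.1:20002" }
] }
rs.initiate(config)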
With that, the two shards backed by replica sets are configured; next come the config servers and the route processes.
3. Configure the 3 config servers
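The config servers are plain mongod processes started with --configsvr; a sketch matching the ports in the table above:

rem one config server per (simulated) machine
mongod --configsvr --port 30000 --dbpath c:\data\config1 --logpath c:\data\config1.log --logappend
mongod --configsvr --port 30001 --dbpath c:\data\config2 --logpath c:\data\config2.log --logappend
mongod --configsvr --port 30002 --dbpath c:\data\config3 --logpath c:\data\config3.log --logappend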
4. Configure the route processes
Set the chunk size to 1 MB so that the sharding effect is easy to observe while testing.
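A sketch of the mongos start command; note that --chunkSize was a mongos option in the MongoDB versions this article targets (newer versions set the chunk size in the config database instead):

rem route process, pointing at all three config servers, 1 MB chunks for easy testing
mongos --configdb 127.0.0.1:30000,127.0.0.1:30001,127.0.0.1:30002 --port 40000 --chunkSize 1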
5. Configure the sharded collection and the shard key
I shard the frienduser collection in the friends database, with _id as the shard key. Because the cmd window is too narrow, the add-shard commands are not completely visible in the screenshots, so I list them manually:
Add the shards:
Db.runcommand ({addshard: "shard1/127.0.0.1:10000,127.0.0.1:10001,127.0.0.1:10002"})
Db.runcommand ({addshard: "shard2/127.0.0.1:20000,127.0.0.1:20001,127.0.0.1:20002"})
At this point the whole architecture is in place. To verify the configuration, I insert 10,000 documents into the database through the client.
You can see that the data has been sharded.
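The screenshots are not reproduced here, but the test can be sketched as follows; the document shape is my own invention, only the _id shard key matters:

mongo 127.0.0.1:40000/friends
for (var i = 0; i < 10000; i++) {
    db.frienduser.insert({ _id: i, name: "user" + i });   // inserts go through mongos
}
db.frienduser.stats()        // per-shard document counts
db.printShardingStatus()     // chunks distributed over shard1 and shard2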
------------------------------------------------------------------------------------------------
Now for the disaster test: I stop shard11 and see what happens.
Switch to the shard11 cmd window and press Ctrl+C to stop the process.
Then check the status:
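One way to check is to connect to one of the surviving shard1 members, for example:

mongo 127.0.0.1:10001/admin
rs.status()   // shard11 is reported unreachable; one of the remaining members has taken over as PRIMARY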
The status is still healthy, so I insert another 20,000 documents to see the effect.
You can see that everything still works.
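For example, inserting through mongos again still succeeds, something like:

mongo 127.0.0.1:40000/friends
for (var i = 10000; i < 30000; i++) {
    db.frienduser.insert({ _id: i, name: "user" + i });
}
db.frienduser.count()   // 30000, writes keep working with shard11 down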
One thing to be aware of: with a replica set of three machines, only one server can be down. Once two of them are down, the third cannot be promoted from secondary to primary.
Pay attention to the replica set election rules here: when the primary goes down, the secondaries trigger an election, and the first node that receives votes from a majority of the replica set's members becomes the new primary. The most important property of replica set elections is that a majority of the set's original members must take part for the election to succeed. If your replica set has three members and two or three of them can reach each other, the set can elect a primary. If two members are offline, the remaining member stays a secondary.