MongoDB Sharding Mode


[Figure 1]

Eight machines and four shards, with their respective role assignments:

[Figure 2: role assignments]

Note: cells of the same color in the table above represent the same server. Each member could in fact run on its own machine, but the arbiter (quorum node), config server, and mongos (router) consume few resources and can share a host. Ideally a server runs only one mongod, so each of our servers runs a single mongod plus one or two other members. Be careful: placing more than one member of the same replica set on a single server provides no redundancy.


Shard1

On 10.10.6.48 and 10.10.6.46:

sudo mkdir -p /var/soft/data/shard1

sudo mkdir -p /var/soft/log

sudo /var/soft/mongodb2.2/bin/mongod --shardsvr --replSet shard1 --bind_ip 0.0.0.0 --port 27040 --dbpath /var/soft/data/shard1/ --logpath /var/soft/log/shard1.log --logappend --nojournal --oplogSize 4096 --fork --noprealloc
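The same options can be kept in a config file instead of a long command line. A minimal sketch, assuming a hypothetical path /etc/mongod-shard1.conf; the values mirror the flags above, in the ini-style format used by MongoDB 2.2:

```ini
# /etc/mongod-shard1.conf (hypothetical path) -- equivalent to the flags above
shardsvr = true
replSet = shard1
bind_ip = 0.0.0.0
port = 27040
dbpath = /var/soft/data/shard1/
logpath = /var/soft/log/shard1.log
logappend = true
nojournal = true
oplogSize = 4096
fork = true
noprealloc = true
```

It would then be started with sudo /var/soft/mongodb2.2/bin/mongod -f /etc/mongod-shard1.conf.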

On 10.10.6.90:

sudo mkdir -p /var/soft/data/arbiter1/

sudo mkdir -p /var/soft/log

/var/soft/mongodb2.2/bin/mongod --shardsvr --replSet shard1 --bind_ip 0.0.0.0 --port 27000 --dbpath /var/soft/data/arbiter1/ --logpath /var/soft/log/arbiter1.log --logappend --nojournal --oplogSize 4096 --fork

Execute on 10.10.6.46:

config = {_id: 'shard1', members: [{_id: 0, host: 'mongodb46:27040'}, {_id: 1, host: 'mongodb48:27040'}, {_id: 2, host: 'mongodb90:27000', arbiterOnly: true}]}

rs.initiate(config);

Shard2

On 10.10.6.90 and 10.10.6.91:

sudo mkdir -p /var/soft/data/shard2

sudo mkdir -p /var/soft/log

sudo /var/soft/mongodb2.2/bin/mongod --shardsvr --replSet shard2 --bind_ip 0.0.0.0 --port 27050 --dbpath /var/soft/data/shard2/ --logpath /var/soft/log/shard2.log --logappend --nojournal --oplogSize 4096 --fork --noprealloc

On 10.10.6.92:

sudo mkdir -p /var/soft/data/arbiter2/

sudo mkdir -p /var/soft/log

/var/soft/mongodb2.2/bin/mongod --shardsvr --replSet shard2 --bind_ip 0.0.0.0 --port 27000 --dbpath /var/soft/data/arbiter2/ --logpath /var/soft/log/arbiter2.log --logappend --nojournal --oplogSize 4096 --fork

Execute on 10.10.6.91:

config = {_id: 'shard2', members: [{_id: 0, host: 'mongodb90:27050'}, {_id: 1, host: 'mongodb91:27050'}, {_id: 2, host: 'mongodb92:27000', arbiterOnly: true}]}

rs.initiate(config);

Shard3

On 10.10.6.92 and 10.10.6.93:

sudo mkdir -p /var/soft/data/shard3

sudo mkdir -p /var/soft/log

sudo /var/soft/mongodb2.2/bin/mongod --shardsvr --replSet shard3 --bind_ip 0.0.0.0 --port 27060 --dbpath /var/soft/data/shard3/ --logpath /var/soft/log/shard3.log --logappend --nojournal --oplogSize 4096 --fork --noprealloc

On 10.10.6.94:

sudo mkdir -p /var/soft/data/arbiter3/

sudo mkdir -p /var/soft/log

/var/soft/mongodb2.2/bin/mongod --shardsvr --replSet shard3 --bind_ip 0.0.0.0 --port 27000 --dbpath /var/soft/data/arbiter3/ --logpath /var/soft/log/arbiter3.log --logappend --nojournal --oplogSize 4096 --fork

Execute on 10.10.6.93:

config = {_id: 'shard3', members: [{_id: 0, host: 'mongodb92:27060'}, {_id: 1, host: 'mongodb93:27060'}, {_id: 2, host: 'mongodb94:27000', arbiterOnly: true}]}

rs.initiate(config);

Shard4

On 10.10.6.94 and 10.10.6.95:

sudo mkdir -p /var/soft/data/shard4

sudo mkdir -p /var/soft/log

sudo /var/soft/mongodb2.2/bin/mongod --shardsvr --replSet shard4 --bind_ip 0.0.0.0 --port 27070 --dbpath /var/soft/data/shard4/ --logpath /var/soft/log/shard4.log --logappend --nojournal --oplogSize 4096 --fork --noprealloc

On 10.10.6.46:

sudo mkdir -p /var/soft/data/arbiter4/

sudo mkdir -p /var/soft/log

/var/soft/mongodb2.2/bin/mongod --shardsvr --replSet shard4 --bind_ip 0.0.0.0 --port 27000 --dbpath /var/soft/data/arbiter4/ --logpath /var/soft/log/arbiter4.log --logappend --nojournal --oplogSize 4096 --fork

Execute on 10.10.6.95:

config = {_id: 'shard4', members: [{_id: 0, host: 'mongodb94:27070'}, {_id: 1, host: 'mongodb95:27070'}, {_id: 2, host: 'mongodb46:27000', arbiterOnly: true}]}

rs.initiate(config);
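The directory-creation steps repeated on every host above can be scripted. A minimal sketch, using /tmp/soft as a scratch prefix for illustration (the guide itself uses /var/soft, which needs sudo):

```shell
#!/bin/sh
# Create the data and log directories used throughout this guide.
# PREFIX is /tmp/soft here so it runs without root; the guide uses /var/soft.
PREFIX=/tmp/soft
for d in shard1 shard2 shard3 shard4 arbiter1 arbiter2 arbiter3 arbiter4 config; do
  mkdir -p "$PREFIX/data/$d"
done
mkdir -p "$PREFIX/log/config" "$PREFIX/log/mongos"
```

In practice each host only needs the directories for the members it actually runs.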

// Configure the config servers:

On 10.10.6.48, 10.10.6.93, and 10.10.6.95:

sudo mkdir -p /var/soft/data/config

sudo mkdir -p /var/soft/log/config

/var/soft/mongodb2.2/bin/mongod --bind_ip 0.0.0.0 --fork --configsvr --port 20000 --dbpath /var/soft/data/config --logpath /var/soft/log/config/config.log --logappend

// Add mongos

sudo mkdir -p /var/soft/log/mongos

10.10.6.90:

/var/soft/mongodb2.2/bin/mongos --port 30000 --configdb mongodb48:20000,mongodb93:20000,mongodb95:20000 --logpath /var/soft/log/mongos/mongos.log --logappend --fork

10.10.6.48:

/var/soft/mongodb2.2/bin/mongos --port 30000 --configdb mongodb48:20000,mongodb93:20000,mongodb95:20000 --logpath /var/soft/log/mongos/mongos.log --logappend --fork

10.10.6.92:

/var/soft/mongodb2.2/bin/mongos --port 30000 --configdb mongodb48:20000,mongodb93:20000,mongodb95:20000 --logpath /var/soft/log/mongos/mongos.log --logappend --fork

On 10.10.6.90:

/var/soft/mongodb2.2/bin/mongo --port 30000

Db.runcommand ({addshard: "shard1/mongodb46:27040,mongodb48:27040", Name: "Shard1", maxsize:504800});

Db.runcommand ({addshard: "shard2/mongodb90:27050,mongodb91:27050", Name: "Shard2", maxsize:504800});

Db.runcommand ({addshard: "shard3/mongodb92:27060,mongodb93:27060", Name: "Shard3", maxsize:504800});

Db.runcommand ({addshard: "shard4/mongodb94:27070,mongodb95:27070", Name: "Shard4", maxsize:504800});

>db.runcommand ({listshards:1})

If all four shards are listed, sharding has been configured successfully.

db.printShardingStatus();

Assign databases to shards per the developers' requirements:

/var/soft/mongodb2.2/bin/mongos --port 30000 --configdb mongodb48:20000,mongodb93:20000,mongodb95:20000 --logpath /var/soft/log/mongos/mongos.log --logappend --fork

Db.runcommand ({moveprimary: "Recsys0", To: "Shard1"})

Db.runcommand ({moveprimary: "Recsys1", To: "Shard2"})

Db.runcommand ({moveprimary: "Recsys2", To: "Shard3"})

Db.runcommand ({moveprimary: "Recsys3", To: "Shard4"})

Shard Settings:

Since we shard manually rather than using auto-sharding, there is no need to run a command such as db.runCommand({enableSharding: "<database>"}). We simply create four databases on four different shards; which database a given read or write goes to is decided by the application.

Create each database while logged in through mongos; once it is created, mongos decides which shard it is assigned to. If you are not satisfied with the automatic assignment, you can move the database to another shard with a command.

For example, suppose I create a database named recsys0 through mongos. Running db.printShardingStatus() shows which shard recsys0 landed on. If it was placed on shard4 but you want it on shard1, run db.runCommand({movePrimary: "recsys0", to: "shard1"}) and the database will move from shard4 to shard1. Verify with db.printShardingStatus() again.
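As a sketch of what the application-side routing mentioned above might look like: the hashing scheme here is hypothetical, not from the original setup, but it shows the idea of mapping a numeric key to one of the four manually placed databases.

```shell
# Hypothetical application-side routing: map a numeric user id to one of the
# four manually placed databases (recsys0..recsys3, one per shard).
route_db() {
  echo "recsys$(( $1 % 4 ))"
}

route_db 5   # prints recsys1
route_db 8   # prints recsys0
```

A real application would do the equivalent in its own language and then open a connection to the mongos with that database selected.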

Maintenance aspects:

Each shard is a replica set, so problems are handled the same way as with an ordinary replica set.

Problems encountered:

The recsys1 database was clearly configured on shard2, yet shard1 was also receiving a small amount of writes, which was odd. So on shard1 we ran mongosniff --source NET eth0 --port 27040 to capture the traffic and see where the writes to recsys1 came from. They all originated from a single mongos; the other mongos instances were normal. Connecting through that mongos and selecting recsys1 landed on shard1, but db.printShardingStatus() showed nothing abnormal. We suspect a bug; shutting down that mongos resolved the problem.

This article is from the "Zhangdh Open Space" blog; please keep the source: http://linuxblind.blog.51cto.com/7616603/1709791
