MongoDB Sharding Configuration

Overview: mongos is the routing process. Applications connect to mongos, which routes each query to the appropriate shard; the default listening port is 27017. The config servers provide the routing-table service, each holding the routing information for all chunks. The shards store the actual data, and each shard can be a replica set.

Deploying a sharded cluster
# When configuring MongoDB sharding, using hostnames (bound in /etc/hosts) instead of IP addresses works very well: servers can be replaced or migrated directly, and it does not matter if an IP address changes.
# For example, when migrating a config server, you only need to update that config server's hosts binding on the mongos servers.
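As an illustration of the hosts-based approach, the mongos servers' /etc/hosts could map stable names to the config servers (the cfg1-cfg3 names below are made up for this sketch):

192.168.10.1  cfg1
192.168.10.2  cfg2
192.168.10.3  cfg3

mongos would then be started with --configdb cfg1:20000,cfg2:20000,cfg3:20000, so migrating a config server only requires editing these three lines.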
Step 1: Start the config servers
/usr/bin/mongod --configsvr --dbpath /data/mongodb/config/ --logpath /data/mongodb/config/log/configdb.log --port 20000
# In production you normally start 3 config servers for redundancy; if one of the three fails, the config server cluster becomes read-only.
Step 2: Start mongos
/usr/bin/mongos --configdb 192.168.10.1:20000,192.168.10.2:20000,192.168.10.3:20000 --logpath /data/mongodb/mongos/mongos.log
# List every config server of the cluster in --configdb.

Step 3: Start the shard mongod processes
/usr/bin/mongod --shardsvr --replSet rs1 --dbpath /data/mongodb/data/ --logpath /tmp/sharda.log --port 10000
/usr/bin/mongod --shardsvr --replSet rs2 --dbpath /data/mongodb/data/ --logpath /tmp/sharda.log --port 10000
/usr/bin/mongod --shardsvr --replSet rs3 --dbpath /data/mongodb/data/ --logpath /tmp/sharda.log --port 10000 --directoryperdb
# A shard is a mongo replica set; --replSet rs1 is the name of the replica set. In production, even if a shard is a single server, it is recommended to set this up to make later expansion easier.
# The three commands above start three replica sets, named rs1, rs2 and rs3, on three different servers.
# You can also enable the --directoryperdb option, which puts each database in its own subdirectory under the dbpath.

A shard can also be a plain standalone mongo server:
/usr/bin/mongod --shardsvr --dbpath /data/mongodb/data/ --logpath /tmp/sharda.log --port 10000
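Before step 4 each replica set must also be initiated, otherwise adding it as a shard will fail. A minimal single-member sketch for rs1 (repeat analogously for rs2 and rs3; the member list is an assumption based on the hosts and ports used above):

mongo 192.168.10.1:10000
rs.initiate({_id: "rs1", members: [{_id: 0, host: "192.168.10.1:10000"}]})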
Step 4: Add the shards via mongos
mongo 127.0.0.1/admin    # connect to the mongos with the mongo shell and switch to the admin database
sh.addShard("rs1/192.168.10.1:10000")
sh.addShard("rs2/192.168.10.2:10000")
sh.addShard("rs3/192.168.10.3:10000")
# Each command above adds a replica set as a shard, so the three replica sets become three shards. Each replica set currently has only one member;
# in fact, a replica set used as a shard may have only one or two members, it does not need at least three.
sh.addShard("shard4/10.26.79.89:27017,10.26.165.157:27017,10.26.165.112:27017")   # a replica set can also be added by listing several of its members
sh.addShard("192.168.10.1:10000")    # add a standalone (non-replica-set) mongod as a shard
db.runCommand({addShard: "127.0.0.1:27020", allowLocal: 1})
# When adding a shard that runs on the local machine, the allowLocal: 1 option may be required.

Step 5: Enable sharding for a database
sh.enableSharding("test")
# This only marks the test database as allowed to shard; nothing is actually sharded yet.

Step 6: Shard a collection
sh.shardCollection("records.people", {"zipcode": 1, "name": 1})
sh.shardCollection("people.addresses", {"state": 1, "_id": 1})
sh.shardCollection("assets.chairs", {"type": 1, "_id": 1})
# This shards a given collection of a given database; the shard key can use a single field or multiple fields.
# The shard key automatically becomes an indexed field of the collection.
# Which document went to which shard is known only to the config servers;
# unlike Atlas, you cannot tell explicitly which row lives in which shard.
# To shard a collection that already contains data, first build an index, then shard on the indexed field(s) as the shard key:
db.alerts.ensureIndex({_id: "hashed"})
sh.shardCollection("events.alerts", {"_id": "hashed"})
db.t3.ensureIndex({age: 1}, {background: true})
# The first command builds a hashed index on the alerts collection's _id field, generating a hash value per document, so the collection can then be sharded on the hashed key.
# A hashed shard key solves the write-scaling problem that some shard keys otherwise have.
# The _id field exists in every MongoDB document by default.
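After sharding a collection, you can check how its documents are spread across the shards with the standard helper; for example, for the events.alerts collection sharded above:

use events
db.alerts.getShardDistribution()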
The correct sharding procedure
sh.enableSharding("mexuegrowth")    # first enable sharding on the database
db.growth_user_record.ensureIndex({"UserId": 1, "RecordID": 1}, {name: "_idx_userid_recordid", background: true})    # build the shard-key index on the still-empty collection
sh.shardCollection("mexuegrowth.growth_user_record", {"UserId": 1, "RecordID": 1})    # enable sharding on the empty collection
# Finally, import the data. Done in this order, sharding is guaranteed to succeed no matter how large the data volume.
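For the final import step, one option is mongoimport pointed at the mongos; a sketch (the 127.0.0.1:27017 address and the file path are placeholders):

mongoimport --host 127.0.0.1:27017 --db mexuegrowth --collection growth_user_record --file /tmp/growth_user_record.json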
How to select a shard key
A shard key needs high cardinality, that is, many distinct values, so that data can be split and migrated easily. It should fit the application's query patterns as far as possible, so that mongos can route a query directly to a single shard. It also needs enough randomness that the insert requests of a given time period are not all concentrated on a single shard, which would make one shard's write speed the bottleneck of the whole cluster. ObjectId as a shard key illustrates the randomness issue: an ObjectId is composed of a process id, a timestamp and other factors, so the values of a given time period are relatively concentrated. High randomness has a side effect, though: it weakens query isolation, since queries can no longer target one shard. A hashed shard key can be used to increase randomness.
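As a sketch of the trade-off (the test.clicks namespace and createdAt field are made up; in practice you would pick one of the two):

sh.shardCollection("test.clicks", {createdAt: 1})    # ranged key: monotonically increasing, inserts concentrate on one shard, but range queries target few shards
sh.shardCollection("test.clicks", {_id: "hashed"})   # hashed key: inserts spread evenly, but range queries must hit all shards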

How to view shard information
Connect to a mongos and run:
sh.status()
sh.status({verbose: true})    # for more detail

Sample output:
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
  { "_id" : "shard0000", "host" : "m0.example.net:30001" }
  { "_id" : "shard0001", "host" : "m3.example2.net:50000" }
databases:
  { "_id" : "admin", "partitioned" : false, "primary" : "config" }
  { "_id" : "contacts", "partitioned" : true, "primary" : "shard0000" }
    foo.contacts
      shard key: { "zip" : 1 }
      chunks:
        shard0001    2
        shard0002    3
        shard0000    2
      { "zip" : { "$minKey" : 1 } } -->> { "zip" : 56000 } on : shard0001 { "t" : 2, "i" : 0 }
      { "zip" : 56000 } -->> { "zip" : 56800 } on : shard0002 { "t" : 3, "i" : 4 }
      { "zip" : 56800 } -->> { "zip" : 57088 } on : shard0002 { "t" : 4, "i" : 2 }
      { "zip" : 57088 } -->> { "zip" : 57500 } on : shard0002 { "t" : 4, "i" : 3 }
      { "zip" : 57500 } -->> { "zip" : 58140 } on : shard0001 { "t" : 4, "i" : 0 }
      { "zip" : 58140 } -->> { "zip" : 59000 } on : shard0000 { "t" : 4, "i" : 1 }
      { "zip" : 59000 } -->> { "zip" : { "$maxKey" : 1 } } on : shard0000 { "t" : 3, "i" : 3 }
  { "_id" : "test", "partitioned" : false, "primary" : "shard0000" }

Back up the cluster metadata
# The balancing here refers to keeping the sharding metadata stored in the config database in sync, i.e. the data synchronization between the config servers.
Step 1: Disable the balancer process.
sh.getBalancerState()        # check the balancer state
sh.setBalancerState(false)   # stop the balancer

Step 2: Shut down the config server.

Step 3: Back up the data folder.

Step 4: Restart the config server.
Step 5: Enable the balancer process.
sh.setBalancerState(true)    # start the balancer again
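For step 3, besides copying the data folder of the stopped config server, a logical dump of the metadata is also possible; a sketch with mongodump (output path made up, host and port as configured above):

mongodump --host 192.168.10.1 --port 20000 --db config --out /backup/configdb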


View balance status
You can view the current balancer status with the following commands. First connect to any mongos:

use config
db.locks.find({_id: "balancer"}).pretty()
{
    "_id" : "balancer",
    "process" : "mongos0.example.net:1292810611:1804289383",
    "state" : 2,
    "ts" : ObjectId("4d0f872630c42d1978be8a2e"),
    "when" : "Mon Dec 20 11:41:10 GMT-0500 (EST)",
    "who" : "mongos0.example.net:1292810611:1804289383:Balancer:846930886",
    "why" : "doing balance round"
}

state = 2 indicates that balancing is in progress; before version 2.0 this value was 1.
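A quicker check is also available through the shell helpers:

sh.getBalancerState()      # true if the balancer is enabled
sh.isBalancerRunning()     # true if a balancing round is running right now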
Configure the balancing time window
Through the balancing time window you can restrict balancing to a certain period of the day; outside that window no balancing takes place. Connect to any mongos:

use config
db.settings.update({_id: "balancer"}, {$set: {activeWindow: {start: "23:00", stop: "6:00"}}}, true)

This setting allows balancing to run only between 23:00 and 6:00.
You can also cancel the time window setting:

use config
db.settings.update({_id: "balancer"}, {$unset: {activeWindow: true}})
Modify the chunk size
This is a global parameter. The default is 64MB. Small chunks keep the data volumes of the different shards better balanced, but cause more migrations; large chunks reduce migrations, but leave the data volumes across shards less even. To modify the chunk size, first connect to any mongos:

db.settings.save({_id: "chunksize", value: 256})

The unit is MB.
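To check the current value, query the same settings collection:

use config
db.settings.find({_id: "chunksize"})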
When does balancing run automatically?
Every mongos process can launch a balancing round, but only one balancer runs at any given time, because it must first win this lock:

db.locks.find({_id: "balancer"})

The balancer migrates only one chunk at a time.
Set the maximum storage size of a shard
There are two ways. The first is to specify the maxSize parameter (in MB) when adding the shard:

db.runCommand({addShard: "example.net:34008", maxSize: 125})

The second is to modify the setting while the cluster is running:

use config
db.shards.update({_id: "shard0000"}, {$set: {maxSize: 250}})
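To inspect the result, query the shards collection of the config database:

use config
db.shards.find()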
Remove a shard
Connect to any mongos.
Step 1: Confirm that the balancer is on.
Step 2: Run the command:

db.runCommand({removeShard: "mongodb0"})

# mongodb0 is the name of the shard to remove. The balancer process will then begin migrating the chunks on the shard being removed onto the other shards.
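Removal is asynchronous: running the same command again reports the number of remaining chunks while the shard drains. If the shard is the primary shard of any database, that database must also be moved before the final removal; a sketch (the test database and shard0001 target are assumptions for illustration):

db.runCommand({removeShard: "mongodb0"})                 # repeat to watch the "remaining" counts
db.runCommand({movePrimary: "test", to: "shard0001"})    # move a database whose primary shard is mongodb0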
