
MongoDB Sharding Deployment Scenarios

First, Deployment Environment

    1. Five hosts:
      1. amongoshard01: 10.212.74.43
      2. amongoshard02: 10.212.84.4
      3. amongoshard03: 10.212.98.23
      4. amongoshard04: 10.212.46.5
      5. amongoshard05: 10.212.70.21
    2. Installation:
      1. CentOS 6.5
      2. mongodb-linux-x86_64-rhel62-3.0.2.tgz

Second, Deployment Scenarios

The goal of this scheme is to build two shards (shard1 and shard2) so that data is partitioned across them. Each shard is backed by a replica set consisting of one primary, two secondaries, and one arbiter. Host amongoshard01 opens ports 27017 and 30000 for the shard11 and mongos instances; amongoshard02 opens ports 27017, 27018, 27019, and 30000 for the shard12, shard13, config, and mongos instances; amongoshard03 opens ports 27017 and 30000 for the shard21 and mongos instances; amongoshard04 opens ports 27017, 27018, 27019, and 30000 for the shard22, shard23, config, and mongos instances; and amongoshard05 opens ports 27017, 27018, and 27019 for the shard14, shard24, and config instances.

shard11 (primary, takes writes: priority 2), shard12 (secondary, serves reads: priority 1), shard13 (arbiter), and shard14 (backup secondary: priority 0) form the replica set shard1, which serves as the first shard. shard21 (primary, takes writes: priority 2), shard22 (secondary, serves reads: priority 1), shard23 (arbiter), and shard24 (backup secondary: priority 0) form the replica set shard2, which serves as the second shard. The instances are deployed as follows:

1) On amongoshard01:

shard11: 10.212.74.43:27017

mongos: 10.212.74.43:30000

2) On amongoshard02:

shard12: 10.212.84.4:27017

shard13: 10.212.84.4:27018

config: 10.212.84.4:27019

mongos: 10.212.84.4:30000

3) On amongoshard03:

shard21: 10.212.98.23:27017

mongos: 10.212.98.23:30000

4) On amongoshard04:

shard22: 10.212.46.5:27017

shard23: 10.212.46.5:27018

config: 10.212.46.5:27019

mongos: 10.212.46.5:30000

5) On amongoshard05:

shard14: 10.212.70.21:27017

shard24: 10.212.70.21:27018

config: 10.212.70.21:27019

The general overview is as follows:

This deployment provides read/write separation and backup-based disaster recovery. The config servers are spread across different hosts (they could also run on dedicated machines), and each arbiter is co-located with a secondary (they could likewise be separated), so the cluster can keep serving requests no matter which single host goes down.
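For the read half of the read/write separation, a client connected through mongos must opt in to secondary reads via a read preference; a minimal mongo-shell sketch (the test.test namespace is a hypothetical example, not part of this deployment):

mongos> use test
mongos> db.getMongo().setReadPref('secondaryPreferred')   // prefer shard12/shard22, fall back to the primaries
mongos> db.test.find({_id: 1})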

Third, Deployment Implementation

    1. MongoDB Installation:

amongoshard01 (10.212.74.43):

--# cd /opt/software
--# tar -zxvf mongodb-linux-x86_64-rhel62-3.0.2.tgz
--# mv mongodb-linux-x86_64-rhel62-3.0.2 /data/mongodb
--# useradd mongodb
--# chown -R mongodb:mongodb /data/mongodb
--# su - mongodb
--$ mkdir -p /data/mongodb/data/shard11

amongoshard02 (10.212.84.4):

--# cd /opt/software
--# tar -zxvf mongodb-linux-x86_64-rhel62-3.0.2.tgz
--# mv mongodb-linux-x86_64-rhel62-3.0.2 /data/mongodb
--# useradd mongodb
--# chown -R mongodb:mongodb /data/mongodb
--# su - mongodb
--$ mkdir -p /data/mongodb/data/shard12
--$ mkdir -p /data/mongodb/data/shard13
--$ mkdir -p /data/mongodb/data/config

amongoshard03 (10.212.98.23):

--# cd /opt/software
--# tar -zxvf mongodb-linux-x86_64-rhel62-3.0.2.tgz
--# mv mongodb-linux-x86_64-rhel62-3.0.2 /data/mongodb
--# useradd mongodb
--# chown -R mongodb:mongodb /data/mongodb
--# su - mongodb
--$ mkdir -p /data/mongodb/data/shard21

amongoshard04 (10.212.46.5):

--# cd /opt/software
--# tar -zxvf mongodb-linux-x86_64-rhel62-3.0.2.tgz
--# mv mongodb-linux-x86_64-rhel62-3.0.2 /data/mongodb
--# useradd mongodb
--# chown -R mongodb:mongodb /data/mongodb
--# su - mongodb
--$ mkdir -p /data/mongodb/data/shard22
--$ mkdir -p /data/mongodb/data/shard23
--$ mkdir -p /data/mongodb/data/config

amongoshard05 (10.212.70.21):

--# cd /opt/software
--# tar -zxvf mongodb-linux-x86_64-rhel62-3.0.2.tgz
--# mv mongodb-linux-x86_64-rhel62-3.0.2 /data/mongodb
--# useradd mongodb
--# chown -R mongodb:mongodb /data/mongodb
--# su - mongodb
--$ mkdir -p /data/mongodb/data/shard14
--$ mkdir -p /data/mongodb/data/shard24
--$ mkdir -p /data/mongodb/data/config

2. MongoDB Configuration:

amongoshard01 (10.212.74.43):

--$ vi /data/mongodb/data/shard11.conf

shardsvr = true
replSet = shard1
port = 27017
dbpath = /data/mongodb/data/shard11
oplogSize = 100
logpath = /data/mongodb/data/shard11.log
logappend = true
maxConns = 10000
pidfilepath = /data/mongodb/data/shard11.pid
bind_ip = 10.212.74.43
rest = true
fork = true
noprealloc = true
directoryperdb = true

amongoshard02 (10.212.84.4):

--$ vi /data/mongodb/data/shard12.conf

shardsvr = true
replSet = shard1
port = 27017
dbpath = /data/mongodb/data/shard12
oplogSize = 100
logpath = /data/mongodb/data/shard12.log
logappend = true
maxConns = 10000
pidfilepath = /data/mongodb/data/shard12.pid
bind_ip = 10.212.84.4
rest = true
fork = true
noprealloc = true
directoryperdb = true

--$ vi /data/mongodb/data/shard13.conf

shardsvr = true
replSet = shard1
port = 27018
dbpath = /data/mongodb/data/shard13
oplogSize = 100
logpath = /data/mongodb/data/shard13.log
logappend = true
maxConns = 10000
pidfilepath = /data/mongodb/data/shard13.pid
bind_ip = 10.212.84.4
rest = true
fork = true
noprealloc = true
directoryperdb = true

--$ vi /data/mongodb/data/config.conf

dbpath = /data/mongodb/data/config
logpath = /data/mongodb/data/config.log
logappend = true
bind_ip = 10.212.84.4
port = 27019
fork = true

amongoshard03 (10.212.98.23):

--$ vi /data/mongodb/data/shard21.conf

shardsvr = true
replSet = shard2
port = 27017
dbpath = /data/mongodb/data/shard21
oplogSize = 100
logpath = /data/mongodb/data/shard21.log
logappend = true
maxConns = 10000
pidfilepath = /data/mongodb/data/shard21.pid
bind_ip = 10.212.98.23
rest = true
fork = true
noprealloc = true
directoryperdb = true

amongoshard04 (10.212.46.5):

--$ vi /data/mongodb/data/shard22.conf

shardsvr = true
replSet = shard2
port = 27017
dbpath = /data/mongodb/data/shard22
oplogSize = 100
logpath = /data/mongodb/data/shard22.log
logappend = true
maxConns = 10000
pidfilepath = /data/mongodb/data/shard22.pid
bind_ip = 10.212.46.5
rest = true
fork = true
noprealloc = true
directoryperdb = true

--$ vi /data/mongodb/data/shard23.conf

shardsvr = true
replSet = shard2
port = 27018
dbpath = /data/mongodb/data/shard23
oplogSize = 100
logpath = /data/mongodb/data/shard23.log
logappend = true
maxConns = 10000
pidfilepath = /data/mongodb/data/shard23.pid
bind_ip = 10.212.46.5
rest = true
fork = true
noprealloc = true
directoryperdb = true

--$ vi /data/mongodb/data/config.conf

dbpath = /data/mongodb/data/config
logpath = /data/mongodb/data/config.log
logappend = true
bind_ip = 10.212.46.5
port = 27019
fork = true

amongoshard05 (10.212.70.21):

--$ vi /data/mongodb/data/shard14.conf

shardsvr = true
replSet = shard1
port = 27017
dbpath = /data/mongodb/data/shard14
oplogSize = 100
logpath = /data/mongodb/data/shard14.log
logappend = true
maxConns = 10000
pidfilepath = /data/mongodb/data/shard14.pid
bind_ip = 10.212.70.21
rest = true
fork = true
noprealloc = true
directoryperdb = true

--$ vi /data/mongodb/data/shard24.conf

shardsvr = true
replSet = shard2
port = 27018
dbpath = /data/mongodb/data/shard24
oplogSize = 100
logpath = /data/mongodb/data/shard24.log
logappend = true
maxConns = 10000
pidfilepath = /data/mongodb/data/shard24.pid
bind_ip = 10.212.70.21
rest = true
fork = true
noprealloc = true
directoryperdb = true

--$ vi /data/mongodb/data/config.conf

dbpath = /data/mongodb/data/config
logpath = /data/mongodb/data/config.log
logappend = true
bind_ip = 10.212.70.21
port = 27019
fork = true

3. MongoDB Startup:

Start the mongod --configsvr instances on amongoshard02, amongoshard04, and amongoshard05 first: each mongos needs all three config servers reachable before it will start.

amongoshard01 (10.212.74.43):

--$ cd /data/mongodb/bin
--$ ./mongod --shardsvr -f /data/mongodb/data/shard11.conf
--$ ./mongos --configdb 10.212.84.4:27019,10.212.46.5:27019,10.212.70.21:27019 --port 30000 --chunkSize 5 --logpath /data/mongodb/data/mongos.log --logappend --fork

amongoshard02 (10.212.84.4):

--$ cd /data/mongodb/bin
--$ ./mongod --shardsvr -f /data/mongodb/data/shard12.conf
--$ ./mongod --shardsvr -f /data/mongodb/data/shard13.conf
--$ ./mongod --configsvr -f /data/mongodb/data/config.conf
--$ ./mongos --configdb 10.212.84.4:27019,10.212.46.5:27019,10.212.70.21:27019 --port 30000 --chunkSize 5 --logpath /data/mongodb/data/mongos.log --logappend --fork

amongoshard03 (10.212.98.23):

--$ cd /data/mongodb/bin
--$ ./mongod --shardsvr -f /data/mongodb/data/shard21.conf
--$ ./mongos --configdb 10.212.84.4:27019,10.212.46.5:27019,10.212.70.21:27019 --port 30000 --chunkSize 5 --logpath /data/mongodb/data/mongos.log --logappend --fork

amongoshard04 (10.212.46.5):

--$ cd /data/mongodb/bin
--$ ./mongod --shardsvr -f /data/mongodb/data/shard22.conf
--$ ./mongod --shardsvr -f /data/mongodb/data/shard23.conf
--$ ./mongod --configsvr -f /data/mongodb/data/config.conf
--$ ./mongos --configdb 10.212.84.4:27019,10.212.46.5:27019,10.212.70.21:27019 --port 30000 --chunkSize 5 --logpath /data/mongodb/data/mongos.log --logappend --fork

amongoshard05 (10.212.70.21):

--$ cd /data/mongodb/bin
--$ ./mongod --shardsvr -f /data/mongodb/data/shard14.conf
--$ ./mongod --shardsvr -f /data/mongodb/data/shard24.conf
--$ ./mongod --configsvr -f /data/mongodb/data/config.conf

4. Initialize the replica sets:

First replica set:

--# su - mongodb
--$ cd /data/mongodb/bin
--$ ./mongo 10.212.74.43:27017

> use admin
> config = {_id: 'shard1', members: [
...   {_id: 0, host: '10.212.74.43:27017', priority: 2},
...   {_id: 1, host: '10.212.84.4:27017', priority: 1},
...   {_id: 2, host: '10.212.84.4:27018', arbiterOnly: true},
...   {_id: 3, host: '10.212.70.21:27017', priority: 0}
... ]};
> rs.initiate(config)
shard1:PRIMARY> rs.status()
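To confirm that every member reached its intended role, the member states can be read out of rs.status(); a minimal sketch:

shard1:PRIMARY> rs.status().members.forEach(function (m) {
...   print(m.name + ' -> ' + m.stateStr)   // expect one PRIMARY, two SECONDARY, one ARBITER
... })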

Second replica set (note: connect to a member of shard2, not shard1):

--# su - mongodb
--$ cd /data/mongodb/bin
--$ ./mongo 10.212.98.23:27017

> use admin
> config = {_id: 'shard2', members: [
...   {_id: 0, host: '10.212.98.23:27017', priority: 2},
...   {_id: 1, host: '10.212.46.5:27017', priority: 1},
...   {_id: 2, host: '10.212.46.5:27018', arbiterOnly: true},
...   {_id: 3, host: '10.212.70.21:27018', priority: 0}
... ]};
> rs.initiate(config)
shard2:PRIMARY> rs.status()

5. Configure sharding:

Log in to any of the mongos instances:

--# su - mongodb
--$ cd /data/mongodb/bin
--$ ./mongo 10.212.74.43:30000/admin

mongos> db.runCommand({addshard: "shard1/10.212.74.43:27017,10.212.84.4:27017,10.212.84.4:27018,10.212.70.21:27017", name: "shard1", maxSize: 2048000});
mongos> db.runCommand({addshard: "shard2/10.212.98.23:27017,10.212.46.5:27017,10.212.46.5:27018,10.212.70.21:27018", name: "shard2", maxSize: 2048000});
mongos> db.runCommand({listshards: 1})
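sh.status() presents the same information more readably, listing each registered shard and, once collections are sharded, the chunk distribution:

mongos> sh.status()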

6. Database sharding and collection sharding:

--# su - mongodb
--$ cd /data/mongodb/bin
--$ ./mongo 10.212.74.43:30000/admin

mongos> db.runCommand({enablesharding: "<dbname>"});
mongos> db.runCommand({shardcollection: "<namespace>", key: <shardkeypatternobject>});
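Filled in for a hypothetical database testdb whose collection test is sharded on _id (names chosen purely for illustration):

mongos> db.runCommand({enablesharding: "testdb"});
mongos> db.runCommand({shardcollection: "testdb.test", key: {_id: 1}});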

Attention:

-- Different collections in the same database may be stored on different shards;

-- A single collection still lives entirely on one shard by default; to spread one collection across shards, it must be sharded explicitly with shardcollection;

-- Sharding a collection automatically creates an index on the shard key (the user can also create it in advance);

-- A sharded collection can have only one unique index, and it must be on the shard key; other unique indexes are not allowed (see the sketch after this list).
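A quick illustration of the unique-index rule against the hypothetical testdb.test collection sharded on _id from above (the ts and email fields are invented for the example):

mongos> use testdb
mongos> db.test.createIndex({_id: 1, ts: 1}, {unique: true})   // allowed: the index is prefixed by the shard key
mongos> db.test.createIndex({email: 1}, {unique: true})        // fails: a unique index must include the shard key as a prefix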

Fourth, Testing and Summary

Use a statement such as for (var i = 1; i <= 100000; i++) { db.test.insert({_id: i}) } to load test data into the environment, then check how the data was distributed across the shards.
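One way to load the data and inspect the distribution (assuming the hypothetical testdb.test collection from section 6):

mongos> use testdb
mongos> for (var i = 1; i <= 100000; i++) { db.test.insert({_id: i}) }
mongos> db.test.getShardDistribution()   // per-shard document and chunk counts
mongos> sh.status()                      // chunk ranges assigned to shard1 and shard2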

Problems encountered during deployment and how to resolve them:

Issue 1: If reading data from a secondary fails with an error, run db.getMongo().setSlaveOk() on the secondary and query again. On MongoDB 3.0 and later, the effect of db.getMongo().setSlaveOk() or rs.slaveOk() is lost once you log out and back in, and the error reappears; to make it permanent, edit the file ~/.mongorc.js and add rs.slaveOk();
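~/.mongorc.js is ordinary mongo-shell JavaScript executed at every shell startup, so the permanent fix is a one-line file on whichever host is used to read from secondaries:

// ~/.mongorc.js -- run automatically by each new mongo shell session
rs.slaveOk();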

Issue 2: Pay attention to the maxSize value specified when configuring sharding. If data that should be sharded later stops being distributed, the stored data may have exceeded maxSize; adjusting the value fixes it. maxSize (in MB) is the amount of disk space a shard is allowed to use; data beyond that limit is not balanced onto the shard and accumulates on the primary shard instead. It can be set or changed with either of the following statements:

The first (when adding the shard):

> db.runCommand({addshard: "shardIP:port", maxSize: 20480})

The second (changing it afterwards):

> use config
> db.shards.update({_id: "[shardname]"}, {$set: {maxSize: 20480}})
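To verify the new value took effect (run through any mongos):

mongos> use config
mongos> db.shards.find()   // each shard document should show the updated maxSize field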
