Building a high-availability MongoDB sharded cluster with replica sets


1 Logical architecture

1.1 Logical architecture diagram

1.2 Component Description

mongos (query routers): handle client connections, route each operation to the appropriate shard(s), and merge the results. A cluster can run multiple mongos instances to spread client load (load balancing).

Config servers: store the cluster's metadata (for example, which shard holds which data), which the query routers use to decide which shards to send each operation to. Starting with version 3.2, the config servers can be deployed as a replica set.

Shards: the data nodes, which store the data and perform the computation. To ensure high availability and data safety, each shard in a production environment should itself be a replica set.

2 Server Planning

2.1 IP and Port planning

2.2 Linux Directory Planning

3 Cluster construction

3.1 Preparation work

1. Download the latest 3.2.x release of MongoDB for your server's OS from: https://www.mongodb.com/download-center?jmp=nav#community

2. Following the directory plan in section 2.2, create the corresponding directories on any one of the 6 machines, and extract the downloaded MongoDB into /data01/project name/mongodb/ (this directory will be copied to the other machines later);

3. Execute the following commands to create the keyfile:

a) openssl rand -base64 741 > /data01/project name/mongodb/keyfile/keyfile

b) chmod 600 /data01/project name/mongodb/keyfile/keyfile
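mongod refuses to use a keyfile that is accessible to group or world, and it must be readable by the owner, so 600 is the conventional mode. A minimal sketch of the two steps above (using /tmp/demo-keyfile as an illustrative path; in practice use the keyfile directory planned in section 2.2):

```shell
# Generate 741 random bytes, base64-encoded, as the cluster keyfile.
# /tmp/demo-keyfile is an illustrative path for this sketch.
keyfile=/tmp/demo-keyfile
openssl rand -base64 741 > "$keyfile"

# mongod requires that the keyfile have no group/world permissions;
# 600 (owner read/write) satisfies this.
chmod 600 "$keyfile"

# Show the resulting permission bits.
stat -c '%a' "$keyfile"
```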

4. Following the port plan in section 2.1, execute the following on all 6 machines in turn to open the appropriate ports:

vi /etc/sysconfig/iptables

-A INPUT -m state --state NEW -m tcp -p tcp --dport 17017 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 27017 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 37017 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 47017 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 57017 -j ACCEPT

Then reload the rules: service iptables restart

3.2 Parameter configuration

3.2.1 Config server configuration

Create a new configsvr.conf file in the config directory planned in section 2.2, with the following contents:

dbpath=/data01/project name/mongodb/data/configsvr
configsvr=true
port=27017
logpath=/data01/project name/mongodb/logs/configsvr.log
logappend=true
fork=true
replSet=configrs
keyFile=/data01/project name/mongodb/keyfile/keyfile
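MongoDB 3.2 also accepts YAML configuration files; for reference, the same settings could be expressed as (a sketch, with the paths as planned in section 2.2):

```yaml
storage:
  dbPath: /data01/project name/mongodb/data/configsvr
sharding:
  clusterRole: configsvr
replication:
  replSetName: configrs
net:
  port: 27017
systemLog:
  destination: file
  path: /data01/project name/mongodb/logs/configsvr.log
  logAppend: true
processManagement:
  fork: true
security:
  keyFile: /data01/project name/mongodb/keyfile/keyfile
```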

3.2.2 Route server configuration

Create a new mongos.conf file in the config directory planned in section 2.2, with the following contents:

configdb=configrs/c1:27017,c2:27017,c3:27017
port=17017
chunkSize=5
logpath=/data01/project name/mongodb/logs/mongos.log
logappend=true
fork=true
keyFile=/data01/project name/mongodb/keyfile/keyfile

(Because the config servers form the replica set configrs, the configdb value must be prefixed with that replica set name.)

3.2.3 Shard Configuration

Create new shard1.conf, shard2.conf, and shard3.conf files in the config directory planned in section 2.2, with the following contents:

shard1.conf:

dbpath=/data01/project name/mongodb/data/shard1
shardsvr=true
replSet=shard1
port=37017
oplogSize=100
logpath=/data01/project name/mongodb/logs/shard1.log
logappend=true
fork=true
keyFile=/data01/project name/mongodb/keyfile/keyfile

shard2.conf:

dbpath=/data01/project name/mongodb/data/shard2
shardsvr=true
replSet=shard2
port=47017
oplogSize=100
logpath=/data01/project name/mongodb/logs/shard2.log
logappend=true
fork=true
keyFile=/data01/project name/mongodb/keyfile/keyfile

shard3.conf:

dbpath=/data01/project name/mongodb/data/shard3
shardsvr=true
replSet=shard3
port=57017
oplogSize=100
logpath=/data01/project name/mongodb/logs/shard3.log
logappend=true
fork=true
keyFile=/data01/project name/mongodb/keyfile/keyfile
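The three shard files differ only in their index and port, so they can also be generated in a loop to avoid copy-paste mistakes. A sketch (it writes to /tmp/demo-config for illustration, and derives the ports 37017/47017/57017 planned in section 2.1):

```shell
# Generate shard1.conf, shard2.conf, shard3.conf from one template.
# /tmp/demo-config is an illustrative output directory for this sketch.
outdir=/tmp/demo-config
mkdir -p "$outdir"

for i in 1 2 3; do
  # Ports planned in section 2.1: shard1=37017, shard2=47017, shard3=57017.
  port=$(( (i + 2) * 10000 + 7017 ))
  cat > "$outdir/shard$i.conf" <<EOF
dbpath=/data01/project name/mongodb/data/shard$i
shardsvr=true
replSet=shard$i
port=$port
oplogSize=100
logpath=/data01/project name/mongodb/logs/shard$i.log
logappend=true
fork=true
keyFile=/data01/project name/mongodb/keyfile/keyfile
EOF
done

ls "$outdir"
```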

At this point all of the configuration files are in place; copy the entire /data01/project name/mongodb directory to the remaining 5 machines with scp.
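Assuming the other five machines are reachable as c2, c3, s1, s2, and s3 (hostnames taken from the cluster plan above), the copy can be scripted; shown here as a dry run that only generates and prints the commands:

```shell
# Generate (but do not run) the scp command for each remaining machine.
# Hostnames c2 c3 s1 s2 s3 are assumptions based on the cluster plan;
# run the generated script (or drop the file and execute directly) to copy.
script=/tmp/demo-scp.sh
: > "$script"
for host in c2 c3 s1 s2 s3; do
  echo "scp -r '/data01/project name/mongodb' $host:'/data01/project name/'" >> "$script"
done
cat "$script"
```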

3.2.4 Starting the config servers and creating a user

Start the config servers by running the following command on C1, C2, and C3:

/data01/project name/mongodb/bin/mongod -f /data01/project name/mongodb/config/configsvr.conf

Then connect on C1: /data01/project name/mongodb/bin/mongo --port 27017

Configure the config servers as a replica set:

> use admin

> rs.initiate({_id: "configrs", configsvr: true, members: [{_id: 0, host: "c1:27017"}, {_id: 1, host: "c2:27017"}, {_id: 2, host: "c3:27017"}]})

> rs.status()

Create the user:

1. Start the route servers. Run the following command on C1, C2, and C3:

/data01/project name/mongodb/bin/mongos -f /data01/project name/mongodb/config/mongos.conf

2. Execute on C1:

/data01/project name/mongodb/bin/mongo --port 17017

mongos> use admin

mongos> db.createUser({user: "xxx", pwd: "xxx", roles: [{role: "root", db: "admin"}]})

mongos> db.auth("xxx", "xxx")

mongos> exit

3.2.5 Configuring the shard replica sets

Start shard1, shard2, and shard3 on S1, S2, and S3 with the following commands:

Note: first start shard1 on S1, shard2 on S2, and shard3 on S3, then start the remaining members.

/data01/project name/mongodb/bin/mongod -f /data01/project name/mongodb/config/shard1.conf

/data01/project name/mongodb/bin/mongod -f /data01/project name/mongodb/config/shard2.conf

/data01/project name/mongodb/bin/mongod -f /data01/project name/mongodb/config/shard3.conf

To check that everything started normally: netstat -lnpt

After starting, connect to shard1, shard2, and shard3 from any machine and configure each shard as a replica set. The specific procedure is as follows:

shard1:

/data01/project name/mongodb/bin/mongo --port 37017

> use admin

> config = {_id: "shard1", members: [{_id: 0, host: "s1:37017"}, {_id: 1, host: "s2:37017"}, {_id: 2, host: "s3:37017"}]}

> rs.initiate(config)

> exit

shard2:

/data01/project name/mongodb/bin/mongo --port 47017

> use admin

> config = {_id: "shard2", members: [{_id: 0, host: "s1:47017"}, {_id: 1, host: "s2:47017"}, {_id: 2, host: "s3:47017"}]}

> rs.initiate(config)

> exit

shard3:

/data01/project name/mongodb/bin/mongo --port 57017

> use admin

> config = {_id: "shard3", members: [{_id: 0, host: "s1:57017"}, {_id: 1, host: "s2:57017"}, {_id: 2, host: "s3:57017"}]}

> rs.initiate(config)

> exit

3.2.6 Adding the shards to the cluster

Shards are added through a mongos, and this only needs to be done once; here we choose to execute on C1:

/data01/project name/mongodb/bin/mongo --port 17017

mongos> use admin

mongos> db.auth("xxx", "xxx") (the user created in 3.2.4)

mongos> db.runCommand({addShard: "shard1/s1:37017,s2:37017,s3:37017", name: "shard1", maxSize: 20480})

mongos> db.runCommand({addShard: "shard2/s1:47017,s2:47017,s3:47017", name: "shard2", maxSize: 20480})

mongos> db.runCommand({addShard: "shard3/s1:57017,s2:57017,s3:57017", name: "shard3", maxSize: 20480})

Verify the shards. Continue on C1:

mongos> db.runCommand({listShards: 1})

Activate sharding. Enable sharding for a database with:

sh.enableSharding("library name");

Then create the corresponding collection and shard it on a hashed key:

sh.shardCollection("library name.collection name", {"_id": "hashed"});

Switch to the new database with use library name, and create its owner:

db.createUser({user: "xxx", pwd: "xxx", roles: [{role: "dbOwner", db: "library name"}]});
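As a rough illustration of why a hashed _id spreads writes evenly, the following toy script hashes ten keys and buckets them across three shards. This mimics the idea only; MongoDB's real hashed index uses its own md5-based 64-bit hash, not this exact scheme:

```shell
# Toy model of hashed sharding: hash each key with md5 and bucket it mod 3.
# Illustrative only; not MongoDB's actual hash function.
for i in 0 1 2 3 4 5 6 7 8 9; do
  h=$(printf 'jeff%s' "$i" | md5sum | cut -c1-8)   # first 32 bits of the md5 digest
  shard=$(( 0x$h % 3 ))                            # bucket into shard 0..2
  echo "jeff$i -> shard$shard"
done
```

Keys that are sequential as strings land on unrelated shards, which is exactly the balancing behavior a hashed shard key buys at the cost of losing range queries on _id.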

Verifying routing

1. use library name (the database created above);

2. Insert a batch of test data: for (var i = 0; i < 10; i++) { db.collection name.insert({name: "jeff" + i}); }

3. Verify: db.collection name.stats()
