MongoDB Sharded Cluster Deployment

Source: Internet
Author: User
Tags: mongodb, server, mongodb version, mongo shell, database, sharding

This document is based on MongoDB version 3.6.2; using the latest version is recommended:

https://www.mongodb.com/download-center#community

Installation files

Cluster IP and port design scheme:

Service   192.168.141.201             192.168.141.202             192.168.141.203

Router    mongos (17017)              mongos (17017)

Config    Config Server1 (27017)      Config Server2 (27017)      Config Server3 (27017)

Shard     shard1 primary (37017)      shard2 primary (47017)      shard3 primary (57017)
          shard2 secondary (47017)    shard1 secondary (37017)    shard1 secondary (37017)
          shard3 secondary (57017)    shard3 secondary (57017)    shard2 secondary (47017)

Create the following directories on each machine where MongoDB is deployed (three service paths under mongo):

mkdir -p /home/mongo/{config,router,shard}
mkdir -p /home/mongo/config/{data,logs}
mkdir -p /home/mongo/router/logs
mkdir -p /home/mongo/shard/{data,logs}
mkdir -p /home/mongo/shard/data/{shard1,shard2,shard3}

Alternatively, generate a script file, mongodirs.sh, and copy it to all MongoDB machines for execution:

#!/usr/bin/bash
mkdir -p mongo
mkdir -p mongo/{config,router,shard}
mkdir -p mongo/config/{data,logs}
mkdir -p mongo/router/logs
mkdir -p mongo/shard/{data,logs}
mkdir -p mongo/shard/data/{shard1,shard2,shard3}

Then execute it in the directory that should contain the mongo tree:

./mongodirs.sh
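A slightly more defensive variant of the script above can help when trying the layout out. This is an illustrative sketch: the MONGO_ROOT variable is an addition, not part of the original scheme, and it defaults to a temporary directory so the script is safe to run anywhere.

```shell
#!/usr/bin/env bash
# Sketch: create the MongoDB directory tree under a configurable root.
# MONGO_ROOT is an illustrative parameter (not from the original scheme);
# it defaults to a temporary directory so the script can be tried safely.
set -eu

MONGO_ROOT="${MONGO_ROOT:-$(mktemp -d)}"

mkdir -p "$MONGO_ROOT/config/data" "$MONGO_ROOT/config/logs"
mkdir -p "$MONGO_ROOT/router/logs"
mkdir -p "$MONGO_ROOT/shard/logs"
mkdir -p "$MONGO_ROOT/shard/data/shard1" \
         "$MONGO_ROOT/shard/data/shard2" \
         "$MONGO_ROOT/shard/data/shard3"

# Show what was created.
find "$MONGO_ROOT" -type d | sort
```

For the layout used in this document, run it as `MONGO_ROOT=/home/mongo ./mongodirs.sh`.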

Config service:

Note: The config service needs at least three nodes.

vi /home/mongo/config/config.config

dbpath=/home/mongo/config/data
logpath=/home/mongo/config/logs/config.log
bind_ip=0.0.0.0
port=27017
logappend=true
fork=true
quiet=true
journal=true
configsvr=true
replSet=configrs/192.168.141.201:27017,192.168.141.202:27017,192.168.141.203:27017


Start the config service:

mongod --config /home/mongo/config/config.config

The possible errors are as follows:

about to fork child process, waiting until server is ready for connections.
forked process: 10632
ERROR: child process failed, exited with error number 14
To see additional information in this output, start without the "--fork" option.

Error number 14: insufficient permissions; starting with sudo fixes it.

Error number 18:

Error number 100: the port is occupied.

exited with error number 48

The reason is again that the port is occupied.
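The port-occupied failures (error numbers 48 and 100) can be diagnosed before starting mongod. A sketch that probes the port using bash's built-in /dev/tcp redirection (an assumption: bash compiled with network support; port_in_use is a hypothetical helper, not a MongoDB tool):

```shell
#!/usr/bin/env bash
# Sketch: check whether a TCP port is already occupied before starting mongod.
# port_in_use is a hypothetical helper built on bash's /dev/tcp redirection;
# it returns 0 if something is listening on the port, non-zero otherwise.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 27017; then
  echo "port 27017 is occupied - mongod would exit with error number 48/100"
else
  echo "port 27017 is free"
fi
```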

The correct startup looks like this:

[*****@centosvm config]$ sudo mongod --config config.config
[sudo] password for *****:
about to fork child process, waiting until server is ready for connections.
forked process: 10667
child process started successfully, parent exiting

Use the mongo shell to connect to the already-started config service and initialize it:

mongo --port 27017

[*****@centosvm config]$ mongo --port 27017
MongoDB shell version v3.6.2
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.6.2
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
	http://docs.mongodb.org/
Questions? Try the support group
	http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: This server is bound to localhost.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          Remote systems will be unable to connect to this server.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          Start the server with --bind_ip <address> to specify which IP
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          addresses it should serve responses from, or with --bind_ip_all to
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          bind to all interfaces. If this behavior is desired, start the
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **          server with --bind_ip 127.0.0.1 to disable this warning.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-01-26T14:38:04.377+0800 I CONTROL  [initandlisten]
2018-01-26T14:55:45.854+0800 E -        [main] Error loading history file: FileOpenFailed: Unable to fopen() file /home/****/.dbshell: no such file or directory
MongoDB Enterprise >

According to the cluster design, configure the replica set's IP and port list and specify which node is primary (via a higher priority) and which are secondaries. Execute the following command in the mongo shell:

rs.initiate({_id:"configrs",configsvr:true,members:[{_id:1,host:"192.168.141.201:27017",priority:2},{_id:2,host:"192.168.141.202:27017"},{_id:3,host:"192.168.141.203:27017"}]})
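Since the same rs.initiate() shape recurs for the shard replica sets later in this document, it can be generated from a host list instead of typed by hand (avoiding, among other things, the full-width-quote mistake shown below). A sketch: make_initiate is a hypothetical helper, this variant emits the configsvr form used here, and the first host becomes the priority-2 primary.

```shell
#!/usr/bin/env bash
# Sketch: build the rs.initiate(...) command used in this document from a
# replica-set name and a host list. make_initiate is a hypothetical helper,
# not part of MongoDB; the first host is given priority 2 (the primary).
set -eu

make_initiate() {
  local rsname=$1; shift
  local members="" id=1 host
  for host in "$@"; do
    if [ "$id" -eq 1 ]; then
      members="{_id:1,host:\"$host\",priority:2}"
    else
      members="$members,{_id:$id,host:\"$host\"}"
    fi
    id=$((id + 1))
  done
  echo "rs.initiate({_id:\"$rsname\",configsvr:true,members:[$members]})"
}

make_initiate configrs 192.168.141.201:27017 192.168.141.202:27017 192.168.141.203:27017
```

The printed command can then be pasted into the mongo shell.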

Possible error results:

MongoDB Enterprise > rs.initiate({_id:"configrs",configsvr:true,members:[{_id:1,host:"192.168.126.132:27017",priority:2},{_id:2,host:"192.168.126.131:27017"},{_id:3,host:"192.168.126.130:27017"}]})
2018-01-26T15:01:17.200+0800 E QUERY    [thread1] SyntaxError: illegal character @(shell):1:17

The likely cause is that the double quotes were typed as full-width characters.

MongoDB Enterprise > rs.initiate({_id:"configrs",configsvr:true,members:[{_id:1,host:"192.168.126.132:27017",priority:2},{_id:2,host:"192.168.126.131:27017"},{_id:3,host:"192.168.126.130:27017"}]})
{
	"ok" : 0,
	"errmsg" : "replSetInitiate quorum check failed because not all proposed set members responded affirmatively: 192.168.126.131:27017 failed with No route to host, 192.168.126.130:27017 failed with No route to host",
	"code" : 74,
	"codeName" : "NodeNotFound",
	"$gleStats" : {
		"lastOpTime" : Timestamp(0, 0),
		"electionId" : ObjectId("000000000000000000000000")
	}
}

The likely cause is a connectivity failure between the nodes; check whether bind_ip is configured in the config.config file:

dbpath=/home/****/mongo/config/data
logpath=/home/****/mongo/config/logs/config.log
bind_ip=0.0.0.0
port=27017
logappend=true
fork=true
quiet=true
journal=true
configsvr=true
replSet=configrs/192.168.126.130:27017,192.168.126.131:27017,192.168.126.132:27017

Example of the correct return result:

MongoDB Enterprise > rs.initiate({_id:"configrs",configsvr:true,members:[{_id:1,host:"192.168.126.132:27017",priority:2},{_id:2,host:"192.168.126.131:27017"},{_id:3,host:"192.168.126.130:27017"}]})
{
	"ok" : 1,
	"operationTime" : Timestamp(1516954993, 1),
	"$gleStats" : {
		"lastOpTime" : Timestamp(1516954993, 1),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"$clusterTime" : {
		"clusterTime" : Timestamp(1516954993, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

Then connect on the other two machines and check with rs.status() whether the config node group is connected:

MongoDB Enterprise > rs.status()
{
	"set" : "configrs",
	"date" : ISODate("2018-01-26T08:30:51.551Z"),
	"myState" : 2,
	"term" : NumberLong(1),
	"syncingTo" : "192.168.126.130:27017",
	"configsvr" : true,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1516955444, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1516955444, 1),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1516955444, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1516955444, 1),
			"t" : NumberLong(1)
		}
	},
	"members" : [
		{
			"_id" : 1,
			"name" : "192.168.126.132:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 36,
			"optime" : {
				"ts" : Timestamp(1516955444, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1516955444, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2018-01-26T08:30:44Z"),
			"optimeDurableDate" : ISODate("2018-01-26T08:30:44Z"),
			"lastHeartbeat" : ISODate("2018-01-26T08:30:49.985Z"),
			"lastHeartbeatRecv" : ISODate("2018-01-26T08:30:50.967Z"),
			"pingMs" : NumberLong(0),
			"electionTime" : Timestamp(1516955424, 1),
			"electionDate" : ISODate("2018-01-26T08:30:24Z"),
			"configVersion" : 1
		},
		{
			"_id" : 2,
			"name" : "192.168.126.131:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 545,
			"optime" : {
				"ts" : Timestamp(1516955444, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2018-01-26T08:30:44Z"),
			"syncingTo" : "192.168.126.130:27017",
			"configVersion" : 1,
			"self" : true
		},
		{
			"_id" : 3,
			"name" : "192.168.126.130:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 36,
			"optime" : {
				"ts" : Timestamp(1516955444, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1516955444, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2018-01-26T08:30:44Z"),
			"optimeDurableDate" : ISODate("2018-01-26T08:30:44Z"),
			"lastHeartbeat" : ISODate("2018-01-26T08:30:49.991Z"),
			"lastHeartbeatRecv" : ISODate("2018-01-26T08:30:50.986Z"),
			"pingMs" : NumberLong(1),
			"syncingTo" : "192.168.126.132:27017",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1516955444, 1),
	"$gleStats" : {
		"lastOpTime" : Timestamp(0, 0),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"$clusterTime" : {
		"clusterTime" : Timestamp(1516955444, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
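A healthy three-member set shows exactly one PRIMARY and two SECONDARY members. If rs.status() output is captured to a file (for example via a scripted mongo --quiet --eval call), a quick grep can sanity-check the member states. A sketch, where the file name and the fallback sample dump are fabricated for illustration:

```shell
#!/usr/bin/env bash
# Sketch: count PRIMARY/SECONDARY members in a saved rs.status() dump.
# STATUS_FILE is a hypothetical file name; when it does not exist, a tiny
# fabricated sample is used so the snippet can be tried standalone.
set -eu

STATUS_FILE="${STATUS_FILE:-status.json}"

if [ ! -f "$STATUS_FILE" ]; then
  STATUS_FILE=$(mktemp)
  printf '%s\n' \
    '"stateStr" : "PRIMARY"' \
    '"stateStr" : "SECONDARY"' \
    '"stateStr" : "SECONDARY"' > "$STATUS_FILE"
fi

primaries=$(grep -c '"stateStr" : "PRIMARY"' "$STATUS_FILE")
secondaries=$(grep -c '"stateStr" : "SECONDARY"' "$STATUS_FILE")
echo "primaries=$primaries secondaries=$secondaries"
```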

mongos service:

Edit the mongos configuration file mongo/router/router.config:

configdb=configrs/192.168.126.132:27017,192.168.126.131:27017,192.168.126.130:27017
bind_ip=0.0.0.0
port=17017
fork=true
logpath=/home/****/mongo/router/logs/mongos.log

Start mongos; you may have to wait 5-10 seconds:

[*****@centosvm router]$ sudo mongos --config router.config
about to fork child process, waiting until server is ready for connections.
forked process: 14553
child process started successfully, parent exiting

Deploy at least two mongos instances; a mongos IP + port is the MongoDB service address that clients are configured with.

Copy the router.config file to the same directory on another MongoDB machine and start it with the command above.
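Because clients talk only to the mongos routers, never to the shards or config servers directly, a typical client connection string lists both mongos instances. A sketch (the database name hdctest is the example database used later in this document):

```shell
# Sketch: a client-side MongoDB connection URI listing both mongos routers
# from the design table. "hdctest" is this document's example database.
MONGO_URI="mongodb://192.168.141.201:17017,192.168.141.202:17017/hdctest"
echo "$MONGO_URI"
```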

If a client connects before any shards have been added, errors like the following can appear:

Caused by :: ShardNotFound: database cloudconf not found due to No shards found

Shard service:

On each server we have already created the paths mongo/shard/data/{shard1,shard2,shard3}.

Write 3 shard configuration files in the shard directory.

vi /home/mongo/shard/shard1.config

dbpath=/home/mongo/shard/data/shard1
logpath=/home/mongo/shard/logs/shard1.log
port=37017
bind_ip=0.0.0.0
logappend=true
fork=true
quiet=true
journal=true
shardsvr=true
replSet=shard1rs/192.168.141.201:37017,192.168.141.202:37017,192.168.141.203:37017

vi /home/mongo/shard/shard2.config

dbpath=/home/mongo/shard/data/shard2
logpath=/home/mongo/shard/logs/shard2.log
port=47017
bind_ip=0.0.0.0
logappend=true
fork=true
quiet=true
journal=true
shardsvr=true
replSet=shard2rs/192.168.141.201:47017,192.168.141.202:47017,192.168.141.203:47017

Configure shard3.config the same way, with port 57017.

Note: change the corresponding IP, port, and replica set name:

replSet=shard3rs/192.168.141.201:57017,192.168.141.202:57017,192.168.141.203:57017
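The three shard configuration files differ only in the shard number, port, and replica set name, so they can be generated in a loop. A sketch (SHARD_DIR is an illustrative parameter defaulting to a temporary directory; the real files belong in /home/mongo/shard):

```shell
#!/usr/bin/env bash
# Sketch: generate shard1-3 config files from one template, following this
# document's scheme (ports 37017/47017/57017, replica sets shard{1,2,3}rs).
# SHARD_DIR is an illustrative parameter, not part of the original scheme.
set -eu

SHARD_DIR="${SHARD_DIR:-$(mktemp -d)}"

for i in 1 2 3; do
  port=$(( i * 10000 + 27017 ))   # 37017, 47017, 57017
  cat > "$SHARD_DIR/shard$i.config" <<EOF
dbpath=/home/mongo/shard/data/shard$i
logpath=/home/mongo/shard/logs/shard$i.log
port=$port
bind_ip=0.0.0.0
logappend=true
fork=true
quiet=true
journal=true
shardsvr=true
replSet=shard${i}rs/192.168.141.201:$port,192.168.141.202:$port,192.168.141.203:$port
EOF
done

ls "$SHARD_DIR"
```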

Start the 3 shard services on each machine, starting the designated primary members first and then the remaining members:

mongod -f /home/mongo/shard/shard1.config

mongod -f /home/mongo/shard/shard2.config

mongod -f /home/mongo/shard/shard3.config
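The three start commands can be wrapped in one loop. A sketch, where DRY_RUN is an illustrative flag (not a mongod option) that defaults to printing the commands, so the sequence can be inspected without a mongod binary present:

```shell
#!/usr/bin/env bash
# Sketch: start the three shard services from their config files.
# DRY_RUN is an illustrative flag; with DRY_RUN=1 (the default here) the
# function only prints the commands instead of launching mongod.
set -eu

start_shards() {
  local conf
  for conf in /home/mongo/shard/shard1.config \
              /home/mongo/shard/shard2.config \
              /home/mongo/shard/shard3.config; do
    if [ "${DRY_RUN:-1}" = 1 ]; then
      echo "mongod -f $conf"
    else
      mongod -f "$conf"
    fi
  done
}

start_shards
```

Run with DRY_RUN=0 on a machine where the config files exist to actually start the services.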


Initialize the shard service: connect to any MongoDB server and configure each shard's replica set.

mongo --port 37017

rs.initiate({_id:"shard1rs",members:[{_id:1,host:"192.168.141.201:37017",priority:2},{_id:2,host:"192.168.141.202:37017"},{_id:3,host:"192.168.141.203:37017"}]})

mongo --port 47017

rs.initiate({_id:"shard2rs",members:[{_id:1,host:"192.168.141.202:47017",priority:2},{_id:2,host:"192.168.141.201:47017"},{_id:3,host:"192.168.141.203:47017"}]})

Complete the port 57017 (shard3rs) configuration in the same way.

Configure the shards: add each shard, via its primary, to the cluster.

mongo --port 17017

>use admin

>db.runCommand({"addShard":"shard1rs/192.168.141.201:37017","maxSize":1024})

>db.runCommand({"addShard":"shard2rs/192.168.141.202:47017","maxSize":1024})

>db.runCommand({"addShard":"shard3rs/192.168.141.203:57017","maxSize":1024})
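The addShard commands above can also be collected into a script and piped through the mongo shell connected to the mongos. A sketch: add_shards_js is a hypothetical helper, and the actual run is left commented out because it requires the running cluster.

```shell
#!/usr/bin/env bash
# Sketch: print the sequence of addShard commands from this section so they
# can be piped into the mongos. add_shards_js is a hypothetical helper.
set -eu

add_shards_js() {
  echo 'use admin'
  echo 'db.runCommand({"addShard":"shard1rs/192.168.141.201:37017","maxSize":1024})'
  echo 'db.runCommand({"addShard":"shard2rs/192.168.141.202:47017","maxSize":1024})'
  echo 'db.runCommand({"addShard":"shard3rs/192.168.141.203:57017","maxSize":1024})'
}

add_shards_js
# To actually run them against the cluster (assumes mongos on port 17017):
#   add_shards_js | mongo --port 17017
```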

Usage: sharding must be enabled on the database, and a shard key/algorithm must be specified for each collection in it:

>use admin

-- enable sharding on the hdctest database

>db.runCommand({"enablesharding":"hdctest"})

-- configure a hashed shard key on _id for the person collection in hdctest

>db.runCommand({"shardcollection":"hdctest.person","key":{_id:"hashed"}})

Note: when you log in to a secondary and read data, you may get:

not master and slaveOk=false, code=13435

Execute:

db.getMongo().setSlaveOk()

