MongoDB distributed sharding Cluster [4]

Sharding cluster configuration of MongoDB
Sharding cluster Introduction
This is a horizontally scalable architecture, which is especially valuable when the data volume is large. In practice, large-scale applications generally use it to build a MongoDB system.
To build a MongoDB sharding cluster, three roles are required:
Shard Server: a mongod instance that stores the actual data chunks. In a production environment, each shard-server role is assumed by a replica set spanning several machines, to prevent a single point of failure (SPOF) on any one host.
Config Server: a mongod instance that stores the cluster's metadata, including chunk information.
Route Server: a mongos instance that handles front-end routing and client access. It makes the entire cluster look like a single database, so front-end applications can use it transparently.
Lab environment:
192.168.3.206:
mongod shard11: port 27017  # shard server 11
mongod shard21: port 27018  # shard server 21
mongod config01: port 20000 # config server 01
mongos01: port 30000        # route server 01
192.168.3.210:
mongod shard12: port 27017  # shard server 12
mongod shard22: port 27018  # shard server 22
mongod config02: port 20000 # config server 02
mongos02: port 30000        # route server 02
192.168.3.201:
mongod shard13: port 27017  # shard server 13
mongod shard23: port 27018  # shard server 23
mongod config03: port 20000 # config server 03
mongos03: port 30000        # route server 03
Note:
Each machine runs one mongod instance (mongod shard11, mongod shard12, mongod shard13); together they form replica set shard_1905_01, the first shard of the cluster.
Each machine runs a second mongod instance (mongod shard21, mongod shard22, mongod shard23); together they form replica set shard_1905_02, the second shard of the cluster.
Each machine runs a mongod instance as one of the three config servers.
Each machine runs a mongos process for client connections.
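The topology in this note can be written down as plain data for a quick sanity check. The sketch below (hosts and ports are taken from the lab environment above; nothing else is assumed) just confirms that every replica set has one member per machine:

```python
# Sketch of the planned cluster layout: 3 hosts, each running two shard
# mongods, one config server, and one mongos router.
HOSTS = ["192.168.3.206", "192.168.3.210", "192.168.3.201"]

SHARD_PORT_1 = 27017  # members of replica set shard_1905_01
SHARD_PORT_2 = 27018  # members of replica set shard_1905_02
CONFIG_PORT = 20000   # config servers
MONGOS_PORT = 30000   # mongos routers

replica_sets = {
    "shard_1905_01": [f"{h}:{SHARD_PORT_1}" for h in HOSTS],
    "shard_1905_02": [f"{h}:{SHARD_PORT_2}" for h in HOSTS],
}
config_servers = [f"{h}:{CONFIG_PORT}" for h in HOSTS]

# Each replica set has three members, one per host, so a single host
# failure still leaves a majority in every set.
for members in replica_sets.values():
    assert len(members) == 3
print(replica_sets["shard_1905_01"][0])  # 192.168.3.206:27017
```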
1. Installation Method
For installation, see http://blog.csdn.net/liu334265659/article/details/41070323. After the installation succeeds, do not use the startup commands from that post; use the commands given below instead.
2. Create a data directory
On the 192.168.3.206 server:
mkdir -p /data/mongodb/shard11
mkdir -p /data/mongodb/shard21
mkdir -p /data/mongodb/config
mkdir -p /data/logs/mongodb    # log directory used by the startup commands below
On the 192.168.3.210 server:
mkdir -p /data/mongodb/shard12
mkdir -p /data/mongodb/shard22
mkdir -p /data/mongodb/config
mkdir -p /data/logs/mongodb
On the 192.168.3.201 server:
mkdir -p /data/mongodb/shard13
mkdir -p /data/mongodb/shard23
mkdir -p /data/mongodb/config
mkdir -p /data/logs/mongodb
3. Configure the replica sets:
3.1 Configure the replica set used by shard_1905_01:
On the 192.168.3.206 server:
/usr/local/mongodb/bin/mongod --dbpath=/data/mongodb/shard11 --logpath=/data/logs/mongodb/shard11.log --logappend --port=27017 --replSet=shard_1905_01 --maxConns=2000 --oplogSize=100 --fork
On the 192.168.3.210 server:
/usr/local/mongodb/bin/mongod --dbpath=/data/mongodb/shard12 --logpath=/data/logs/mongodb/shard12.log --logappend --port=27017 --replSet=shard_1905_01 --maxConns=2000 --oplogSize=100 --fork
On the 192.168.3.201 server:
/usr/local/mongodb/bin/mongod --dbpath=/data/mongodb/shard13 --logpath=/data/logs/mongodb/shard13.log --logappend --port=27017 --replSet=shard_1905_01 --maxConns=2000 --oplogSize=100 --fork
3.2 Initialize the replica set
Connect to one of the mongods with the mongo shell and run the following commands:
/usr/local/mongodb/bin/mongo localhost:27017
config = {_id: "shard_1905_01", members: [{_id: 0, host: '192.168.3.206:27017'}, {_id: 1, host: '192.168.3.210:27017'}, {_id: 2, host: '192.168.3.201:27017'}]}
rs.initiate(config);
3.3 Configure the replica set used by shard_1905_02:
On the 192.168.3.206 server:
/usr/local/mongodb/bin/mongod --dbpath=/data/mongodb/shard21 --logpath=/data/logs/mongodb/shard21.log --logappend --port=27018 --replSet=shard_1905_02 --maxConns=2000 --oplogSize=100 --fork
On the 192.168.3.210 server:
/usr/local/mongodb/bin/mongod --dbpath=/data/mongodb/shard22 --logpath=/data/logs/mongodb/shard22.log --logappend --port=27018 --replSet=shard_1905_02 --maxConns=2000 --oplogSize=100 --fork
On the 192.168.3.201 server:
/usr/local/mongodb/bin/mongod --dbpath=/data/mongodb/shard23 --logpath=/data/logs/mongodb/shard23.log --logappend --port=27018 --replSet=shard_1905_02 --maxConns=2000 --oplogSize=100 --fork
3.4 Initialize the replica set
Connect to one of the mongods with the mongo shell and run the following commands:
/usr/local/mongodb/bin/mongo localhost:27018
config = {_id: "shard_1905_02", members: [{_id: 0, host: '192.168.3.206:27018'}, {_id: 1, host: '192.168.3.210:27018'}, {_id: 2, host: '192.168.3.201:27018'}]}
rs.initiate(config);
# Now we have configured two replica sets, that is, we have prepared two shards.
4. Configure three config servers
On the 192.168.3.206 server:
/usr/local/mongodb/bin/mongod --configsvr --dbpath=/data/mongodb/config/ --port=20000 --logpath=/data/logs/mongodb/config.log --logappend --fork
On the 192.168.3.210 server:
/usr/local/mongodb/bin/mongod --configsvr --dbpath=/data/mongodb/config/ --port=20000 --logpath=/data/logs/mongodb/config.log --logappend --fork
On the 192.168.3.201 server:
/usr/local/mongodb/bin/mongod --configsvr --dbpath=/data/mongodb/config/ --port=20000 --logpath=/data/logs/mongodb/config.log --logappend --fork
5. Configure mongos
On the 192.168.3.206 server:
/usr/local/mongodb/bin/mongos --configdb 192.168.3.206:20000,192.168.3.210:20000,192.168.3.201:20000 --port=30000 --chunkSize=5 --logpath=/data/logs/mongodb/mongos.log --logappend --fork
On the 192.168.3.210 server:
/usr/local/mongodb/bin/mongos --configdb 192.168.3.206:20000,192.168.3.210:20000,192.168.3.201:20000 --port=30000 --chunkSize=5 --logpath=/data/logs/mongodb/mongos.log --logappend --fork
On the 192.168.3.201 server:
/usr/local/mongodb/bin/mongos --configdb 192.168.3.206:20000,192.168.3.210:20000,192.168.3.201:20000 --port=30000 --chunkSize=5 --logpath=/data/logs/mongodb/mongos.log --logappend --fork
6. Connect to one of the mongos processes and switch to the admin database for the following configuration:
6.1 Connect to mongos and switch to admin
/usr/local/mongodb/bin/mongo 192.168.3.206:30000/admin
mongos> db # mongodb command; should print "admin"
6.2 Add shards
# If a shard is a single server, add it with a command such as > db.runCommand({addshard: "<serverhostname>[:<port>]"}).
# If a shard is a replica set, use the format replicaSetName/<serverhostname>[:port][,serverhostname2[:port],...], as follows:
mongos> db.runCommand({addshard: "shard_1905_01/192.168.3.206:27017,192.168.3.210:27017,192.168.3.201:27017", name: "s1", maxsize: 20480});
mongos> db.runCommand({addshard: "shard_1905_02/192.168.3.206:27018,192.168.3.210:27018,192.168.3.201:27018", name: "s2", maxsize: 20480});
Note: when the second shard was added, an error occurred because of an existing test database. Connect to the second replica set with the mongo shell, delete the test database with db.dropDatabase(), and then add the shard again.
name: specifies a name for each shard. If omitted, the system assigns one automatically.
maxsize: specifies the maximum disk space available to each shard, in megabytes.
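To make the maxsize limit concrete: once a shard's data reaches its configured maximum, the balancer stops treating it as a target for new chunks. The following is only a toy Python illustration of that rule (the shard names and sizes are invented), not MongoDB's actual balancer logic:

```python
def eligible_targets(shards):
    """Return the shards that may still receive chunks.

    Each shard is a dict with its current data size and its maxsize
    limit, both in MB as in the addshard command; a limit of 0 means
    unlimited, mirroring MongoDB's convention.
    """
    return [
        name for name, s in sorted(shards.items())
        if s["maxSizeMB"] == 0 or s["sizeMB"] < s["maxSizeMB"]
    ]

shards = {
    "s1": {"sizeMB": 20480, "maxSizeMB": 20480},  # at its 20 GB cap
    "s2": {"sizeMB": 512,   "maxSizeMB": 20480},  # still has room
}
print(eligible_targets(shards))  # ['s2']
```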
6.3 Listing shards
mongos> db.runCommand({listshards: 1}) # mongodb command
{
    "shards": [
        {
            "_id": "s1",
            "host": "shard_1905_01/192.168.3.201:27017,192.168.3.206:27017,192.168.3.210:27017"
        },
        {
            "_id": "s2",
            "host": "shard_1905_02/192.168.3.201:27018,192.168.3.206:27018,192.168.3.210:27018"
        }
    ],
    "ok": 1
}
# If both shards you added are listed, the shards have been configured successfully.
6.4 Activate sharding for a database
mongos> db.runCommand({enablesharding: "<dbname>"})
# Executing the command above allows the database to span shards. Without this step, the database is stored entirely on one shard. Once sharding is activated for the database, its different collections can be stored on different shards,
# but each individual collection is still stored on a single shard; to split a collection, you must run a separate operation on that collection.
For example:
mongos> db.runCommand({enablesharding: "test"}) # mongodb command
{"ok": 1} # returned result
Check whether it has taken effect:
mongos> db.printShardingStatus() # mongodb command
--- Sharding Status ---
sharding version: {
    "_id": 1,
    "version": 4,
    "minCompatibleVersion": 4,
    "currentVersion": 5,
    "clusterId": ObjectId("53c76f7a9adee90a8b860eea")
}
shards:
    {"_id": "s1", "host": "shard_1905_01/192.168.3.201:27017,192.168.3.206:27017,192.168.3.210:27017"}
    {"_id": "s2", "host": "shard_1905_02/192.168.3.201:27018,192.168.3.206:27018,192.168.3.210:27018"}
databases:
    {"_id": "admin", "partitioned": false, "primary": "config"}
    {"_id": "test", "partitioned": true, "primary": "s2"}


Note:
Once sharding is enabled for a database, mongos can place its different collections on different shards. Unless a collection is itself split (configured below), all of its data stays on one shard.
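The distinction above can be pictured with a toy chunk table: a sharded collection is cut into shard-key ranges (chunks), each owned by one shard, and mongos routes every document by its shard-key value. The split points and shard assignments below are invented purely for illustration:

```python
import bisect

# Toy chunk table for a collection sharded on an integer key "id".
# Each chunk covers a half-open range [lower, upper) and lives on one shard.
SPLIT_POINTS = [100, 200]          # chunk boundaries on the shard key
CHUNK_OWNERS = ["s1", "s2", "s1"]  # owner of (-inf,100), [100,200), [200,+inf)

def owning_shard(shard_key_value):
    """Route a document to the shard that owns its chunk range."""
    return CHUNK_OWNERS[bisect.bisect_right(SPLIT_POINTS, shard_key_value)]

assert owning_shard(5) == "s1"     # falls in (-inf, 100)
assert owning_shard(150) == "s2"   # falls in [100, 200)
assert owning_shard(999) == "s1"   # falls in [200, +inf)
```

An unsharded collection is the degenerate case of this table: a single chunk covering the whole key range, owned by one shard.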
6.5 Shard a collection:
Note:
# To enable sharded storage for a single collection, you must specify a shard key for it, as follows:
#> db.runCommand({shardcollection: "<namespace>", key: <shardkeypatternobject>})
# a. Sharding a collection automatically creates an index on the shard key (you can also create the index in advance).
# b. A sharded collection can have only one unique index, which must be on the shard key. Other unique indexes are not allowed.
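Note b follows from the fact that each shard can enforce a unique index only over its own documents. A minimal sketch of the failure mode that the restriction prevents (the shards, the routing, and the email field are all invented for the illustration):

```python
# Two shards, each enforcing "uniqueness" only over its own documents.
shards = {"s1": set(), "s2": set()}

def insert(shard, email):
    """Insert a document, enforcing a unique email per shard only."""
    if email in shards[shard]:
        raise ValueError("duplicate key")
    shards[shard].add(email)

# If email is NOT the shard key, the shard key may route two documents
# with the same email to different shards, so the per-shard unique
# check never sees the conflict:
insert("s1", "a@example.com")
insert("s2", "a@example.com")  # accepted: s2 has no copy to compare against

duplicates = shards["s1"] & shards["s2"]
print(sorted(duplicates))  # ['a@example.com'] -- global uniqueness violated
```

When the unique index is on the shard key itself, all candidates for a given key value land on the same shard, so the local check is sufficient.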
Example of sharding a collection:
mongos> db.runCommand({shardcollection: "test.liu_user", key: {id: 1}}) # mongodb command
{"collectionsharded": "test.liu_user", "ok": 1} # returned result
mongos> use test # mongodb command
# Simulate inserting 200003 test documents into the liu_user collection:
mongos> for (var i = 0; i < 200003; i++) db.liu_user.save({id: i, value: "111111"}) # mongodb command
mongos> db.liu_user.find().count() # mongodb command
225366
mongos> db.liu_user.stats() # mongodb command
{
    "sharded": true,
    "systemFlags": 1,
    "userFlags": 1,
    "ns": "test.liu_user",
    "count": 225366,
    "numExtents": 14,
    "size": 25240992,
    "storageSize": 48979968,
    "totalIndexSize": 13662096,
    "indexSizes": {
        "_id_": 7342048,
        "id_1": 6320048
    },
    "avgObjSize": 112,
    "nindexes": 2,
    "nchunks": 6,
    "shards": {
        "s1": {
            "ns": "test.liu_user",
            "count": 25363,
            "size": 2840656,
            "avgObjSize": 112,
            "storageSize": 11182080,
            "numExtents": 6,
            "nindexes": 2,
            "lastExtentSize": 8388608,
            "paddingFactor": 1,
            "systemFlags": 1,
            "userFlags": 1,
            "totalIndexSize": 1553440,
            "indexSizes": {
                "_id_": 833952,
                "id_1": 719488
            },
            "ok": 1
        },
        "s2": {
            "ns": "test.liu_user",
            "count": 200003,
            "size": 22400336,
            "avgObjSize": 112,
            "storageSize": 37797888,
            "numExtents": 8,
            "nindexes": 2,
            "lastExtentSize": 15290368,
            "paddingFactor": 1,
            "systemFlags": 1,
            "userFlags": 1,
            "totalIndexSize": 12108656,
            "indexSizes": {
                "_id_": 6508096,
                "id_1": 5600560
            },
            "ok": 1
        }
    },
    "ok": 1
}
# If you see output like the above, the collection has been sharded successfully. As for the puzzle of why 200003 inserted documents show up as a count of 225366: note that 200003 + 25363 = 225366. On a sharded collection, count() simply sums the per-shard counts, and after a chunk migration the source shard may still hold copies of migrated documents (orphaned documents) that have not yet been cleaned up, so the total is temporarily inflated.
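A sketch of the likely explanation, consistent with the stats() output above (s1 reports 25363 documents, s2 reports all 200003, and 200003 + 25363 = 225366): a chunk migration first copies documents to the recipient shard, and until the donor removes its copies, summing per-shard counts over-reports. This is only a toy model of that sequence, not MongoDB's migration code:

```python
# Toy model of a chunk migration that explains the inflated count:
# documents are copied to the recipient shard before the donor deletes
# its copies, and a naive count() just sums the per-shard totals.
donor = {i: "doc" for i in range(200003)}        # all documents start on s2
migrated_ids = range(25363)                      # one chunk's worth moves to s1

recipient = {i: donor[i] for i in migrated_ids}  # step 1: copy to s1
naive_count = len(donor) + len(recipient)        # donor cleanup still pending
print(naive_count)                               # 225366, as in find().count()

for i in migrated_ids:                           # step 2: donor deletes orphans
    del donor[i]
print(len(donor) + len(recipient))               # 200003, the true total
```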
