MongoDB High-Availability Deployment

First, prepare the machines. On our company's cloud platform I created three DB servers, with the IPs 10.199.144.84, 10.199.144.89, and 10.199.144.90.

Install the latest stable version of MongoDB on each machine:

wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.4.12.tgz
tar -xzvf mongodb-linux-x86_64-2.4.12.tgz
mv mongodb-linux-x86_64-2.4.12 /usr/lib

Create symbolic links (or follow the official procedure to add the MongoDB binaries to your PATH):

ln -s /usr/lib/mongodb-linux-x86_64-2.4.12/bin/mongo /usr/bin/mongo
ln -s /usr/lib/mongodb-linux-x86_64-2.4.12/bin/mongos /usr/bin/mongos
ln -s /usr/lib/mongodb-linux-x86_64-2.4.12/bin/mongod /usr/bin/mongod

Create the data and log directories on each machine:

mkdir -p /data/mongodb
cd /data/mongodb && mkdir -p conf/data conf/log mongos/log shard{1..3}/data shard{1..3}/log

Start a config server on each machine:

mongod --configsvr --dbpath /data/mongodb/conf/data --port 27100 --logpath /data/mongodb/conf/confdb.log --fork --directoryperdb
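
To confirm each config server came up, one quick check (a minimal sketch; any admin command would do) is to connect to port 27100 and ping it:

mongo --port 27100
db.adminCommand({ping: 1});   // { "ok" : 1 } means the server is reachable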

Once the config servers are up, start the routing server (mongos):

mongos --configdb 10.199.144.84:27100,10.199.144.89:27100,10.199.144.90:27100 --port 27000 --logpath /data/mongodb/mongos/mongos.log --fork
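
You can check the mongos the same way; sh.status() prints the sharding metadata (the shards list is still empty at this point):

mongo --port 27000
sh.status();   // prints sharding version; no shards registered yet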

Start the shard mongod instances on every machine; the three replica sets are named shard1, shard2, and shard3:

mongod --shardsvr --replSet shard1 --port 27001 --dbpath /data/mongodb/shard1/data --logpath /data/mongodb/shard1/log/shard1.log --directoryperdb --fork
mongod --shardsvr --replSet shard2 --port 27002 --dbpath /data/mongodb/shard2/data --logpath /data/mongodb/shard2/log/shard2.log --directoryperdb --fork
mongod --shardsvr --replSet shard3 --port 27003 --dbpath /data/mongodb/shard3/data --logpath /data/mongodb/shard3/log/shard3.log --directoryperdb --fork

Next, configure the replica sets. The layout below gives each physical machine one primary, one secondary, and one arbiter: the arbiters are 90 for shard1, 84 for shard2, and 89 for shard3.

Configure shard1 (log in to 84; no primary is explicitly specified, so the member on the machine you initiate from will typically be elected primary):

mongo --port 27001
use admin
rs.initiate({
    _id: 'shard1',
    members: [
        {_id: 0, host: '10.199.144.84:27001'},
        {_id: 1, host: '10.199.144.89:27001'},
        {_id: 2, host: '10.199.144.90:27001', arbiterOnly: true}
    ]
});

Configure shard2 (log in to 89):

mongo --port 27002
use admin
rs.initiate({
    _id: 'shard2',
    members: [
        {_id: 0, host: '10.199.144.84:27002', arbiterOnly: true},
        {_id: 1, host: '10.199.144.89:27002'},
        {_id: 2, host: '10.199.144.90:27002'}
    ]
});

Configure shard3 (log in to 90):

mongo --port 27003
use admin
rs.initiate({
    _id: 'shard3',
    members: [
        {_id: 0, host: '10.199.144.84:27003'},
        {_id: 1, host: '10.199.144.89:27003', arbiterOnly: true},
        {_id: 2, host: '10.199.144.90:27003'}
    ]
});
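
To verify that a set has finished its election, rs.status() on any member lists every node's role; for example, for shard1 on 84:

mongo --port 27001
rs.status();   // members[] should show one PRIMARY, one SECONDARY and one ARBITER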

Now register the shards with the cluster through mongos. Log in to any machine, say 84:

mongo --port 27000
use admin
db.runCommand({addshard: 'shard1/10.199.144.84:27001,10.199.144.89:27001,10.199.144.90:27001'});
db.runCommand({addshard: 'shard2/10.199.144.84:27002,10.199.144.89:27002,10.199.144.90:27002'});
db.runCommand({addshard: 'shard3/10.199.144.84:27003,10.199.144.89:27003,10.199.144.90:27003'});

To view the configured shards:

mongo --port 27000
use admin
db.runCommand({listshards: 1});

Results:

{"Shards": [        {"_id":"Shard1",The host":"Shard1/10.199.144.84:27001,10.199.144.89:27001" }, {"_id "" Shard2 ", " Host ": < Span class= "Pl-pds" > "Shard2/10.199.144.89:27002,10.199.144.90:27002"}, {  "_id" :  "Host"  shard3/10.199.144.90:27003,10.199.144.84:27003< Span class= "Pl-pds" "}",  "Ok" : 1 } 

Note that the arbiter nodes are not listed.

Now test sharding:

mongo --port 27000
use admin
db.runCommand({enablesharding: 'dbtest'});
db.runCommand({shardcollection: 'dbtest.coll1', key: {id: 1}});
use dbtest
for (var i = 0; i < 10000; i++) {
    db.coll1.insert({id: i, s: 'str_' + i});
}

If dbtest.coll1 already exists and contains data, make sure an index on id has been built before sharding it!
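
For a pre-existing collection, that would look roughly like this (run through mongos, before the shardcollection command above):

mongo --port 27000
use dbtest
db.coll1.ensureIndex({id: 1});   // the shard key must be indexed before sharding a non-empty collection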

After a while, run db.coll1.stats() (through mongos, against the dbtest database) to see the shard state:

{"Sharded":True"NS":"Dbtest.coll1",The Count": 10000, ....."Shards": {"Shard1": {"NS" "Dbtest.coll1"   "Count" : 0,  "Size" : 0, ...},  Shard2 ": {" Ns "",   "Count" : 10000,  "Size" : 559200, ...} ...} 

As you can see, sharding is in effect, but the distribution is uneven: all of the data sits on shard2. Shard key selection strategies are discussed in the official documentation. Version 2.4 added hashed shard keys, which ensure an even distribution of documents:

mongo --port 27000
use admin
sh.shardCollection('dbtest.coll1', {id: 'hashed'});
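
Note that a collection's shard key is immutable: if coll1 still exists with its old range-based key, the command above will fail. Drop the collection first, then re-run sh.shardCollection and the insert loop:

mongo --port 27000
use dbtest
db.coll1.drop();   // remove the range-sharded collection before re-sharding with the hashed key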

After switching to the hashed key, the same test shows the inserted data spread almost evenly:

{"Sharded":True"NS":"Dbtest.coll1",The Count": 10000, ....."Shards": {"Shard1": {"NS":"Dbtest.coll1",The Count": 3285,The size": 183672,...},"Shard2": {"NS" "Dbtest.coll1"   "Count" : 3349,  "Size"  shard3< Span class= "Pl-pds" > ": {" Ns " Span class= "PL-C1" >:  "Dbtest.coll1",  "Count" : 3366,  "Size" : 188168, ...}}    

For more information, refer to the MongoDB sharding documentation.

In the application, use MongoClient to create the DB connection, listing all three mongos instances so the driver can fail over between them:

MongoClient.connect('mongodb://10.199.144.84:27000,10.199.144.89:27000,10.199.144.90:27000/dbtest?w=1', function (err, db) {
    // ...
});
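
A minimal usage sketch (assuming the classic callback-style Node.js driver; the query is illustrative):

var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://10.199.144.84:27000,10.199.144.89:27000,10.199.144.90:27000/dbtest?w=1';
MongoClient.connect(url, function (err, db) {
    if (err) throw err;
    // read back one of the test documents inserted earlier
    db.collection('coll1').findOne({id: 42}, function (err, doc) {
        if (err) throw err;
        console.log(doc);
        db.close();
    });
});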
