MongoDB high-availability architecture: Replica Set cluster practice

Replica Set uses n mongod nodes to build a high-availability solution with auto-failover and auto-recovery.

A Replica Set can also be used for read/write splitting: by specifying slaveOk when connecting, the Secondary nodes share the read pressure while the Primary only performs write operations.

Note that Secondary nodes in a Replica Set are not readable by default.
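The exact commands are not shown in this copy; as a rough sketch, reads can be allowed on a secondary from the mongo shell like this (the collection name test is only an illustration):

SECONDARY> rs.slaveOk()        // or: db.getMongo().setSlaveOk()
SECONDARY> db.test.find()      // reads on this secondary now succeed in this shell session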

Architecture diagram:

Run two mongod instances on each server:

shard11 + shard12 + shard13 ----> replica set --+
                                                +--> sharding cluster
shard21 + shard22 + shard23 ----> replica set --+

Shard Server: stores the actual data chunks. In a production environment, each shard-server role should be taken by a replica set of several machines, to avoid a single point of failure on any one host!

Config Server: stores the metadata of the entire cluster, including chunk information!

Route Server (mongos): the front-end router. Clients connect to it and the whole cluster looks like a single database, so front-end applications can use it transparently.

1. Install and configure the MongoDB environment

1. Install
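The install commands are not included in this copy; a minimal sketch, assuming a generic Linux tarball install into /opt/mongodb (the tarball name is a placeholder):

# unpack the mongodb tarball and move it to /opt/mongodb
tar zxvf mongodb-linux-x86_64-<version>.tgz -C /opt/
mv /opt/mongodb-linux-x86_64-<version> /opt/mongodb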

2. Create users and groups
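The original commands are not preserved; a typical sketch (the user and group name mongodb are assumptions):

# create a dedicated group and a non-login user for mongodb
groupadd mongodb
useradd -g mongodb -s /sbin/nologin -M mongodb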

3. Create a data directory

    Create the following directory on each server:
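The directory list itself is missing from this copy; based on the paths used later in the article (/data0/mongodb/...), something like the following, with the per-shard directory names assumed:

# data and log directories on every server
mkdir -p /data0/mongodb/db/shard1 /data0/mongodb/db/shard2 /data0/mongodb/db/config
mkdir -p /data0/mongodb/logs
chown -R mongodb:mongodb /data0/mongodb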

4. Set up hosts resolution on each node server
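The original hosts entries are not shown; assuming the three node IPs used elsewhere in this article and hypothetical hostnames, /etc/hosts on every server would contain something like:

192.168.8.30  node1
192.168.8.31  node2
192.168.8.32  node3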

5. Synchronize the clocks

ntpdate ntp.api.bz

Write this into a crontab entry so that it runs periodically!
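A minimal sketch of the crontab entry (the 5-minute interval is an assumption):

*/5 * * * * /usr/sbin/ntpdate ntp.api.bz > /dev/null 2>&1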

The clocks must be synchronized here, otherwise the shards cannot stay in sync!

All of the steps above must be performed on every node!!

2. Configure the Replica Sets

1. Configure the two shards
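The startup commands themselves are missing from this copy. Based on the port and paths that appear later (27021 for shard1, /data0/mongodb/...), the commands on each server would look roughly like this; the shard2 port (27022) and the data-directory names are assumptions:

# shard1 member on this server
/opt/mongodb/bin/mongod --shardsvr --replSet shard1 --port 27021 \
  --dbpath /data0/mongodb/db/shard1 --logpath /data0/mongodb/logs/shard1.log \
  --logappend --fork --directoryperdb

# shard2 member on this server (port assumed)
/opt/mongodb/bin/mongod --shardsvr --replSet shard2 --port 27022 \
  --dbpath /data0/mongodb/db/shard2 --logpath /data0/mongodb/logs/shard2.log \
  --logappend --fork --directoryperdb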

You can place the preceding commands in a script to make startup easier!

You can also write the options into a configuration file and start mongod with the -f parameter!

Rewritten in configuration-file form:
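The file contents did not survive this copy; an equivalent mongodb.conf for the shard1 member would be roughly (old key = value format, values matching the command line sketched above):

# mongodb.conf for the shard1 member (sketch)
shardsvr = true
replSet = shard1
port = 27021
dbpath = /data0/mongodb/db/shard1
logpath = /data0/mongodb/logs/shard1.log
logappend = true
fork = true
directoryperdb = true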

mongod can then be started with mongod -f mongodb.conf!

Here I put these commands into a script:

Start script (only mongod is started here; the config and mongos scripts are started separately later):
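The script itself is not preserved; a minimal sketch that just launches the two shard members on one node (the configuration-file paths are assumptions):

#!/bin/bash
# start_mongod.sh - start the two shard members on this node (sketch)
/opt/mongodb/bin/mongod -f /opt/mongodb/conf/shard1.conf
/opt/mongodb/bin/mongod -f /opt/mongodb/conf/shard2.conf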


PS: If you want to enable an HTTP port that provides the REST service, add the --rest option to the mongod startup parameters!

That way the replica set status can be viewed at http://<host>:28021/_replSet (the HTTP port is the mongod port plus 1000)!

For production environments, starting via the configuration file and a script is recommended.

3. Initialize the replica sets

    1. Configure the replica sets used by shard1
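The initialization commands are missing from this copy; based on the member list shown in the note further below, shard1 would be initialized from the mongo shell roughly like this:

# connect to one of the shard1 members first
/opt/mongodb/bin/mongo 192.168.8.30:27021
> config = {_id: "shard1", members: [
    {_id: 0, host: "192.168.8.30:27021"},
    {_id: 1, host: "192.168.8.31:27021"},
    {_id: 2, host: "192.168.8.32:27021"}]}
> rs.initiate(config)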

    If the following message is displayed, the operation is successful:

You can see that the prompt on the master node has now changed to PRIMARY!

    Next, let's look at other nodes:

    You can view the configuration of Replica Sets on all nodes:
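The commands are not shown in this copy; typically:

> rs.conf()      // show the replica set configuration
> rs.status()    // show the state of every member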

    2. Configure the replica sets used by shard2
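As with shard1, the commands are missing; a sketch, assuming shard2 runs on port 27022 (the port is not preserved in this copy):

/opt/mongodb/bin/mongo 192.168.8.30:27022
> rs.initiate({_id: "shard2", members: [
    {_id: 0, host: "192.168.8.30:27022"},
    {_id: 1, host: "192.168.8.31:27022"},
    {_id: 2, host: "192.168.8.32:27022"}]})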

    Verification node:

    Now we have configured two replica sets!

PS: no priority was specified during initialization, so by default the member with _id 0 becomes the primary.

Key fields in the status output:

When you view the replica set status with rs.status():

state: 1 means the node is currently readable and writable (the primary); 2 means it is not writable (a secondary).

health: 1 means the node is currently healthy; 0 means it is down.

Note: replica sets can also be initialized with the following method, in which case rs.initiate(config) can be omitted:

db.runCommand({"replSetInitiate": {"_id": "shard1", "members": [{"_id": 0, "host": "192.168.8.30:27021"}, {"_id": 1, "host": "192.168.8.31:27021"}, {"_id": 2, "host": "192.168.8.32:27021", "shardOnly": true}]}})

    4. Configure three config servers

Run on each server (the startup is identical on all of them):

/opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb

    Script:

    Then, check whether the nodes are started:
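A simple way to check is to look for the processes and the listening port, for example:

ps aux | grep mongod | grep -v grep
netstat -lntp | grep 20000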

5. Configure mongos (enable routing)

Run on servers 206 and 207 respectively (mongos can also be started on all nodes):

/opt/mongodb/bin/mongos --configdb 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000 --port 30000 --chunkSize 50 --logpath /data0/mongodb/logs/mongos.log --logappend --fork

    Script:

    Note:

1) The IP addresses and ports passed to mongos are those of the config servers: 192.168.8.30:20000, 192.168.8.31:20000, 192.168.8.32:20000.

2) You must start the config servers first (and confirm that the config process is running normally) before starting mongos.

6. Configure the shard cluster

1. Connect to a route
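The connection command is not preserved; connecting to one of the mongos instances started above would look like this (the mongos host is whichever server runs mongos):

# connect to a mongos (port 30000) and switch to the admin database
/opt/mongodb/bin/mongo <mongos_host>:30000/admin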

2. Add the shards
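The addshard commands did not survive this copy; given the replica sets configured above, they would be roughly as follows (the shard2 port is the same assumption as before):

> db.runCommand({addshard: "shard1/192.168.8.30:27021,192.168.8.31:27021,192.168.8.32:27021", name: "shard1"})
> db.runCommand({addshard: "shard2/192.168.8.30:27022,192.168.8.31:27022,192.168.8.32:27022", name: "shard2"})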

    PS:

    The sharding operation must be performed in the admin database.

Since routes were started only on servers 206 and 207, there is no need to involve the 208 server here!

Optional parameter description:

name: specifies a name for the shard; if it is not given, the system assigns one automatically.

maxSize: specifies the maximum disk space the shard may use, in megabytes (MB).

    3. List the added shards.
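The command and its output are missing from this copy; it is the same listshards command shown again in the management section below:

> use admin
> db.runCommand({listshards: 1})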


PS: the two shards I added (shard1 and shard2) are listed, which indicates that the shards have been configured successfully!!

If 206 goes down, one of the other two nodes in the replica set becomes the master, and mongos automatically connects to the new master!

7. Enable sharding on the data

1. Activate sharding for a database

The command format is: db.runCommand({enablesharding: "<dbname>"});

    Insert test data:
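The test-data script is not preserved; a hypothetical example (the database, collection, and field names are made up for illustration):

> use mydb
> for (var i = 1; i <= 100000; i++) db.users.insert({uid: i, name: "user" + i})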

Activate sharding for the database:
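Continuing the hypothetical example above, the activation command is run against the admin database through mongos:

> use admin
> db.runCommand({enablesharding: "mydb"})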


By executing the command above, the database can span shards. If you skip this step, all of its data stays on a single shard. Once sharding is activated for a database, its different collections are distributed across different shards, but each individual collection is still stored on a single shard; to split a collection across shards, you must perform a separate operation on that collection (see below)!

    2. Add an index

You must add an index; otherwise the collection cannot be sharded!
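Staying with the hypothetical collection above, an index on the intended shard key:

> use mydb
> db.users.ensureIndex({uid: 1})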

3. Collection sharding

To shard a single collection, you must specify a shard key for it and run the following command:

db.runCommand({shardcollection: "<namespace>", key: <shardkeypattern>});

PS:

1) The operation must be performed in the admin database.

2) The collection to be sharded must have an index on the shard key.

3) A sharded collection can have only one unique index, and it must be on the shard key; other unique indexes are not allowed.
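For the hypothetical collection used above, the complete command would be:

> use admin
> db.runCommand({shardcollection: "mydb.users", key: {uid: 1}})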

4. View the shard status
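The command is not shown in this copy; typically:

> use admin
> db.printShardingStatus()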

At this point the chunks have not changed yet!

    Insert more data:

After a large amount of data is inserted, the collection is sharded automatically. OK!!

8. Script to stop all services
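The stop script is likewise missing; a minimal sketch that simply kills the mongos and mongod processes on a node (a more graceful alternative is db.shutdownServer() from the admin database):

#!/bin/bash
# stop_mongodb.sh - stop mongos and mongod on this node (sketch)
pkill mongos
pkill mongod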

    9. Shard Management

1. listshards: list all shards

> use admin
> db.runCommand({listshards: 1})

2. Remove a shard

When you try to add back a shard that was previously removed, the server will not be added again. You can handle this as follows:

All you need to do is delete the removed shard's entry (its key value) from the shards collection in the config database, and then add the shard again.

For example: db.shards.remove({"_id": "shard2"})

    3. View Sharding Information

> printShardingStatus()

PRIMARY> db.system.replset.find()

PRIMARY> rs.isMaster()


High availability for mongos access will be covered in the next article!
