MongoDB Sharding

Source: Internet
Author: User
Tags: mongodb, sharding


Introduction: a sharded cluster in MongoDB is built from three components:

Shards (each ideally a replica set), query routers, and config servers.

Shards store the data. To provide high availability and data consistency, each shard in a production sharded cluster is a replica set [1]. For more information, see the replica sets documentation.

Query routers, or mongos instances, interface with client applications and direct operations to the appropriate shard or shards. The query router processes and targets operations to shards and then returns results to the clients. A sharded cluster can contain more than one query router to divide the client request load; each client sends requests to one query router. Most sharded clusters have many query routers.

Config servers store the cluster's metadata. This data contains a mapping of the cluster's data set to the shards. The query router uses this metadata to target operations to specific shards. Production sharded clusters have exactly 3 config servers.

[1]

For development and testing purposes only, each shard can be a single mongod instead of a replica set. Do not deploy production clusters without 3 config servers.

Shard key types: range-based and hashed

Comparison summary: range sharding distributes documents in shard-key order, while hashed sharding distributes them randomly but evenly.

The details are not repeated here; see the official documentation.
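For a concrete feel of the difference, here is a minimal sketch of declaring each strategy with the sh.shardCollection() helper (ding.c1 is a hypothetical collection; hashed shard keys require MongoDB 2.4 or later):

-- range sharding: chunks cover ordered ranges of the key
mongos> sh.shardCollection("ding.c1", { _id: 1 })

-- hashed sharding: chunks cover ranges of the key's hash, spreading inserts evenly
mongos> sh.shardCollection("ding.c1", { _id: "hashed" })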

Environment preparation: the cluster is laid out as follows:

Shard server 1: 27020

Shard server 2: 27021

Shard server 3: 27022

Shard server 4: 27023

Config server: 27100

Route process: 40000

Shard server 5: 27024 (simulates a newly added service node)

Step One: Start the shard servers

mongod --port 27020 --dbpath=f:\dingsai\mongodb\shard\rs1\data --logpath=f:\dingsai\mongodb\shard\rs1\logs\mongodb.log --logappend

mongod --port 27021 --dbpath=f:\dingsai\mongodb\shard\rs2\data --logpath=f:\dingsai\mongodb\shard\rs2\logs\mongodb.log --logappend

mongod --port 27022 --dbpath=f:\dingsai\mongodb\shard\rs3\data --logpath=f:\dingsai\mongodb\shard\rs3\logs\mongodb.log --logappend

mongod --port 27023 --dbpath=f:\dingsai\mongodb\shard\rs4\data --logpath=f:\dingsai\mongodb\shard\rs4\logs\mongodb.log --logappend

Step Two: Start the config server

mongod --port 27100 --dbpath=f:\dingsai\mongodb\shard\config\data --logpath=f:\dingsai\mongodb\shard\config\logs\mongodb.log --logappend

Note: here we can start it like a normal mongodb service, without adding the --shardsvr and --configsvr parameters, because these two parameters mainly change the default port, and we are specifying the ports ourselves.
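For reference, a config server started with the dedicated flag would look like the sketch below; --configsvr defaults the port to 27019 (and --shardsvr defaults a shard's port to 27018), which is why we skip them and set ports explicitly:

mongod --configsvr --port 27100 --dbpath=f:\dingsai\mongodb\shard\config\data --logpath=f:\dingsai\mongodb\shard\config\logs\mongodb.log --logappend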

Step Three: Start the route process (mongos)

mongos --port 40000 --configdb localhost:27100 --logpath=f:\dingsai\mongodb\shard\routeprocess\logs\route.log --chunkSize 500

Among the mongos startup parameters, --chunkSize specifies the chunk size in megabytes; the default is 64 MB.
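The chunk size can also be changed after startup by writing to the settings collection of the config database; a minimal sketch (the value is in megabytes):

mongos> use config
mongos> db.settings.save({ _id: "chunksize", value: 64 })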

Step Four: Configure Sharding

Next, we use the mongo shell to connect to mongos and add the shard nodes:


bin/mongo admin --port 40000
MongoDB shell version: 2.0.7
connecting to: 127.0.0.1:40000/admin


use admin

mongos> db.runCommand({addshard: "localhost:27020"})

mongos> db.runCommand({addshard: "localhost:27021"})

mongos> db.runCommand({addshard: "localhost:27022"})

mongos> db.runCommand({addshard: "localhost:27023"})

-- a shard can also be added with a name: db.runCommand({addshard: "192.168.253.212:27017", "name": "XXX Server"});
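To confirm the shards were registered, you can list them; a quick check (run against the admin database):

mongos> db.runCommand({ listShards: 1 })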

-- enable sharding on database ding

mongos> db.runCommand({enablesharding: "ding"})

-- enable sharding on the c2 collection in database ding, sharded by the _id column

mongos> db.runCommand({shardcollection: "ding.c2", key: {_id: 1}})
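The same two commands also have sh.* helper equivalents in the mongo shell; an equivalent sketch:

mongos> sh.enableSharding("ding")
mongos> sh.shardCollection("ding.c2", { _id: 1 })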

Step Five: Test by inserting data

mongo admin --port 40000

-- switch database

use ding

## insert 50,000 rows

mongos> for (var i = 0; i < 50000; i++) { db.c2.insert({name: 'dingsai' + i, seq: i}) }

## check the record count

mongos> db.c2.find().count()
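For a per-shard breakdown of document and data counts, the shell offers a collection-level summary; a sketch (available in newer 2.x shells; it may be missing from the 2.0.7 shell shown above):

mongos> db.c2.getShardDistribution()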

-- view the shard status

db.printShardingStatus()

or sh.status();

mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
    "_id": 1,
    "version": 4,
    "minCompatibleVersion": 4,
    "currentVersion": 5,
    "clusterId": ObjectId("54dffdc6d33e0feb326a8f90")
}
shards:
    { "_id": "shard0000", "host": "localhost:27020" }
    { "_id": "shard0001", "host": "localhost:27021" }
    { "_id": "shard0002", "host": "localhost:27022" }
    { "_id": "shard0003", "host": "localhost:27023" }
databases:
    { "_id": "ding", "partitioned": true, "primary": "shard0001" }
        ding.c2
            shard key: { "_id": 1 }
            chunks:
                shard0000 1
                shard0001 1
                shard0002 1
            { "_id": { "$minKey": 1 } } -->> { "_id": ObjectId("54e009d6752aea9c8fc3d25a") } on: shard0000 Timestamp(2, 0)
            { "_id": ObjectId("54e009d6752aea9c8fc3d25a") } -->> { "_id": ObjectId("54e00a76752aea9c8fc3e684") } on: shard0001 Timestamp(3, 1)
            { "_id": ObjectId("54e00a76752aea9c8fc3e684") } -->> { "_id": { "$maxKey": 1 } } on: shard0002 Timestamp(3, 0)

The data is distributed across three shard nodes.
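The chunk layout can also be inspected directly in the config database; a sketch:

mongos> use config
mongos> db.chunks.find({ ns: "ding.c2" })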

Start the service on the new shard node:

bin/mongod --port 27024 --dbpath=f:\dingsai\mongodb\shard\rs5\data --logpath=f:\dingsai\mongodb\shard\rs5\logs\mongodb.log --logappend


Connect to the route process:

mongo admin --port 40000

use admin

Add the node on port 27024:

mongos> db.runCommand({addshard: "localhost:27024"});

Delete a shard node

Removing a shard node takes some time; run the removeshard command repeatedly to check the current status of the draining process.

Remove shard0001 (port 27021); start the removal:

mongos> db.runCommand({removeshard: "shard0001"});
{
    "msg": "draining started successfully",
    "state": "started",
    "shard": "shard0001",
    "note": "you need to drop or movePrimary these databases",
    "dbsToMove": [
        "ding"
    ],
    "ok": 1
}

Check the current status; draining is still in progress:

mongos> db.runCommand({removeshard: "shard0001"});
{
    "msg": "draining ongoing",
    "state": "ongoing",
    "remaining": {
        "chunks": NumberLong(0),
        "dbs": NumberLong(1)
    },
    "note": "you need to drop or movePrimary these databases",
    "dbsToMove": [
        "ding"
    ],
    "ok": 1
}

Check the status again; this attempt failed because removeshard may only be run against the admin database:

mongos> db.runCommand({removeshard: "shard0001"});
{
    "ok": 0,
    "errmsg": "removeshard may only be run against the admin database.",
    "code": 13
}

Check the sharding status again:

mongos> db.printShardingStatus()

--- Sharding Status ---
sharding version: {
    "_id": 1,
    "version": 4,
    "minCompatibleVersion": 4,
    "currentVersion": 5,
    "clusterId": ObjectId("54dffdc6d33e0feb326a8f90")
}
shards:
    { "_id": "shard0000", "host": "localhost:27020" }
    { "_id": "shard0001", "host": "localhost:27021", "draining": true }
    { "_id": "shard0002", "host": "localhost:27022" }
    { "_id": "shard0003", "host": "localhost:27023" }
    { "_id": "shard0004", "host": "localhost:27024" }
databases:
    { "_id": "ding", "partitioned": true, "primary": "shard0001" }
        ding.c2
            shard key: { "_id": 1 }
            chunks:
                shard0000 1
                shard0003 1
                shard0002 1
            { "_id": { "$minKey": 1 } } -->> { "_id": ObjectId("54e009d6752aea9c8fc3d25a") } on: shard0000 Timestamp(2, 0)
            { "_id": ObjectId("54e009d6752aea9c8fc3d25a") } -->> { "_id": ObjectId("54e00a76752aea9c8fc3e684") } on: shard0003 Timestamp(4, 0)
            { "_id": ObjectId("54e00a76752aea9c8fc3e684") } -->> { "_id": { "$maxKey": 1 } } on: shard0002 Timestamp(3, 0)
mongos>

We find that shard0001's status is now "draining": true, and its data has been moved from shard0001 to the other nodes.
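Note the "dbsToMove" field in the removeshard output above: because ding's primary shard is the draining shard0001, the removal cannot fully finish until the database's primary is moved to another shard. A sketch (shard0000 is an arbitrary target; run against admin):

mongos> use admin
mongos> db.runCommand({ movePrimary: "ding", to: "shard0000" })
-- then run removeshard once more so the removal can complete
mongos> db.runCommand({ removeshard: "shard0001" })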

Querying for failed nodes:

1. Shut down the node on port 27002.

2. View the failed node information:

mongos> use config
switched to db config
mongos> db.mongos.find()
{ "_id": "hpo-pc:40000", "ping": ISODate("2015-02-15T03:36:40.670Z"), "up": 5762, "waiting": false, "mongoVersion": "2.6.6" }
mongos>

3. After restarting the node on port 27002, it works normally again.
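Beyond querying config.mongos, an individual node can be probed for liveness directly from the shell; a minimal sketch (new Mongo() throws if the node is unreachable):

mongos> var conn = new Mongo("localhost:27020")
mongos> printjson(conn.getDB("admin").runCommand({ ping: 1 }))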

Step Six: Back up the data

Export the ding database to the ding folder:

mongodump --db ding --out F:\DingSai\Mongodb\bin\ding\ --host localhost:40000
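The matching restore also goes through mongos; a sketch (the trailing path assumes the directory layout that mongodump created above):

mongorestore --host localhost:40000 --db ding F:\DingSai\Mongodb\bin\ding\ding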

Summary:

When to use sharding:

1. All write operations are routed to the primary node, so replication alone cannot scale writes.

2. Latency-sensitive data must be queried at the primary node.

3. A single replica set is limited to 12 nodes.

4. There is not enough memory for the working set when request volume is large.

5. Local disk space is insufficient.

6. Vertical scaling is prohibitively expensive.

The above is only a simple demonstration; the official documentation recommends using a replica set as each shard node to avoid a single point of failure in the system.

We will try that when time allows.

Caveat:

32-bit Windows builds of MongoDB have a 2 GB data limit; keep this in mind when using them.

IMPORTANT

It takes time and resources to deploy sharding. If your system has already reached or exceeded its capacity, it will be difficult to deploy sharding without impacting your application.

As a result, if you think you will need to partition your database in the future, do not wait until your system is over capacity to enable sharding.
