Replication Cluster Configuration
1. Install MongoDB
Install MongoDB on both the primary node and the secondary node:
# rpm -ivh mongo-10gen-2.4.6-mongodb_1.x86_64.rpm mongo-10gen-server-2.4.6-mongodb_1.x86_64.rpm
2. Configure the database
# mkdir -pv /mongodb/data
# chown -R mongod.mongod /mongodb/data
Modify the configuration file:
# vim /etc/mongod.conf
# data directory
dbpath = /mongodb/data
# replica set name
replSet = testrs0
3. Start the service
# service mongod start
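Steps 1–3 can be sketched as a short provisioning script. A minimal sketch, assuming the article's paths and replica-set name; the config is written to a scratch file here rather than the live /etc/mongod.conf:

```shell
# create the data directory and hand it to the mongod user
mkdir -pv /tmp/mongodb/data

# write the replication-related settings to a scratch copy of mongod.conf
cat > /tmp/mongod_repl.conf <<'EOF'
# data directory
dbpath = /mongodb/data
# replica set name (must be identical on every member)
replSet = testrs0
EOF

# sanity-check that both settings are present before restarting mongod
grep -Ec '^(dbpath|replSet)' /tmp/mongod_repl.conf    # → 2
```

The same fragment goes on every member; only dbpath may differ per host.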
4. Configure the replica set
Initialize the replica set on the primary node:
> rs.initiate()
{
	"info2" : "no configuration explicitly specified -- making one",
	"me" : "node2.chinasoft.com:27017",
	"info" : "Config now saved locally.  Should come online in about a minute.",
	"ok" : 1
}
> rs.status()
{
	"set" : "testrs0",
	"date" : ISODate("2016-06-21T08:05:29Z"),
	"myState" : 1,
	"members" : [
		{
			"_id" : 0,
			"name" : "node2.chinasoft.com:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 925,
			"optime" : Timestamp(1466496325, 1),
			"optimeDate" : ISODate("2016-06-21T08:05:25Z"),
			"self" : true
		}
	],
	"ok" : 1
}
testrs0:PRIMARY> db.isMaster()
{
	"setName" : "testrs0",
	"ismaster" : true,
	"secondary" : false,
	"hosts" : [
		"node2.chinasoft.com:27017"
	],
	"primary" : "node2.chinasoft.com:27017",
	"me" : "node2.chinasoft.com:27017",
	"maxBsonObjectSize" : 16777216,
	"maxMessageSizeBytes" : 48000000,
	"localTime" : ISODate("2016-06-21T08:07:27.528Z"),
	"ok" : 1
}
Add the secondary node:
testrs0:PRIMARY> rs.add("192.168.8.41:27017")
You can see that 8.41 has become a secondary node:
testrs0:PRIMARY> rs.status()
{
	"set" : "testrs0",
	"date" : ISODate("2016-06-21T08:11:11Z"),
	"myState" : 1,
	"members" : [
		{
			"_id" : 0,
			"name" : "node2.chinasoft.com:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 1267,
			"optime" : Timestamp(1466496539, 1),
			"optimeDate" : ISODate("2016-06-21T08:08:59Z"),
			"self" : true
		},
		{
			"_id" : 1,
			"name" : "192.168.8.41:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 132,
			"optime" : Timestamp(1466496539, 1),
			"optimeDate" : ISODate("2016-06-21T08:08:59Z"),
			"lastHeartbeat" : ISODate("2016-06-21T08:11:10Z"),
			"lastHeartbeatRecv" : ISODate("2016-06-21T08:11:10Z"),
			"pingMs" : 1,
			"syncingTo" : "node2.chinasoft.com:27017"
		}
	],
	"ok" : 1
}
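When scripting health checks against this output, the fields worth watching are health, state, and stateStr (state 1 = PRIMARY, state 2 = SECONDARY). A rough sketch that pulls stateStr out of saved rs.status() output; the heredoc and the /tmp path are stand-ins for illustration:

```shell
# saved rs.status() member lines (abridged stand-in for the output above)
cat > /tmp/rs_status.txt <<'EOF'
"name" : "node2.chinasoft.com:27017", "state" : 1, "stateStr" : "PRIMARY"
"name" : "192.168.8.41:27017", "state" : 2, "stateStr" : "SECONDARY"
EOF

# one stateStr per member; anything other than PRIMARY/SECONDARY is worth an alert
grep -o '"stateStr" : "[A-Z]*"' /tmp/rs_status.txt
```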
Insert data on the primary node:
testrs0:PRIMARY> use testdb
switched to db testdb
testrs0:PRIMARY> show collections
system.indexes
testrs0:PRIMARY> db.testcoll.insert({name: 'jack', age: 22, gender: 'male'})
testrs0:PRIMARY> db.testcoll.find()
{ "_id" : ObjectId("5768f78d898085b3a157eae4"), "name" : "jack", "age" : 22, "gender" : "male" }
Verify on the secondary node (rs.slaveOk() permits reads on a secondary):
testrs0:SECONDARY> rs.slaveOk()
testrs0:SECONDARY> use testdb
switched to db testdb
testrs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("5768f78d898085b3a157eae4"), "name" : "jack", "age" : 22, "gender" : "male" }
Sharding Configuration:
MongoDB sharding exists to solve the problem that a single MongoDB instance cannot keep up with growing data volume and growing read/write load. With sharding, MongoDB splits the data into pieces and distributes them across multiple shards. Sharding reduces the number of requests and the amount of data each individual shard has to handle, and as the cluster grows, its overall throughput and capacity grow with it.
A sharded cluster has three kinds of components: shards, query routers, and config servers.
Shards: store the data. In a production environment, each shard is a replica set, which gives the cluster high availability and data consistency.
Query routers: mongos instances. They are what applications talk to; they forward requests to the back-end shards and return the results to the client. A sharded cluster can run multiple query routers (mongos instances) to spread the client load. If you run multiple mongos instances behind a proxy such as HAProxy or LVS, the proxy must be configured in client-affinity mode so that requests from the same client always reach the same mongos. Typically, mongos instances are deployed on the application servers.
Config servers: store the cluster's metadata, which maps the cluster's data sets to the back-end shards. The query routers use this metadata to route client requests to the right shards. A production sharded cluster has exactly 3 config servers. Their data is critical: if all config servers go down, the whole sharded cluster becomes unusable. In production, each config server must run on a separate machine, and config servers cannot be shared between sharded clusters; each cluster needs its own.
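Each of the three roles needs only a small mongod.conf difference. A sketch of the per-role fragments used in the experiment below; the /tmp paths are scratch files for illustration, and the IPs/ports are the ones from the article:

```shell
# shard node (192.168.8.42 / .43): a plain mongod with a data directory
cat > /tmp/shard.conf <<'EOF'
dbpath = /mongodb/data
EOF

# config server (192.168.8.41): configsvr = true switches mongod into
# config-server mode and changes its default port to 27019
cat > /tmp/configsvr.conf <<'EOF'
configsvr = true
EOF

# query router (192.168.8.39): mongos takes no dbpath, only the config server list
cat > /tmp/mongos.conf <<'EOF'
configdb = 192.168.8.41:27019
EOF

# quick sanity check: the router fragment must point at the config server port
grep ':27019' /tmp/mongos.conf
```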
Experiment topology:
config server: 192.168.8.41
# vim /etc/mongod.conf
configsvr = true
app server (mongos): 192.168.8.39
Delete the old database files:
# rm -rf /mongodb/data
# vim /etc/mongod.conf
configdb = 192.168.8.41:27019
# comment out the database path
#dbpath = /mongodb/data
# mongos -f /etc/mongod.conf
Tue Jun 21 17:02:35.708 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
about to fork child process, waiting until server is ready for connections.
forked process: 36341
all output going to: /var/log/mongo/mongod.log
child process started successfully, parent exiting
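The warning above is mongos pointing out that a single config server is acceptable only for testing. In production, configdb lists exactly three config servers, comma-separated with no spaces. A sketch with hypothetical host names:

```shell
# production-style mongos config: three config servers (hosts are placeholders)
cat > /tmp/mongos_prod.conf <<'EOF'
configdb = cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019
EOF

# every mongos in the cluster must list the same three servers in the same order
grep -o ':27019' /tmp/mongos_prod.conf | wc -l    # → 3
```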
shard nodes: 192.168.8.42, 192.168.8.43
# mkdir -pv /mongodb/data
# chown -R mongod.mongod /mongodb/data
Modify the configuration file:
# vim /etc/mongod.conf
# data directory
dbpath = /mongodb/data
Start the service:
# service mongod start
Connect to the mongos on 8.39 and configure sharding:
# mongo --host 192.168.8.39
# add the shard nodes
mongos> sh.addShard("192.168.8.42:27017")
{ "shardAdded" : "shard0000", "ok" : 1 }
mongos> sh.addShard("192.168.8.43:27017")
{ "shardAdded" : "shard0001", "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
	"_id" : 1,
	"version" : 3,
	"minCompatibleVersion" : 3,
	"currentVersion" : 4,
	"clusterId" : ObjectId("576902ab487f8f95f4d66e55")
}
  shards:
	{  "_id" : "shard0000",  "host" : "192.168.8.42:27017" }
	{  "_id" : "shard0001",  "host" : "192.168.8.43:27017" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
mongos> sh.enableSharding("testdb")
{ "ok" : 1 }
mongos> sh.shardCollection("testdb.testroll", {age: 1, name: 1})
{ "collectionsharded" : "testdb.testroll", "ok" : 1 }
mongos> use testdb
switched to db testdb
mongos> show collections
system.indexes
testroll
# insert test data
mongos> for (i=1; i<=1000000; i++) db.testcoll.insert({name: 'user'+i, age: (i%150), address: '#'+i+', Xuegang North Road, Shenzhen', preferbooks: ['book'+i, 'hello world']})
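A side note on the insert loop: age is computed as i % 150, so across one million documents it takes only 150 distinct values. That is too low-cardinality to be a useful shard key on its own, which is why it is paired with name in the compound key {age: 1, name: 1}. A quick emulation of the generated fields in pure shell, no server needed; the sample i is one of the values that shows up as a chunk boundary in sh.status():

```shell
# emulate the fields the mongo shell loop generates for a single document
i=288741
name="user$i"        # the name values seen as chunk boundaries in sh.status()
age=$((i % 150))     # only 150 possible values across the whole collection
echo "$name $age"    # → user288741 141
```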
Shard the testcoll collection on 8.39.
Add an index and shard on it:
mongos> db.testcoll.ensureIndex({name: 1})
mongos> sh.shardCollection("testdb.testcoll", {name: 1})
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
	"_id" : 1,
	"version" : 3,
	"minCompatibleVersion" : 3,
	"currentVersion" : 4,
	"clusterId" : ObjectId("576902ab487f8f95f4d66e55")
}
  shards:
	{  "_id" : "shard0000",  "host" : "192.168.8.42:27017" }
	{  "_id" : "shard0001",  "host" : "192.168.8.43:27017" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
	{  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
	{  "_id" : "testdb",  "partitioned" : true,  "primary" : "shard0000" }
		testdb.testcoll
			shard key: { "name" : 1 }
			chunks:
				shard0001	1
				shard0000	4
			{ "name" : { "$minKey" : 1 } } -->> { "name" : "user288741" } on : shard0001 Timestamp(2, 0)
			{ "name" : "user288741" } -->> { "name" : "user477486" } on : shard0000 Timestamp(2, 1)
			{ "name" : "user477486" } -->> { "name" : "user66623" } on : shard0000 Timestamp(1, 2)
			{ "name" : "user66623" } -->> { "name" : "user854975" } on : shard0000 Timestamp(1, 3)
			{ "name" : "user854975" } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 4)
		testdb.testroll
			shard key: { "age" : 1, "name" : 1 }
			chunks:
				shard0000	1
			{ "age" : { "$minKey" : 1 }, "name" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 }, "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
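The chunk boundaries above can look out of order ("user477486" before "user66623"), but a shard key on a string field is compared lexicographically, not numerically: character by character, '4' < '6', so "user477486" really does sort first. The boundary values confirm this:

```shell
# the chunk boundaries from sh.status(), sorted as strings (byte order);
# the output order matches the chunk order shown above
printf '%s\n' user288741 user477486 user66623 user854975 | LC_ALL=C sort
```

This is worth keeping in mind when choosing a shard key: string ranges split on string order, so numeric-looking suffixes do not distribute the way the numbers suggest.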