I. Indexes
1. Index operations
1.1 Inserting data
> use testdb
switched to db testdb
> for (i = 1; i <= 10000; i++) db.students.insert({name: "student" + i, age: (i % 120), address: "#85 Wenhua Road, Zhengzhou, China"})
> db.students.find().count()
10000
1.2 Create Index
Build an ascending index on the name field:
> db.students.ensureIndex({name: 1})
{
"createdCollectionAutomatically": false,
"numIndexesBefore": 1,
"numIndexesAfter": 2,
"ok": 1
}
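Note: in MongoDB 3.0 and later, ensureIndex() is an alias for createIndex(), which is the preferred name; the equivalent call would be:
> db.students.createIndex({name: 1})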
View the index:
> db.students.getIndexes()
[
{
"v": 1,
"key": {
"_id": 1
},
"name": "_id_",
"ns": "testdb.students"
},
{
"v": 1,
"key": {
"name": 1
},
"name": "name_1",
"ns": "testdb.students"
}
]
1.3 Delete Index
> db.students.dropIndex("name_1")
{"nIndexesWas": 2, "ok": 1}
> db.students.getIndexes()
[
{
"v": 1,
"key": {
"_id": 1
},
"name": "_id_",
"ns": "testdb.students"
}
]
>
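As a side note (not part of the original steps), dropIndexes() removes every index on the collection except the built-in _id index:
> db.students.dropIndexes()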
1.4 Create a unique index
> db.students.ensureIndex({name: 1}, {unique: true})
> db.students.getIndexes()
[
{
"v": 1,
"key": {
"_id": 1
},
"name": "_id_",
"ns": "testdb.students"
},
{
"v": 1,
"unique": true,
"key": {
"name": 1
},
"name": "name_1",
"ns": "testdb.students"
}
]
Inserting a document with a duplicate name value is now rejected:
> db.students.insert({name: "student20", age: 20})
WriteResult({
"nInserted": 0,
"writeError": {
"code": 11000,
"errmsg": "E11000 duplicate key error index: testdb.students. $ name_1 dup key: {: \" student20 \ "}"
}
})
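As a related aside, if a unique index is needed on a field that not every document contains, a sparse unique index only indexes documents that actually have the field and so avoids duplicate-key errors on missing values (a sketch; the phone field is hypothetical):
> db.students.ensureIndex({phone: 1}, {unique: true, sparse: true})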
1.5 View the detailed execution plan of a find statement
> db.students.find({name: "student5000"}).explain("executionStats")
{
"queryPlanner": {
"plannerVersion": 1,
"namespace": "testdb.students",
"indexFilterSet": false,
"parsedQuery": {
"name": {
"$ eq": "student5000"
}
},
"winningPlan": {
"stage": "FETCH",
"inputStage": {
"stage": "IXSCAN",
"keyPattern": {
"name": 1
},
"indexName": "name_1",
"isMultiKey": false,
"direction": "forward",
"indexBounds": {
"name": [
"[\" student5000 \ ", \" student5000 \ "]"
]
}
}
},
"rejectedPlans": []
},
"serverInfo": {
"host": "master1.com",
"port": 27017,
"version": "3.0.0",
"gitVersion": "a841fd6394365954886924a35076691b4d149168"
},
"ok": 1
}
View the execution plan for a query on names greater than "student5000":
db.students.find({name: {$gt: "student5000"}}).explain("executionStats")
For comparison, insert 150,000 records into an unindexed collection (test) and query for names greater than "student80000":
> for (i = 1; i <= 150000; i++) db.test.insert({name: "student" + i, age: (i % 120), address: "#85 Wenhua Road, Zhengzhou, China"})
Find:
db.test.find({name: {$gt: "student80000"}}).explain("executionStats")
Comparing the two explain() outputs: the unindexed test collection requires a full collection scan (COLLSCAN stage), while the indexed students collection uses an index scan (IXSCAN).
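A minimal follow-up sketch: add the same index to the test collection and re-run explain() to compare fields such as totalDocsExamined and executionTimeMillis in executionStats:
> db.test.ensureIndex({name: 1})
> db.test.find({name: {$gt: "student80000"}}).explain("executionStats")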
II. MongoDB replica sets
1. mongod replica set configuration
1.1 Miscellaneous
The master node records data-modifying operations in its oplog; slave nodes copy the oplog and apply the operations locally. The oplog is stored in the local database:
> show dbs
local 0.078GB
testdb 0.078GB
> use local
switched to db local
> show collections
startup_log
system.indexes
The oplog collections are only created once the replica set has been started.
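Once the replica set is running (initialized below), the oplog itself can be inspected; it is stored in the oplog.rs collection of the local database (a minimal sketch):
> use local
> db.oplog.rs.find().sort({$natural: -1}).limit(1).pretty()   // most recent oplog entry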
1.2 Prepare three nodes
master1 (master node), master2, master3
1.3 Install MongoDB
Install MongoDB on master2 and master3:
[root@master2 mongodb-3.0.0]# ls
mongodb-org-server-3.0.0-1.el7.x86_64.rpm
mongodb-org-shell-3.0.0-1.el7.x86_64.rpm
mongodb-org-tools-3.0.0-1.el7.x86_64.rpm
[root@master2 mongodb-3.0.0]# yum install *.rpm
master2 configuration:
[root@master2 ~]# mkdir -pv /mongodb/data
[root@master2 ~]# chown -R mongod.mongod /mongodb/
Copy the configuration file from master1 to master2 and master3, then modify it:
[root@master1 ~]# scp /etc/mongod.conf root@10.201.106.132:/etc/
[root@master1 ~]# scp /etc/mongod.conf root@10.201.106.133:/etc/
Start the service:
[root@master2 ~]# systemctl start mongod.service
Configure master3 in the same way and start its mongod service.
1.4 Master node configuration
First stop the mongod service on master1:
[root@master1 ~]# systemctl stop mongod.service
Enable the replica set function on the master node:
[root@master1 ~]# vim /etc/mongod.conf
replSet = testSet                    # replica set name
replIndexPrefetch = _id_only
Restart the service:
[root@master1 ~]# systemctl start mongod.service
Connect and verify:
[root@master1 ~]# mongo
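Note that master2 and master3 must carry the same replica set name in their /etc/mongod.conf as well; a minimal sketch of the relevant lines, combining options shown elsewhere in this article:
dbpath = /mongodb/data
# bind_ip = 127.0.0.1        # commented out so mongod listens on all addresses
replSet = testSet
replIndexPrefetch = _id_only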
1.5 Master node (master1): replica set initialization
Get help on the replication commands:
> rs.help()
Initialize the replica set on the master node:
> rs.initiate()
{
"info2": "no configuration explicitly specified-making one",
"me": "master1.com:27017",
"ok": 1
}
testSet: OTHER>
Master node rs status:
testSet: PRIMARY> rs.status()
{
"set": "testSet", #copy set name
"date": ISODate ("2017-01-16T14: 36: 29.948Z"),
"myState": 1,
"members": [
{
"_id": 0, #node ID
"name": "master1.com:27017", #node name
"health": 1, #node health status
"state": 1, #There is no state information
"stateStr": "PRIMARY", #node role
"uptime": 790, #Run time
"optime": Timestamp (1484577363, 1), #last oplog timestamp
"optimeDate": ISODate ("2017-01-16T14: 36: 03Z"), #last oplog time
"electionTime": Timestamp (1484577363, 2), #election time stamp
"electionDate": ISODate ("2017-01-16T14: 36: 03Z"), #election time
"configVersion": 1,
"self": true #is the current node
}
],
"ok": 1
}
Master node rs configuration:
testSet: PRIMARY> rs.conf()
{
"_id": "testSet",
"version": 1,
"members": [
{
"_id": 0,
"host": "master1.com:27017",
"arbiterOnly": false,
"buildIndexes": true,
"hidden": false,
"priority": 1,
"tags": {
},
"slaveDelay": 0,
"votes": 1
}
],
"settings": {
"chainingAllowed": true,
"heartbeatTimeoutSecs": 10,
"getLastErrorModes": {
},
"getLastErrorDefaults": {
"w": 1,
"wtimeout": 0
}
}
}
1.6 Master node adds slave node
testSet: PRIMARY> rs.add("10.201.106.132")
{"ok": 1}
View:
testSet: PRIMARY> rs.status()
{
"_id": 1,
"name": "10.201.106.132:27017",
"health": 1,
"state": 2,
"stateStr": "SECONDARY",
"uptime": 35,
"optime": Timestamp (1524666332, 1),
"optimeDate": ISODate ("2018-04-25T14: 25: 32Z"),
"lastHeartbeat": ISODate ("2018-04-25T14: 26: 07.009Z"),
"lastHeartbeatRecv": ISODate ("2018-04-25T14: 26: 07.051Z"),
"pingMs": 0,
"configVersion": 2
}
],
View from master2 (slave node):
[root@master2 ~]# mongo
An error is encountered when listing databases:
Error: listDatabases failed: {"note": "from execCommand", "ok": 0, "errmsg": "not master"}
Solution:
Execute the rs.slaveOk() method on the slave node.
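That is:
testSet: SECONDARY> rs.slaveOk()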
View:
testSet: SECONDARY> show dbs
local 2.077GB
testdb 0.078GB
testSet: SECONDARY> use testdb
switched to db testdb
testSet: SECONDARY> db.students.findOne()
{
"_id": ObjectId ("587c9032fe3baa930c0f51d9"),
"name": "student1",
"age": 1,
"address": "# 85 Wenhua Road, Zhengzhou, China"
}
Check which node is the master:
testSet: SECONDARY> rs.isMaster()
{
"setName": "testSet",
"setVersion": 2,
"ismaster": false,
"secondary": true,
"hosts": [
"master1.com:27017",
"10.201.106.132:27017"
],
"primary": "master1.com:27017", ###
"me": "10.201.106.132:27017", ###
"maxBsonObjectSize": 16777216,
"maxMessageSizeBytes": 48000000,
"maxWriteBatchSize": 1000,
"localTime": ISODate ("2018-04-25T14: 43: 35.956Z"),
"maxWireVersion": 3,
"minWireVersion": 0,
"ok": 1
}
Add the third node (master3) on the master node:
[root@master1 ~]# mongo
testSet: PRIMARY> rs.add("10.201.106.133")
{"ok": 1}
On the slave node (master3), enable reads:
[root@master3 ~]# mongo
testSet: SECONDARY> rs.slaveOk()
testSet: SECONDARY> use testdb
switched to db testdb
testSet: SECONDARY> db.students.findOne()
{
"_id": ObjectId("587c9032fe3baa930c0f51d9"),
"name": "student1",
"age": 1,
"address": "#85 Wenhua Road, Zhengzhou, China"
}
Once a slave node is added, it automatically clones all databases from the master node, then continuously copies the master's oplog, applies it locally, and builds indexes for the collections.
1.7 View rs configuration
testSet: SECONDARY> rs.conf()
1.8 Write data on the master node and test synchronization
testSet: PRIMARY> db.classes.insert({class: "One", nostu: 40})
View from a slave node:
testSet: SECONDARY> db.classes.findOne()
{
"_id": ObjectId ("5ae09653f7aa5c90df36dc59"),
"class": "One",
"nostu": 40
}
Inserting data on a slave node is rejected:
testSet: SECONDARY> db.classes.insert({class: "Two", nostu: 50})
WriteResult({"writeError": {"code": undefined, "errmsg": "not master"}})
1.9 Step down the master node and test failover
Manually step down the master node:
testSet: PRIMARY> rs.stepDown()
Check the status; master3 has become the master node:
testSet: SECONDARY> rs.status()
Looking at master3, the prompt has changed:
testSet: SECONDARY>
testSet: PRIMARY>
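As a side note (not steps from the original walkthrough), rs.stepDown() also accepts a step-down period in seconds, and rs.freeze() can temporarily keep a node from seeking election:
testSet: PRIMARY> rs.stepDown(60)    // step down and do not seek re-election for 60 seconds
testSet: SECONDARY> rs.freeze(120)   // keep this node from becoming primary for 120 seconds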
2. Other
2.1 View oplog size and synchronization time
testSet: PRIMARY> db.printReplicationInfo()
configured oplog size: 1165.03515625MB
log length start to end: 390secs (0.11hrs)
oplog first event time: Wed Apr 25 2018 22:46:37 GMT+0800 (CST)
oplog last event time: Wed Apr 25 2018 22:53:07 GMT+0800 (CST)
now: Wed Apr 25 2018 23:32:35 GMT+0800 (CST)
2.2 Raise master2's priority so that it is elected master first
rs.conf() corresponds to the local.system.replset collection; the relevant field is members[n].priority.
This must be done on the master node.
First load the configuration into a cfg variable:
testSet: SECONDARY> cfg = rs.conf()
Then modify the value (member IDs start from 0):
testSet: SECONDARY> cfg.members[1].priority = 2
2
Reload the configuration:
testSet: SECONDARY> rs.reconfig(cfg)
{"ok": 1}
After the configuration is reloaded, master2 automatically becomes the master node and master3 becomes a slave node.
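One quick way to confirm which node is currently the master (a small sketch):
testSet: SECONDARY> rs.isMaster().primary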
2.3 Convert master3 into a pure arbiter node
This must be configured on the master node.
First remove master3 from the replica set:
testSet: PRIMARY> rs.remove("10.201.106.133:27017")
{"ok": 1}
Then add master3 back as an arbiter node:
testSet: PRIMARY> rs.addArb("10.201.106.133")
{"ok": 1}
testSet: PRIMARY> rs.status()
{
"_id": 2,
"name": "10.201.106.133:27017",
"health": 1,
"state": 7,
"stateStr": "ARBITER",
"uptime": 21,
"lastHeartbeat": ISODate ("2018-04-25T16: 06: 39.938Z"),
"lastHeartbeatRecv": ISODate ("2018-04-25T16: 06: 39.930Z"),
"pingMs": 0,
"syncingTo": "master1.com:27017",
"configVersion": 6
}
2.4 View slave information
testSet: PRIMARY> rs.printSlaveReplicationInfo()
source: master1.com:27017
syncedTo: Thu Apr 26 2018 00:50:53 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: 10.201.106.133:27017
syncedTo: Wed Apr 25 2018 23:53:51 GMT+0800 (CST)
3422 secs (0.95 hrs) behind the primary
III. MongoDB sharding
For a production sharded cluster it is recommended to run a pair of mongos servers made highly available with keepalived, at least three config servers (to provide arbitration), and multiple shard nodes.
1. Shard test environment (master1: mongos, master2: config server, master3 and master4: shards)
1.1 Environment preparation
Stop the previous services:
[root@master1 ~]# systemctl stop mongod
[root@master2 ~]# systemctl stop mongod
[root@master3 ~]# systemctl stop mongod
Delete the previous data:
[root@master1 ~]# rm -rf /mongodb/data/*
[root@master2 ~]# rm -rf /mongodb/data/*
[root@master3 ~]# rm -rf /mongodb/data/*
Synchronize time on all 4 nodes:
/usr/sbin/ntpdate ntp1.aliyun.com
Install MongoDB on master4:
[root@master4 mongodb-3.0.0]# ls
mongodb-org-server-3.0.0-1.el7.x86_64.rpm mongodb-org-tools-3.0.0-1.el7.x86_64.rpm
mongodb-org-shell-3.0.0-1.el7.x86_64.rpm
[root@master4 mongodb-3.0.0]# yum install -y *.rpm
[root@master4 ~]# mkdir -pv /mongodb/data
[root@master4 ~]# chown -R mongod:mongod /mongodb/
1.2 Configure the config server (master2) first
[root@master2 ~]# vim /etc/mongod.conf
# Comment out the previous replica set configuration
# replSet = testSet
# replIndexPrefetch = _id_only
dbpath = /mongodb/data
# Configure this node as a config server
configsvr = true
Start mongod:
[root@master2 ~]# systemctl start mongod
Check the listening ports:
[root@master2 ~]# netstat -tanp | grep mongod
tcp 0 0 0.0.0.0:27019 0.0.0.0:* LISTEN 24036/mongod
tcp 0 0 0.0.0.0:28019 0.0.0.0:* LISTEN 24036/mongod
1.3 Configure mongos (master1)
Install the mongos package:
[root@master1 mongodb-3.0.0]# yum install mongodb-org-mongos-3.0.0-1.el7.x86_64.rpm
Start mongos from the command line, pointing it at the config server and running it in the background:
[root@master1 ~]# mongos --configdb=10.201.106.132 --fork --logpath=/var/log/mongodb/mongos.log
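For a production deployment, as recommended at the beginning of this section, mongos would point at three config servers instead of one; a sketch with assumed hostnames:
[root@master1 ~]# mongos --configdb=cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019 --fork --logpath=/var/log/mongodb/mongos.log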
View the listening ports:
[root@master1 ~]# netstat -tanp | grep mon
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 27801/mongos
tcp 0 0 10.201.106.131:60956 10.201.106.132:27019 ESTABLISHED 27801/mongos
tcp 0 0 10.201.106.131:60955 10.201.106.132:27019 ESTABLISHED 27801/mongos
tcp 0 0 10.201.106.131:60958 10.201.106.132:27019 ESTABLISHED 27801/mongos
tcp 0 0 10.201.106.131:60957 10.201.106.132:27019 ESTABLISHED 27801/mongos
Connect:
[root@master1 ~]# mongo
View the current shard status information:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id": 1,
"minCompatibleVersion": 5,
"currentVersion": 6,
"clusterId": ObjectId ("5ae16bddf4bf9c27f1816692")
}
shards:
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{"_id": "admin", "partitioned": false, "primary": "config"}
1.4 Configure the shard nodes
master3 configuration:
[root@master3 ~]# vim /etc/mongod.conf
# Comment out the replica set configuration from before
# replSet = testSet
# replIndexPrefetch = _id_only
# Other settings unchanged
dbpath = /mongodb/data
# bind_ip = 127.0.0.1
Start the service:
[root@master3 ~]# systemctl start mongod
master4 configuration:
[root@master4 ~]# vim /etc/mongod.conf
dbpath = /mongodb/data
# With bind_ip commented out, the service automatically listens on 0.0.0.0
# bind_ip = 127.0.0.1
Start the service:
[root@master4 ~]# systemctl start mongod
1.5 Add shard nodes on mongos (master1)
[root@master1 ~]# mongo
Add the first shard node:
mongos> sh.addShard("10.201.106.133")
{"shardAdded": "shard0000", "ok": 1}
View the status:
mongos> sh.status()
Add the second shard node:
mongos> sh.addShard("10.201.106.134")
{"shardAdded": "shard0001", "ok": 1}
1.6 Enable sharding
Sharding is enabled per collection; collections that are not sharded are stored on the primary shard.
Enable sharding for the testdb database:
mongos> sh.enableSharding("testdb")
{"ok": 1}
View the status; the testdb database now has sharding enabled:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id": 1,
"minCompatibleVersion": 5,
"currentVersion": 6,
"clusterId": ObjectId ("5ae16bddf4bf9c27f1816692")
}
shards:
{"_id": "shard0000", "host": "10.201.106.133:27017"}
{"_id": "shard0001", "host": "10.201.106.134:27017"}
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{"_id": "admin", "partitioned": false, "primary": "config"}
{"_id": "test", "partitioned": false, "primary": "shard0000"}
{"_id": "testdb", "partitioned": true, "primary": "shard0000"} # 主 shard
1.7 Enable sharding on a collection
Shard the students collection, using age as the shard key:
mongos> sh.shardCollection("testdb.students", {"age": 1})
{"collectionsharded": "testdb.students", "ok": 1}
View:
mongos> sh.status()
Create data:
mongos> use testdb
switched to db testdb
mongos> for (i = 1; i <= 100000; i++) db.students.insert({name: "students" + i, age: (i % 120), classes: "class" + (i % 10), address: "www.magedu.com, MageEdu, #85 Wenhua Road, Zhengzhou, China"})
Check the status; there are already 5 chunks, split by age range:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id": 1,
"minCompatibleVersion": 5,
"currentVersion": 6,
"clusterId": ObjectId ("5ae16bddf4bf9c27f1816692")
}
shards:
{"_id": "shard0000", "host": "10.201.106.133: 27017 "}
{"_id": "shard0001", "host": "10.201.106.134:27017"}
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
2: Success
databases:
{"_id": "admin", "partitioned": false, "primary": "config"}
{"_id": "test", "partitioned": false, "primary": "shard0000"}
{"_id": "testdb", "partitioned": true, "primary": "shard0000"}
testdb.students
shard key: {"age": 1}
chunks:
shard0000 3 ###
shard0001 2 ###
{"age": {"$ minKey": 1}}->> {"age": 2} on: shard0001 Timestamp (2, 0)
{"age": 2}->> {"age": 6} on: shard0001 Timestamp (3, 0)
{"age": 6}->> {"age": 54} on: shard0000 Timestamp (3, 1)
{"age": 54}->> {"age": 119} on: shard0000 Timestamp (2, 3)
{"age": 119}->> {"age": {"$ maxKey": 1}} on: shard0000 Timestamp (2, 4)
1.8 View shard information
List the shards:
mongos> use admin
switched to db admin
mongos> db.runCommand("listShards")
{
"shards": [
{
"_id": "shard0000",
"host": "10.201.106.133:27017"
},
{
"_id": "shard0001",
"host": "10.201.106.134:27017"
}
],
"ok": 1
}
Display the cluster details:
mongos> db.printShardingStatus()
Sharding help:
mongos> sh.help()
1.9 View the balancer
Check whether the balancer is currently running (it starts automatically whenever rebalancing is needed, so normally you can leave it alone):
mongos> sh.isBalancerRunning()
false
View the current balancer state:
mongos> sh.getBalancerState()
true
Move a chunk manually (this operation makes the config server update its metadata; manually moving chunks is not recommended):
mongos> sh.moveChunk("testdb.students", {age: {$gt: 119}}, "shard0000")
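Related balancer controls, shown here only as a side note (sh.setBalancerState() is a standard helper, not a step from this walkthrough):
mongos> sh.setBalancerState(false)   // disable automatic chunk balancing
mongos> sh.setBalancerState(true)    // enable it again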
MongoDB indexes, replica sets, and sharding: miscellaneous notes.