Rebuilding a MongoDB replica set to reclaim disk space
When MongoDB deletes part of its data, it does not return the freed disk space to the operating system. We can reclaim that space by rebuilding each node's data directory.
1. Experimental environment
A replica set is configured with the following three nodes:
10.192.203.201:27017  PRIMARY
10.192.203.202:27017  SECONDARY
10.192.203.202:10001  ARBITER
2. Experimental steps
2.1 Simulating the environment
use dba;
for (var i = 0; i < 1000000; i++) db.c.insert({uid: i, uname: 'osqlfan' + i});
db.c.find().count();    // 1000000
db.stats();
{
    "db" : "dba",
    "collections" : 5,
    "objects" : 1000111,
    "avgObjSize" : 111.9994880568257,
    "dataSize" : 112011920,
    "storageSize" : 174796800,
    "numExtents" : ...,
    "indexes" : 3,
    "indexSize" : 32475072,
    "fileSize" : 469762048,
    "nsSizeMB" : ...,
    "extentFreeList" : {
        "num" : 0,
        "totalSize" : 0
    },
    "dataFileVersion" : {
        "major" : 4,
        "minor" : ...
    },
    "ok" : 1
}
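A few of the numbers in this output can be sanity-checked against one another. A minimal Python sketch, using only the values reported above (the 64/128/256 MB preallocation sizes are the standard MMAPv1 data-file sequence, consistent with the dba.1 and dba.2 file sizes shown below):

```python
# Sanity-check the db.stats() numbers reported above (all sizes in bytes).
objects      = 1000111      # documents across the dba database
data_size    = 112011920    # dataSize: actual BSON data
storage_size = 174796800    # storageSize: bytes allocated to extents
file_size    = 469762048    # fileSize: total size of the dba.* data files

# avgObjSize is simply dataSize / objects.
avg_obj_size = data_size / objects
print(round(avg_obj_size, 4))              # 111.9995, matching avgObjSize

# storageSize exceeds dataSize because extents include padding, and
# fileSize exceeds storageSize because MMAPv1 preallocates whole data
# files: here 64 MB + 128 MB + 256 MB = 448 MB exactly.
print(storage_size >= data_size)           # True
print(file_size == (64 + 128 + 256) * 2**20)  # True
```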
The inserts have grown the data files on disk by roughly 400 MB:
-rw-------. 1 root root 134217728 Nov  7 13:38 dba.1
-rw-------. 1 root root 268435456 Nov  7 13:38 dba.2
[root@slave2 ~]# du -sh /data/mongo/data
4.7G    /data/mongo/data
# Drop the data in the dba.c collection:
myreplset:PRIMARY> db.c.drop();
true
myreplset:PRIMARY> db.c.find().count();
0
myreplset:PRIMARY> db.stats();
{
    "db" : "dba",
    "collections" : 4,
    "objects" : 108,
    "avgObjSize" : 108.44444444444444,
    "dataSize" : 11712,
    "storageSize" : 61440,
    "numExtents" : 5,
    "indexes" : 2,
    "indexSize" : 16352,
    "fileSize" : 469762048,
    "nsSizeMB" : ...,
    "extentFreeList" : {
        "num" : ...,
        "totalSize" : 212492288
    },
    "dataFileVersion" : {
        "major" : 4,
        "minor" : ...
    },
    "ok" : 1
}
Note that dataSize, indexSize, and storageSize have all shrunk, but fileSize is unchanged and the MongoDB data directory still occupies 4.7 GB.
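The before/after numbers make this concrete. A small Python sketch using the two db.stats() outputs above:

```python
# Compare db.stats() before and after db.c.drop()
# (numbers taken from the two outputs above, in bytes).
before = {"dataSize": 112011920, "indexSize": 32475072,
          "storageSize": 174796800, "fileSize": 469762048}
after  = {"dataSize": 11712, "indexSize": 16352,
          "storageSize": 61440, "fileSize": 469762048}

# dataSize, indexSize and storageSize all shrink once the collection
# is dropped...
for key in ("dataSize", "indexSize", "storageSize"):
    print(key, "released:", before[key] - after[key])

# ...but fileSize does not: MMAPv1 keeps the preallocated data files and
# only moves the released extents onto the database's free list (note the
# large extentFreeList.totalSize in the second output), so the OS-level
# footprint seen by du stays at 4.7G until the node is rebuilt.
print(after["fileSize"] == before["fileSize"])  # True
```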
2.2 Rebuild the secondary 10.192.203.202:27017 first
# Check the replica set member roles
myreplset:PRIMARY> rs.status();
{
    "set" : "myreplset",
    "date" : ISODate("2016-11-07T07:10:50.717Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "10.192.203.201:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 964,
            "optime" : Timestamp(1478239977, 594),
            "optimeDate" : ISODate("2016-11-04T06:12:57Z"),
            "electionTime" : Timestamp(1478502021, 1),
            "electionDate" : ISODate("2016-11-07T07:00:21Z"),
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "10.192.203.202:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 628,
            "optime" : Timestamp(1478239977, 594),
            "optimeDate" : ISODate("2016-11-04T06:12:57Z"),
            "lastHeartbeat" : ISODate("2016-11-07T07:10:49.257Z"),
            "lastHeartbeatRecv" : ISODate("2016-11-07T07:10:50.143Z"),
            "pingMs" : 2,
            "configVersion" : 2
        },
        {
            "_id" : 2,
            "name" : "10.192.203.202:10001",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 618,
            "lastHeartbeat" : ISODate("2016-11-07T07:10:49.416Z"),
            "lastHeartbeatRecv" : ISODate("2016-11-07T07:10:49.847Z"),
            "pingMs" : 2,
            "configVersion" : 2
        }
    ],
    "ok" : 1
}
2.2.1 Shut down the database
myreplset:SECONDARY> use admin;
switched to db admin
myreplset:SECONDARY> db.shutdownServer();
2016-11-07T15:14:42.548+0800 I NETWORK  DBClientCursor::init call() failed
server should be down...
2016-11-07T15:14:42.571+0800 I NETWORK  trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-11-07T15:14:42.575+0800 W NETWORK  Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2016-11-07T15:14:42.634+0800 I NETWORK  trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-11-07T15:14:42.637+0800 W NETWORK  Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2016-11-07T15:14:42.638+0800 I NETWORK  reconnect 127.0.0.1:27017 (127.0.0.1) failed; couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
(these "Connection refused" messages are expected -- they simply confirm the server has shut down)
2.2.2 Back up, delete, and rebuild the data directory
Back up the data directory of 10.192.203.202:27017 (omitted here).
After the backup is complete, delete and recreate the directory:
rm -rf /data/mongo/data
mkdir /data/mongo/data
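The backup-then-recreate step can be scripted. A minimal sketch, deliberately run against a scratch directory rather than the real /data/mongo/data; the tar-based backup and the DBPATH variable are illustrative assumptions, not commands from the original procedure:

```shell
# Sketch of step 2.2.2. DBPATH points at a throwaway directory here;
# on the real node it would be /data/mongo/data, and mongod must already
# be shut down before any of this runs.
set -e
DBPATH="$(mktemp -d)/data"
mkdir -p "$DBPATH"
echo "dummy" > "$DBPATH/dba.0"   # stand-in for the real data files

# 1. Back up the old directory before destroying anything.
tar -czf "${DBPATH%/data}/data-backup.tgz" -C "${DBPATH%/data}" data

# 2. Delete and recreate an empty data directory; the node repopulates
#    it via an initial sync once mongod is restarted.
rm -rf "$DBPATH"
mkdir "$DBPATH"
```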
2.2.3 Start the database
Start the 10.192.203.202:27017 process:
/usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod.cnf --replSet myreplset --rest
2.2.4 Verification
Check that the instance is healthy and that all of the previous databases are present (the node repopulates them via an initial sync from the primary).
Check whether the disk usage has shrunk.
On inspection, usage dropped to 4.3 GB, a reduction of about 400 MB.
2.3 Rebuild the primary
2.3.1 Switch the primary/secondary roles
Since 201 is currently the primary, the roles of 201 and 202:27017 must be swapped. In this experiment there is only one data-bearing node besides the arbiter; if there were multiple secondaries, first run the following on each of the remaining secondaries:
rs.freeze(300);    (prevents that secondary from being elected primary)
Then execute on 10.192.203.201:27017:
rs.stepDown(30);    (demotes the current primary)
-- The arguments to freeze() and stepDown() are both in seconds.
Run rs.status() to confirm that the roles have switched.
2.3.2 Shut down the database
Stop the 10.192.203.201:27017 process:
myreplset:SECONDARY> use admin;
switched to db admin
myreplset:SECONDARY> db.shutdownServer();
2.3.3 Back up, delete, and rebuild its data directory
Backup omitted.
rm -rf /data/mongo/data
mkdir /data/mongo/data
2.3.4 Start the database
Start the 10.192.203.201:27017 process:
/usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod.cnf --replSet myreplset --rest
2.3.5 Verification
Check that the instance is healthy and that all of the previous databases are present.
Check whether the disk usage has shrunk.
On inspection, usage dropped to 4.3 GB, a reduction of about 400 MB.
-- The arbiter node holds no data and therefore does not need to be rebuilt.
Once the rebuild is complete, you can switch the roles back to the original primary/secondary arrangement.