Freeing up disk space by rebuilding a MongoDB cluster
Because MongoDB (with the MMAPv1 storage engine) does not return disk space to the operating system after data is deleted, the space is freed by rebuilding each node's data directory.
1 Experimental environment
A replica set is configured, which consists of the following three nodes:
10.192.203.201:27017 PRIMARY
10.192.203.202:27017 SECONDARY
10.192.203.202:10001 ARBITER
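For reference, a replica set with this topology could be initiated as follows. This is a minimal sketch only; the actual configuration commands used for this cluster are not shown in the original steps:
rs.initiate({
    _id: "myreplset",
    members: [
        { _id: 0, host: "10.192.203.201:27017" },
        { _id: 1, host: "10.192.203.202:27017" },
        { _id: 2, host: "10.192.203.202:10001", arbiterOnly: true }
    ]
});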
2 Experimental steps
2.1 Simulating the environment
use dba;
for (var i = 0; i < 1000000; i++) db.c.insert({uid: i, uname: 'osqlfan' + i});
db.c.find().count();  // 1000000
db.stats();
{
    "db" : "dba",
    "collections" : 5,
    "objects" : 1000111,
    "avgObjSize" : 111.9994880568257,
    "dataSize" : 112011920,
    "storageSize" : 174796800,
    "numExtents" : …,
    "indexes" : 3,
    "indexSize" : 32475072,
    "fileSize" : 469762048,
    "nsSizeMB" : 16,
    "extentFreeList" : {
        "num" : 0,
        "totalSize" : 0
    },
    "dataFileVersion" : {
        "major" : 4,
        "minor" : 22
    },
    "ok" : 1
}
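To see which collection accounts for the space, collection-level statistics can be checked as well; a quick illustration (output omitted here and will vary):
myreplset:PRIMARY> db.c.stats();
// reports per-collection size, storageSize, numExtents and totalIndexSize for dba.c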
Inserting the data grew disk usage by roughly 400 MB:
-rw-------. 1 root root 134217728 Nov  7 13:38 dba.1
-rw-------. 1 root root 268435456 Nov  7 13:38 dba.2
[root@slave2 ~]# du -sh /data/mongo/data
4.7G    /data/mongo/data
# Drop the dba.c collection:
myreplset:PRIMARY> db.c.drop();
true
myreplset:PRIMARY> db.c.find().count();
0
myreplset:PRIMARY> db.stats();
{
    "db" : "dba",
    "collections" : 4,
    "objects" : 108,
    "avgObjSize" : 108.44444444444444,
    "dataSize" : 11712,
    "storageSize" : 61440,
    "numExtents" : 5,
    "indexes" : 2,
    "indexSize" : 16352,
    "fileSize" : 469762048,
    "nsSizeMB" : 16,
    "extentFreeList" : {
        "num" : …,
        "totalSize" : 212492288
    },
    "dataFileVersion" : {
        "major" : 4,
        "minor" : 22
    },
    "ok" : 1
}
Note that dataSize, indexSize, and storageSize have all shrunk, but fileSize is unchanged and the mongod data directory still occupies 4.7 GB; the freed extents merely moved to the extentFreeList.
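As an aside, db.repairDatabase() can also rewrite and compact a database's data files under MMAPv1, but it blocks the node and needs free disk space roughly equal to the current data set plus 2 GB, which is why this walkthrough rebuilds the nodes one at a time instead. A minimal sketch of that alternative:
use dba;
db.repairDatabase();  // rewrites dba's data files; blocks the node while running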
2.2 Rebuilding the secondary 10.192.203.202:27017 first
# Check the replica set roles:
myreplset:PRIMARY> rs.status();
{"Set": "Myreplset", "date": Isodate ("2016-11-07t07:10:50.717z"), "MyState": 1, "members": [{
"_id": 0, "name": "10.192.203.201:27017", "Health": 1, "State": 1, "Statestr": "PRIMARY", "uptime": 964, "optime ": Timestamp (1478239977, 594)," Optimedate ": Isodate (" 2016-11-04t06:12:57z ")," ele
Ctiontime ": Timestamp (1478502021, 1)," Electiondate ": Isodate (" 2016-11-07t07:00:21z "),
"ConfigVersion": 2, "Self": true}, {"_id": 1,
"Name": "10.192.203.202:27017", "Health": 1, "state": 2, "Statestr": "Secondary", "uptime": 628, "optime": Timestamp(1478239977, 594), "Optimedate": Isodate ("2016-11-04t06:12:57z"), "Lastheartbeat"
: Isodate ("2016-11-07t07:10:49.257z"), "Lastheartbeatrecv": Isodate ("2016-11-07t07:10:50.143z"), "Pingms": 2, "ConfigVersion": 2}, {"_id":
2, "name": "10.192.203.202:10001", "Health": 1, "state": 7, "Statestr": "Arbiter", "uptime": 618, "Lastheartbeat": Isodate (
"2016-11-07t07:10:49.416z"), "Lastheartbeatrecv": Isodate ("2016-11-07t07:10:49.847z"), "Pingms": 2, "ConfigVersion": 2}], "OK": 1}
2.2.1 Shutting down the database
myreplset:SECONDARY> use admin;
switched to db admin
myreplset:SECONDARY> db.shutdownServer();
2016-11-07T15:14:42.548+0800 I NETWORK  DBClientCursor::init call() failed
server should be down...
2016-11-07T15:14:42.571+0800 I NETWORK  trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-11-07T15:14:42.575+0800 W NETWORK  Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2016-11-07T15:14:42.575+0800 I NETWORK  reconnect 127.0.0.1:27017 (127.0.0.1) failed failed couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
2016-11-07T15:14:42.634+0800 I NETWORK  trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-11-07T15:14:42.637+0800 W NETWORK  Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2016-11-07T15:14:42.638+0800 I NETWORK  reconnect 127.0.0.1:27017 (127.0.0.1) failed failed couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
These connection errors are expected: the shell simply lost its connection once mongod stopped.
2.2.2 Backing up, deleting, and rebuilding the data directory
Back up the 10.192.203.202:27017 data directory (steps omitted here).
After the backup is complete, delete and recreate the directory:
rm -rf /data/mongo/data
mkdir /data/mongo/data
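If mongod runs as a dedicated user rather than root (in this experiment the files are owned by root, so a plain mkdir suffices), recreate the directory with matching ownership; a hedged sketch assuming a "mongod" service user:
mkdir -p /data/mongo/data
chown mongod:mongod /data/mongo/data   # adjust user/group to your installation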
2.2.3 Starting the database
Start the 10.192.203.202:27017 process:
/usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod.cnf --replSet myreplset --rest
2.2.4 Check
Check that the database is healthy and that the previously existing databases are still there.
Check that disk usage has dropped.
After inspection, the data directory has shrunk to 4.3 GB, a reduction of about 400 MB.
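While the empty node resyncs from the primary, its progress can be watched from any member; an illustrative check:
rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr); });
// the rebuilt node passes through STARTUP2 (initial sync) and RECOVERING
// before returning to SECONDARY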
2.3 Rebuilding the primary
2.3.1 Switching the primary and secondary roles
Since 201 is currently the primary, the roles of 201 and 202:27017 must be switched. Apart from the arbiter, this experiment has only one secondary; if there were more than one, each of the remaining secondaries would first have to
run: rs.freeze(300); (freeze the secondary so that it cannot be elected primary)
Then, on 10.192.203.201:27017, run: rs.stepDown(30); (step it down to a secondary)
-- The arguments to rs.freeze() and rs.stepDown() are both in seconds.
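Putting the two commands together (the freeze step is shown only for completeness, since this topology has no other secondaries):
// On every other secondary, if any existed:
rs.freeze(300);     // refuse to become primary for 300 seconds
// On the current primary 10.192.203.201:27017:
rs.stepDown(30);    // step down and stay ineligible for 30 seconds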
Run rs.status() to confirm that the roles have switched.
2.3.2 Shutting down the database
Stop the 10.192.203.201:27017 process:
myreplset:SECONDARY> use admin;
switched to db admin
myreplset:SECONDARY> db.shutdownServer();
2.3.3 Backing up, deleting, and rebuilding its data directory
Backup steps omitted.
rm -rf /data/mongo/data
mkdir /data/mongo/data
2.3.4 Starting the database
Start the 10.192.203.201:27017 process:
/usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod.cnf --replSet myreplset --rest
2.3.5 Check
Check that the database is healthy and that the previously existing databases are still there.
Check that disk usage has dropped.
After inspection, this node's data directory has also shrunk to 4.3 GB, a reduction of about 400 MB.
-- The arbiter node does not need to be rebuilt.
After the rebuild is complete, you can switch the roles back to the original primary/secondary layout.
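For example, once 201 has caught up again, stepping down the current primary on 202 hands the role back; a minimal sketch:
// On 10.192.203.202:27017, which is primary at this point:
rs.stepDown(30);
// 201 is the only other electable, data-bearing node, so it becomes primary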