MongoDB cluster refactoring, freeing up disk space


After data is deleted, MongoDB does not return the freed space to the operating system, so disk space is reclaimed here by rebuilding each node's data directory.

1 Experimental Environment

A replica set is configured that consists of the following three nodes (a hypothetical initiation command is sketched after the list):

10.192.203.201:27017 PRIMARY

10.192.203.202:27017 SECONDARY

10.192.203.202:10001 ARBITER
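For reference, a replica set with exactly this topology could be initiated roughly as shown below; the member list mirrors the three addresses above, while everything else (that the command is run on the future primary, the absence of priorities, etc.) is an assumption for illustration, not taken from the original article:

// Run once, connected to the node that is to become PRIMARY (hypothetical sketch).
rs.initiate({
    _id: "myreplset",
    members: [
        { _id: 0, host: "10.192.203.201:27017" },
        { _id: 1, host: "10.192.203.202:27017" },
        { _id: 2, host: "10.192.203.202:10001", arbiterOnly: true }
    ]
});
rs.status();   // the three members should come up as PRIMARY, SECONDARY, and ARBITER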
2 Experimental Steps

2.1 Simulation Environment

use dba;
for (var i = 0; i < 1000000; i++) db.c.insert({uid: i, uname: "osqlfan" + i});
db.c.find().count();   // 1000000
db.stats();
{"
	db": "DBA",
	"collections": 5,
	"Objects": 1000111,
	"avgobjsize": 111.9994880568257,
	" DataSize ": 112011920,
	" storagesize ": 174796800," numextents ":" Indexes ":
	3,
	" Indexsize ": 32475072,
	"fileSize": 469762048, "
	nssizemb": "
	extentfreelist": {
		"num": 0,
		"totalsize ": 0
	},
	" Datafileversion ": {
		" Major ": 4,
		" minor ": +
	},
	" OK ": 1
}

The inserted data has grown the on-disk files by roughly 400 MB:

-rw-------. 1 root 134217728 Nov 7 13:38 dba.1
-rw-------. 1 root 268435456 Nov 7 13:38 dba.2

[root@slave2 ~]# du -sh /data/mongo/data
4.7G    /data/mongo/data
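As a rough cross-check of these numbers, and assuming the usual MMAPv1 preallocation pattern (a first 64 MiB data file, with each subsequent file doubling in size): dba.1 is 134217728 B = 128 MiB and dba.2 is 268435456 B = 256 MiB, so together with a presumed 64 MiB dba.0 the data files total 64 + 128 + 256 = 448 MiB, which matches the fileSize of 469762048 B reported by db.stats() and accounts for the roughly 400 MB of growth noted above.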

 

# Drop the dba.c collection:
myreplset:PRIMARY> db.c.drop();
true
myreplset:PRIMARY> db.c.find().count();
0
myreplset:PRIMARY> db.stats();
{"
db": "DBA",
"Collections": 4,
"Objects": 108,
"avgobjsize": 108.44444444444444,
"datasize ": 11712,
" storagesize ": 61440,
" numextents ": 5,
" Indexes ": 2,
" indexsize ": 16352,
" FileSize ": 469762048,
" Nssizemb ":"
extentfreelist ": {
" num ":
" totalsize ": 212492288
  },
"Datafileversion": {
"Major": 4,
"minor": +
},
"OK": 1
}

Note that dataSize, indexSize, and storageSize have all shrunk, but fileSize is unchanged and the MongoDB data directory still occupies 4.7G.
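As a rough gauge of how much space a rebuild can recover, fileSize can be compared with storageSize (or the extentFreeList.totalSize value above can be inspected directly); the snippet below is an illustrative sketch rather than part of the original procedure:

// Estimate reclaimable space for the "dba" database from the stats shown above.
var s = db.getSiblingDB("dba").stats();
print("fileSize            : " + s.fileSize);
print("storageSize         : " + s.storageSize);
print("free extent space   : " + s.extentFreeList.totalSize);
print("approx. reclaimable : " + (s.fileSize - s.storageSize) + " bytes");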

2.2 Refactor the secondary 10.192.203.202:27017 first

# Check the primary/secondary relationship

myreplset:PRIMARY> rs.status();
{
    "set" : "myreplset",
    "date" : ISODate("2016-11-07T07:10:50.717Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "10.192.203.201:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 964,
            "optime" : Timestamp(1478239977, 594),
            "optimeDate" : ISODate("2016-11-04T06:12:57Z"),
            "electionTime" : Timestamp(1478502021, 1),
            "electionDate" : ISODate("2016-11-07T07:00:21Z"),
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "10.192.203.202:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 628,
            "optime" : Timestamp(1478239977, 594),
            "optimeDate" : ISODate("2016-11-04T06:12:57Z"),
            "lastHeartbeat" : ISODate("2016-11-07T07:10:49.257Z"),
            "lastHeartbeatRecv" : ISODate("2016-11-07T07:10:50.143Z"),
            "pingMs" : 2,
            "configVersion" : 2
        },
        {
            "_id" : 2,
            "name" : "10.192.203.202:10001",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 618,
            "lastHeartbeat" : ISODate("2016-11-07T07:10:49.416Z"),
            "lastHeartbeatRecv" : ISODate("2016-11-07T07:10:49.847Z"),
            "pingMs" : 2,
            "configVersion" : 2
        }
    ],
    "ok" : 1
}

2.2.1 Shut down the database

myreplset:SECONDARY> use admin;
switched to db admin
myreplset:SECONDARY> db.shutdownServer();
2016-11-07T15:14:42.548+0800 I NETWORK  DBClientCursor::init call() failed
server should be down...
2016-11-07T15:14:42.571+0800 I NETWORK  trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-11-07T15:14:42.575+0800 W NETWORK  Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2016-11-07T15:14:42.575+0800 I NETWORK  reconnect 127.0.0.1:27017 (127.0.0.1) failed failed couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
2016-11-07T15:14:42.634+0800 I NETWORK  trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-11-07T15:14:42.637+0800 W NETWORK  Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2016-11-07T15:14:42.638+0800 I NETWORK  reconnect 127.0.0.1:27017 (127.0.0.1) failed failed couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
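These connection errors are expected: the shell has simply lost its connection to the server it just shut down. Before deleting the data directory it is worth confirming that the mongod process really has exited; a simple check on the secondary host (an illustrative sketch, not from the original article) could be:

# No mongod process should remain, and port 27017 should no longer be listening.
ps -ef | grep [m]ongod
netstat -lntp | grep 27017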

2.2.2 Back up, delete, and rebuild the data directory

Back up the data directory of 10.192.203.202:27017 (the backup commands are omitted here; a sketch follows).
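One straightforward way to take that backup, assuming the node has already been shut down and keeping the data path used throughout this article (the backup destination is a hypothetical example):

# Copy the entire data directory to a dated backup location before deleting it.
cp -a /data/mongo/data /data/mongo/data.bak.$(date +%Y%m%d)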

After the backup is complete, delete and rebuild the directory.

rm -rf /data/mongo/data

mkdir /data/mongo/data

2.2.3 Start the database

To start the 10.192.203.202:27017 process:

/usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod.cnf --replSet myreplset --rest

2.2.4 Check

Check that the database is running normally and that all of the previous databases still exist.

Check whether the disk space has shrunk.

After this check, the data directory had shrunk to 4.3G, a reduction of about 400 MB. A sketch of such a check follows.
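A minimal way to carry out these checks, assuming the same data path as above; the exact commands are an illustrative sketch rather than part of the original procedure:

// In the mongo shell on the rebuilt node:
db.adminCommand({ listDatabases: 1 })      // every previously existing database should be listed
rs.status()                                // this member should report stateStr "SECONDARY" once the initial sync finishes
db.getSiblingDB("dba").stats()             // dataSize/storageSize should reflect the dropped collection

# From the operating system, confirm the on-disk footprint:
du -sh /data/mongo/data

The rebuilt node repopulates itself through an initial sync from the primary, which is why all databases reappear even though the directory was emptied.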

2.3 Refactor the primary
2.3.1 Switch the primary/secondary roles

Since 201 is currently the primary, the primary/secondary roles of 201 and 202:27017 must be swapped. In this experiment there is only one data-bearing secondary besides the arbiter; if there were multiple secondaries, you would execute the following on each of the remaining secondaries:

rs.freeze(300); (freezes the secondary so that it cannot be elected primary during this window)

Then execute at 10.192.203.201:27017: rs.stepDown(30); (demotes it from primary)

-- The arguments to rs.freeze() and rs.stepDown() are both in seconds.

Run rs.status() to confirm that the primary/secondary roles have been switched; a consolidated sketch of the whole switchover follows.
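Put together, the switchover can be summarized as follows; the 300- and 30-second values mirror the ones above, but this consolidated sequence is an illustrative sketch rather than a transcript from the original experiment:

// On every secondary that should NOT take over as primary (not needed in this
// single-secondary setup, shown only for completeness):
rs.freeze(300)        // refuse to stand for election for 300 seconds

// On the current primary, 10.192.203.201:27017:
rs.stepDown(30)       // step down and do not seek re-election for 30 seconds

// On any member, confirm the new roles:
rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr); })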

2.3.2 Shut down the database

To stop the 10.192.203.201:27017 process:

myreplset:SECONDARY> use admin;
switched to db admin
myreplset:SECONDARY> db.shutdownServer();

2.3.3 Back up, delete, and rebuild the data directory

The backup is omitted here (see 2.2.2).

rm -rf /data/mongo/data

mkdir /data/mongo/data

2.3.4 Start the database

To start the 10.192.203.201:27017 process:

/usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod.cnf --replSet myreplset --rest

2.3.5 Check

Check that the database is running normally and that all of the previous databases still exist.

Check whether the disk space has shrunk.

After this check, the data directory on this node had likewise shrunk to 4.3G, a reduction of about 400 MB.

-- The arbiter node does not need to be rebuilt, since it holds no data.

Once the refactoring is complete, you can switch the roles back to the original primary/secondary arrangement; a sketch follows.
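Assuming 10.192.203.202:27017 took over as primary while 201 was being rebuilt (which is what the earlier stepDown arranged), switching back is just another stepDown issued on 202; an illustrative sketch:

// On 10.192.203.202:27017, the current primary: step down so 10.192.203.201:27017 can be elected again.
rs.stepDown(30)

// Confirm that the original roles are restored.
rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr); })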
