Solution to Ceph cluster disk with no available space

Fault description

While the OpenStack + Ceph cluster was in use, a virtual machine wrote a large amount of new data, which quickly consumed the cluster's disk space. With no free space left, the virtual machine could no longer run and no operations could be performed on the Ceph cluster.

Fault symptom
  • An error occurred while trying to restart the VM using OpenStack.
  • An error occurred when trying to delete the block device directly with the rbd command.
[root@controller ~]# rbd -p volumes rm volume-c55fd052-212d-4107-a2ac-cf53bfc049be
2015-04-29 05:31:31.719478 7f5fb82f7760  0 client.4781741.objecter  FULL, paused modify 0xe9a9e0 tid 6
  • Viewing the Ceph health status showed the following.
cluster 059f27e8-a23f-4587-9033-3e3679d03b31
 health HEALTH_ERR 20 pgs backfill_toofull; 20 pgs degraded; 20 pgs stuck unclean; recovery 7482/129081 objects degraded (5.796%); 2 full osd(s); 1 near full osd(s)
 monmap e6: 4 mons at {node-5e40.cloud.com=10.10.20.40:6789/0,node-6670.cloud.com=10.10.20.31:6789/0,node-66c4.cloud.com=10.10.20.36:6789/0,node-fb27.cloud.com=10.10.20.41:6789/0}, election epoch 886, quorum 0,1,2,3 node-6670.cloud.com,node-66c4.cloud.com,node-5e40.cloud.com,node-fb27.cloud.com
 osdmap e2743: 3 osds: 3 up, 3 in
        flags full
 pgmap v6564199: 320 pgs, 4 pools, 262 GB data, 43027 objects
        786 GB used, 47785 MB / 833 GB avail
        7482/129081 objects degraded (5.796%)
             300 active+clean
              20 active+degraded+remapped+backfill_toofull

HEALTH_ERR 20 pgs backfill_toofull; 20 pgs degraded; 20 pgs stuck unclean; recovery 7482/129081 objects degraded (5.796%); 2 full osd(s); 1 near full osd(s)
pg 3.8 is stuck unclean for 7067109.597691, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.7d is stuck unclean for 1852078.505139, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.21 is stuck unclean for 7072842.637848, current state active+degraded+remapped+backfill_toofull, last acting [0,2]
pg 3.22 is stuck unclean for 7070880.213397, current state active+degraded+remapped+backfill_toofull, last acting [0,2]
pg 3.a is stuck unclean for 7067057.863562, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.7f is stuck unclean for 7067122.493746, current state active+degraded+remapped+backfill_toofull, last acting [0,2]
pg 3.5 is stuck unclean for 7067088.369629, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.1e is stuck unclean for 7073386.246281, current state active+degraded+remapped+backfill_toofull, last acting [0,2]
pg 3.19 is stuck unclean for 7068035.310269, current state active+degraded+remapped+backfill_toofull, last acting [0,2]
pg 3.5d is stuck unclean for 1852078.505949, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.1a is stuck unclean for 7067088.429544, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.1b is stuck unclean for 7072773.771385, current state active+degraded+remapped+backfill_toofull, last acting [0,2]
pg 3.3 is stuck unclean for 7067057.864514, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.15 is stuck unclean for 7067088.825483, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.11 is stuck unclean for 7067057.862408, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.6d is stuck unclean for 7067083.634454, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.6e is stuck unclean for 7067098.452576, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.c is stuck unclean for 5658116.678331, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.e is stuck unclean for 7067078.646953, current state active+degraded+remapped+backfill_toofull, last acting [2,0]
pg 3.20 is stuck unclean for 7067140.530849, current state active+degraded+remapped+backfill_toofull, last acting [0,2]
pg 3.7d is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.7f is active+degraded+remapped+backfill_toofull, acting [0,2]
pg 3.6d is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.6e is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.5d is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.20 is active+degraded+remapped+backfill_toofull, acting [0,2]
pg 3.21 is active+degraded+remapped+backfill_toofull, acting [0,2]
pg 3.22 is active+degraded+remapped+backfill_toofull, acting [0,2]
pg 3.1e is active+degraded+remapped+backfill_toofull, acting [0,2]
pg 3.19 is active+degraded+remapped+backfill_toofull, acting [0,2]
pg 3.1a is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.1b is active+degraded+remapped+backfill_toofull, acting [0,2]
pg 3.15 is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.11 is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.c is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.e is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.8 is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.a is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.5 is active+degraded+remapped+backfill_toofull, acting [2,0]
pg 3.3 is active+degraded+remapped+backfill_toofull, acting [2,0]
recovery 7482/129081 objects degraded (5.796%)
osd.0 is full at 95%
osd.2 is full at 95%
osd.1 is near full at 93%
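
Before choosing a fix, it also helps to see where the space actually went. The usage report below is just the standard command, run from any node with an admin keyring; its GLOBAL section shows raw used/available capacity and the POOLS section shows which pool (most likely volumes here) is consuming it.

# overall and per-pool usage
ceph df
# per-OSD utilization is listed at the end of the health detail above,
# or, on newer Ceph releases, with: ceph osd df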
Solution 1 (verified)

Add an OSD node, which is also what the official documentation recommends. After the new node was added, Ceph began to rebalance the data and the space used by the existing OSDs started to decrease (a sketch of the commands follows the log below).

2015-04-29 06:51:58.623262 osd.1 [WRN] OSD near full (91%)
2015-04-29 06:52:01.500813 osd.2 [WRN] OSD near full (92%)
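
How the OSD is added depends on how the cluster was deployed. On a ceph-deploy managed cluster of that era it looked roughly like the sketch below; the host name node-new.cloud.com and the device /dev/sdb are placeholders for the actual new node and disk.

# run from the ceph-deploy admin node; host and device names are hypothetical
ceph-deploy disk zap node-new.cloud.com:/dev/sdb
ceph-deploy osd prepare node-new.cloud.com:/dev/sdb
ceph-deploy osd activate node-new.cloud.com:/dev/sdb1
# follow the rebalance until the full/near-full warnings clear
ceph -w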
Solution 2 (theoretical, not verified)

If no new hard disk is available, the only option is a different approach. In the current state Ceph does not accept any read or write operations, so even running Ceph commands is difficult. The idea is to temporarily raise Ceph's full-ratio threshold: the preceding log shows that the full ratio is 95%. Raising it makes writes and deletes possible again; the data should then be deleted as quickly as possible to bring usage back below the threshold (a sketch of this step follows the configuration notes below).

  • Inject the new ratio into the running cluster. This does not make Ceph resynchronize the data, although the monitor service itself may still need to be restarted:
ceph mon tell \* injectargs '--mon-osd-full-ratio 0.98'
  • Modify the configuration file and then restart the monitor service. If you are worried about side effects, use this method with caution. After confirmation on the mailing list, it should not affect data, provided that no virtual machine writes any data to Ceph during the recovery.

By default, the full ratio is 95% and the near-full ratio is 85%, so adjust these values according to your actual situation.

[global]
    mon osd full ratio = .98
    mon osd nearfull ratio = .80
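
Once the higher ratio is in effect and the cluster accepts writes again, free space as quickly as possible and watch usage fall back below the thresholds. A minimal sketch, reusing the volume from the error message above as the data to delete:

# remove RBD images (or snapshots) that are no longer needed
rbd -p volumes rm volume-c55fd052-212d-4107-a2ac-cf53bfc049be
# confirm that the full flag clears and usage drops
ceph -s
ceph df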
Cause analysis

According to the Ceph official documentation, once an OSD reaches the full ratio of 95%, the cluster stops accepting any read or write requests from Ceph clients. That is why the virtual machine could not start after being restarted.
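
To check which thresholds a running monitor is actually using, its admin socket can be queried. This is only a sketch: the socket name depends on the monitor ID, and node-6670 is simply taken from the monmap shown earlier.

# show the monitor's current full/near-full settings (socket path varies per host)
ceph --admin-daemon /var/run/ceph/ceph-mon.node-6670.asok config show | grep full_ratio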

Solution

Following the official recommendation, adding new OSDs is the approach to favor. Temporarily raising the full ratio does work, but it is not recommended: you still have to delete data manually to resolve the situation, and as soon as another node fails the remaining OSDs fill up again. Scaling out the cluster is therefore the best option.

Thoughts

Two points from this incident are worth thinking about:

  • Monitoring: because of a DNS configuration error made while setting up the servers, the monitoring emails could not be sent, so the Ceph WARN messages were never received.
  • The cloud platform itself: because of the way Ceph allocates space, storage on the OpenStack platform is heavily over-provisioned. From a user's point of view, copying a large amount of data is entirely reasonable; the real problem is that the cloud platform had no alerting mechanism of its own (a minimal check is sketched after this list).
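
As a stopgap that does not depend on the platform's own alerting, a small cron job can mail out anything other than HEALTH_OK. This is only a sketch: the cron path and the recipient address ops@example.com are made up, and it assumes a working local mail command.

#!/bin/bash
# hypothetical /etc/cron.hourly/ceph-health-alert: mail any non-OK Ceph health status
STATUS=$(ceph health 2>&1)
if [ "$STATUS" != "HEALTH_OK" ]; then
    echo "$STATUS" | mail -s "Ceph health alert on $(hostname)" ops@example.com
fi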
References
  • http://ceph.com/docs/master/rados/configuration/mon-config-ref/#storage-capacity
