JBD2 results in high disk IO utilization

Tags: system log

A few days ago I found the jbd2 process occupying a large amount of disk IO. Viewed with iotop, the situation looked roughly like this:

System version: CentOS 6.5 (64-bit)

[Screenshot: iotop output showing jbd2 with high disk IO utilization]

[Screenshot: iotop output]

It was found to be a bug in the ext4 filesystem:

[Screenshot: ext4 bug report]

The solutions first, in order of priority (a combined command sketch follows the list):

1. Upgrade the system kernel with yum update kernel and reboot, then check whether the problem is gone;

2. Mitigation: raise the commit interval to reduce the number of journal commits, or disable the barrier feature;

The recommended mount options are: defaults,noatime,nodiratime,barrier=0,data=writeback,commit=60

(Apply them by editing the fstab entry and remounting, or with mount -o remount.)

3. Use with caution: disable the filesystem journal entirely with tune2fs -O "^has_journal", for example: tune2fs -O "^has_journal" /dev/mapper/volgroup-lv_home (the filesystem must be unmounted first).
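Taken together, a minimal command sketch for the three options might look like this (the volgroup-lv_home device comes from the example above; the /home mount point is an assumption, adjust to your system):

# Option 1: upgrade the kernel and reboot (back up data first)
yum update kernel
reboot

# Option 2: mitigation - remount with a longer commit interval and barriers off;
# for a permanent change put the options in /etc/fstab, e.g.:
#   /dev/mapper/volgroup-lv_home /home ext4 defaults,noatime,nodiratime,barrier=0,data=writeback,commit=60 0 0
# (data=writeback generally cannot be switched on a live remount, so set it
# via fstab and remount or reboot)
mount -o remount,noatime,nodiratime,barrier=0,commit=60 /home

# Option 3 (use with caution): remove the ext4 journal entirely;
# the filesystem must be unmounted first
umount /home
tune2fs -O "^has_journal" /dev/mapper/volgroup-lv_home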

-----------------------------------------------------------------------

After researching and collating the relevant information, the background is as follows:

1. JBD stands for Journaling Block Device, the filesystem journaling layer; jbd2 is the version used by ext4. Its heavy activity means filesystem operations are frequent enough to create serious IO pressure.

Common filesystems use journaling to guarantee integrity: metadata is written to the journal before the new data is written to disk, so if an error occurs before or after the actual data write, the journal allows an easy rollback to the previous consistent state, preventing filesystem corruption after a crash.
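To confirm that a given ext4 filesystem is journaled (and hence served by jbd2), its feature flags can be inspected; a quick check, with the device name taken from the example above:

# "has_journal" in the feature list means the filesystem uses a jbd2 journal
dumpe2fs -h /dev/mapper/volgroup-lv_home | grep -i features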

2. Modern disks generally have internal write caches that reorder batched writes to optimize performance. The filesystem must therefore make sure the journal data has reached the disk before it writes the commit record (commit=xx means all data and metadata are synchronized every xx seconds; the default is every 5 seconds). If the commit record were written first and the journal then got corrupted, data integrity would suffer. Ext4 therefore enables the barrier feature by default: data issued before a barrier must be on disk before data issued after it may be written, and this ordering guarantees data integrity.

3. The drawback of the barrier feature is its performance cost; you can disable it by mounting with mount -o barrier=0.

You can check whether barriers are enabled by looking for barrier=1 in /proc/mounts.
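For example, to inspect and then change the barrier setting on a mounted filesystem (the /home mount point is an assumption; on some kernels the option may appear as nobarrier instead of barrier=0):

# show the current mount options; barrier=1 means barriers are enabled
grep /home /proc/mounts
# disable barriers on the live mount, then verify
mount -o remount,barrier=0 /home
grep /home /proc/mounts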

4. The bug itself: file writes and requests keep incrementing an int counter in the jbd2 code until it finally overflows its range and goes negative, which triggers the bug.

For the underlying details, see the following link:

http://blog.donghao.org/2013/03/20/%E4%BF%AE%E5%A4%8Dext4%E6%97%A5%E5%BF%97%EF%BC%88jbd2%EF%BC%89bug/

5. So we can either reduce the journal commit frequency to relieve the IO pressure (the commit parameter) or disable the barrier feature (barrier=0);

The commit change I tried is attached below, but the improvement was not obvious. Modifying commit is not a complete solution; further study is needed.

mount -o remount,commit=60 /data
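To confirm the new interval took effect (the /data mount point is from the command above):

# commit=60 should now appear among the mount options
grep /data /proc/mounts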

Given that this bug has been discussed publicly for some time and temporary patches exist, the official kernel has likely received the corresponding fix by now. So when you hit this problem, the first suggestion is to upgrade the kernel (back up your data before upgrading) and reboot to see whether that resolves it.


This article is from the "Dream to Reality" blog; please keep this source: http://lookingdream.blog.51cto.com/5177800/1791734

