An alarm raised by a Laravel queue

A server raised an alarm: memory consumption was too high. Strangely, the other servers in the cluster were fine. But experience teaches that behind every strange problem hides a ridiculous answer.

First I confirmed the memory situation with "free -m": 6893M used, only 976M left:

(screenshot: output of free -m)
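The numbers free reports can also be read straight from /proc/meminfo; note that MemAvailable (kernel 3.14+) already subtracts reclaimable caches, so it is the figure to trust here:

```shell
# Read the headline memory figures directly from /proc/meminfo (values are in kB).
# MemAvailable already accounts for reclaimable caches such as slab.
awk '/^(MemTotal|MemFree|MemAvailable):/ { printf "%-13s %6d MiB\n", $1, $2/1024 }' /proc/meminfo
```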

Then I used "top" to see which processes were consuming the memory, sorting by memory with "Shift+M":

(screenshot: output of top)
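For a non-interactive equivalent of the top check, ps can sort by resident memory directly:

```shell
# Top memory consumers by resident set size (RSS), largest first.
ps aux --sort=-rss | head -n 6
```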

The free command confirms that the system is short on available memory, yet top shows no process occupying it. So where did the memory go?

As mentioned at the start, only one server in the cluster had the problem while the others were normal, so we compared the process lists of the problem server and a normal server. The problem server had a few extra processes:

/usr/local/bin/php artisan queue:listen

/usr/local/bin/php artisan queue:work

These are Laravel queue processes. Intuition said the problem was related to them, but the processes themselves did not use much memory, and we had no immediate diagnosis. By way of elimination, we moved the queue to another, healthy server to see whether the problem would reproduce. After a while, sure enough, the same thing happened there.
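The process-list comparison step can be sketched with comm; the two files below are stand-ins for dumps of `ps -eo args= | sort -u` captured on each host:

```shell
# Hypothetical process lists from a healthy and a problem server
# (in practice: ps -eo args= | LC_ALL=C sort -u > good.txt, run on each host).
printf 'nginx\nphp-fpm\n'                           > /tmp/good.txt
printf 'nginx\nphp artisan queue:listen\nphp-fpm\n' > /tmp/bad.txt
# Lines present only on the problem server:
LC_ALL=C comm -13 /tmp/good.txt /tmp/bad.txt
```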

Since free, top, and the like could not account for the memory, we might as well look at "/proc/meminfo":

(screenshot: /proc/meminfo)

As shown, a large amount of memory is consumed by Slab, and more specifically by SReclaimable; that is, the memory is held by reclaimable slab caches. Further detail is available through "slabtop":
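The relevant fields can be pulled out of /proc/meminfo directly; SReclaimable plus SUnreclaim should add up to Slab:

```shell
# Slab totals: SReclaimable is cache the kernel can free under pressure,
# SUnreclaim is not.
grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
```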

(screenshot: output of slabtop)

A lot of the memory is consumed by dentry objects. If, like me, you don't know what that means, search for it (just don't use Baidu). I found the following articles:

    • Linux server cache taking up too much memory: troubleshooting system out-of-memory problems

    • Linux server cache taking up too much memory: troubleshooting system memory problems (cont.)

In short, the dentry cache holds information about recently accessed files; if a large number of files are operated on frequently, the dentry cache keeps growing. So the question becomes: does the Laravel queue do anything like that?
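The mechanism is easy to reproduce. Every uniquely named path touched leaves an entry in the dentry cache, so a loop like this (a synthetic demo, not Laravel code) steadily grows the SReclaimable figure in /proc/meminfo:

```shell
# Create and delete 1000 uniquely named files; each distinct name
# leaves a (negative) dentry behind even after the file is removed.
for i in $(seq 1 1000); do
    f="/tmp/dentry-demo.$i"
    : > "$f" && rm -f "$f"
done
```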

As mentioned earlier, the Laravel queue has a listen process and a work process. From the names we can tell that the former is the parent and the latter the child, and that the child does the actual work. But when I tried to strace the child directly, it told me the process did not exist; further debugging revealed that the child keeps restarting!

Since the restarting child cannot be traced directly, we can instead trace its file operations from the parent (strace's "-f" option follows child processes):

shell> strace -f -e trace=open,close,stat,unlink -p $(ps aux | grep "[q]ueue:listen" | awk '{print $2}')

Unfortunately, Laravel bills itself as a full-featured framework and depends on a pile of files, so the trace output was flooded with the framework's own perfectly normal open, close, and stat calls. Let's try tracing only unlink instead:

shell> strace -f -e trace=unlink -p $(ps aux | grep "[q]ueue:listen" | awk '{print $2}')

This showed the Laravel queue deleting files frequently: one unlink each time the child process restarted:

unlink("/tmp/.ZendSem.axaa3z")
unlink("/tmp/.ZendSem.teqg0y")
unlink("/tmp/.ZendSem.bn3ien")
unlink("/tmp/.ZendSem.v4s8rx")
unlink("/tmp/.ZendSem.pnnutn")

Because each temporary file gets a different name, a large number of dentry cache entries pile up. Reading the Laravel queue documentation, it turns out the queue provides a daemon mode in which the worker does not restart, so temporary files are not created over and over and the dentry cache stops growing; this mode is the recommended fix.
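Under a process supervisor, a daemon-mode worker setup might look like the sketch below; all paths are assumptions to adjust for your deployment, and "--sleep"/"--tries" are optional tuning flags:

```ini
; Sketch of a Supervisor program entry for one long-lived queue worker.
; /path/to/artisan is a placeholder for your application's artisan script.
[program:laravel-worker]
command=/usr/local/bin/php /path/to/artisan queue:work --daemon --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=1
```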

If frequently creating large numbers of temporary files cannot be avoided, then, per the Linux kernel documentation, we can set drop_caches to discard the reclaimable slab (including dentries and inodes). This is the cruder approach:

shell> echo 2 > /proc/sys/vm/drop_caches

Alternatively, setting vfs_cache_pressure to a value above 100 (200 below is only an example) makes the kernel more willing to reclaim dentries and inodes, which is gentler:

shell> echo 200 > /proc/sys/vm/vfs_cache_pressure

Judging from my tests, the effect of vfs_cache_pressure was limited; then again, maybe I was holding it wrong. Some sources say min_free_kbytes can also address this kind of problem, but given the risks of that parameter I suggest not touching it. I won't say more here; interested readers can look it up themselves.
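Either tunable set by echoing into /proc is lost on reboot; to persist it, a sysctl configuration fragment like this can be used (the value is illustrative, not a recommendation):

```ini
# /etc/sysctl.d/90-vfs-cache.conf (hypothetical file name; apply with sysctl --system)
vm.vfs_cache_pressure = 200
```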
