Linux Debugging: Call Trace, dump_stack, and soft lockup (CPU stuck)

Source: Internet
Author: User
Tags: taint, tainted, rpmbuild


Call Trace prints the current stack of function calls.

Kernel-state Call Trace

There are three types of error in the kernel state: bug, oops, and panic.

A bug is a minor error, such as calling a function that may sleep while holding a spin_lock, which creates a potential deadlock.

An oops means an error occurred in process context and the kernel must kill that process. If the process is holding semaphores or other locks at that moment, they will never be released, which can leave the system potentially unstable.

A panic is a severe error that means the entire system has crashed.

OOPS

First, let's look at how the oops case is handled. When the Linux kernel oopses, it enters the die function in traps.c:

int die(const char *str, struct pt_regs *regs, long err)
{
	...
	show_regs(regs);
	...
}

Inside show_regs(struct pt_regs *regs), the show_stack function is invoked, which prints the kernel-state stack.

The specific principle is:

Find the current stack frame from a register. Inside that frame there is a pointer to the caller's stack frame; following it gives the caller's frame, and so on, level by level, up the call chain.

In the PowerPC EABI standard, the bottom of the current stack frame (note: the bottom of the frame, i.e. the address of the frame header, not the top of the stack) is stored in register GPR1. In the stack space GPR1 points to, the first word is the frame-header pointer of the calling function (the back chain word), and the second word is the return address of the current function in its caller (the LR save word). Backtracking one level at a time in this way produces the entire call dump. Besides this method, the built-in function __builtin_frame_address should in theory also be usable, although it is not used in the kernel. (The ftrace module in 2.6.29 uses the __builtin_return_address function.)
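As an illustration only, here is a minimal user-space sketch that walks the frame layout just described (back chain word at offset 0, LR save word at offset 1). The function name dump_call_chain is made up for the example, and the exact layout should be checked against the ABI of the target platform; on architectures other than PowerPC you may also need to build with -fno-omit-frame-pointer.

#include <stdio.h>

static void dump_call_chain(void)
{
	unsigned long *frame = __builtin_frame_address(0);

	/* frame[0] is the back chain word (caller's frame header),
	 * frame[1] is the LR save word (return address), as described above. */
	while (frame && frame[0]) {
		printf("frame %p, return address 0x%lx\n",
		       (void *)frame, frame[1]);
		frame = (unsigned long *)frame[0];
	}
}

int main(void)
{
	dump_call_chain();
	return 0;
}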

During a call trace, the show_regs function only prints the stack information with printk. If the system has no terminal attached, the kernel needs to be modified to save the stack information somewhere else, according to your requirements.

For example, you can reserve a region of the system's flash to store the printed information, then write a kernel module and register a callback with the die function. Whenever the callback is invoked, the custom kernel module is notified and can save the call stack and other interesting information to the dedicated flash region. One thing to note is that the kernel may be unstable during an oops, so to make sure the information is written to flash correctly, try not to rely on interrupts in the flash-writing routine; poll instead. Likewise, avoid semaphores, sleeping, and anything else that may block. A minimal sketch of such a module follows.
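For illustration only, the sketch below hooks into the die path. It assumes the standard die notifier chain (register_die_notifier) rather than patching die() directly, and the flash-writing helper save_to_flash is hypothetical; a real implementation would poll the flash controller and never sleep, as discussed above.

#include <linux/module.h>
#include <linux/kdebug.h>
#include <linux/notifier.h>
#include <linux/string.h>

/* Hypothetical helper: writes by polling the flash controller, no interrupts, no sleeping. */
extern void save_to_flash(const char *buf, size_t len);

static int oops_die_handler(struct notifier_block *nb,
			    unsigned long val, void *data)
{
	struct die_args *args = data;

	/* Save whatever is interesting; here only the reason string as an example. */
	if (args && args->str)
		save_to_flash(args->str, strlen(args->str));
	return NOTIFY_DONE;
}

static struct notifier_block oops_die_nb = {
	.notifier_call = oops_die_handler,
};

static int __init oops_flash_init(void)
{
	return register_die_notifier(&oops_die_nb);
}

static void __exit oops_flash_exit(void)
{
	unregister_die_notifier(&oops_die_nb);
}

module_init(oops_flash_init);
module_exit(oops_flash_exit);
MODULE_LICENSE("GPL");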

In addition, because the system is still running after an oops, you can also send a message to user space (a signal, netlink, and so on) to tell a user-space program to collect some information.

Panic

In a panic, Linux is in its most severe error state: the whole system is unusable, that is, interrupts, process scheduling, and so on have stopped, but the stack has not been destroyed. So in theory the stack backtracking described for oops still works, and the printk function can also be used because it does not block.
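If you need to run your own code at panic time (for example, to stash the backtrace somewhere persistent), one common approach, shown here only as a hedged sketch, is to register on the kernel's panic notifier chain; as in the oops case, the handler must not block.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/notifier.h>

static int my_panic_handler(struct notifier_block *nb,
			    unsigned long event, void *msg)
{
	/* msg is the panic message string; only non-blocking work is allowed here. */
	printk(KERN_EMERG "panic callback: %s\n", (const char *)msg);
	return NOTIFY_DONE;
}

static struct notifier_block my_panic_nb = {
	.notifier_call = my_panic_handler,
};

static int __init panic_hook_init(void)
{
	atomic_notifier_chain_register(&panic_notifier_list, &my_panic_nb);
	return 0;
}

static void __exit panic_hook_exit(void)
{
	atomic_notifier_chain_unregister(&panic_notifier_list, &my_panic_nb);
}

module_init(panic_hook_init);
module_exit(panic_hook_exit);
MODULE_LICENSE("GPL");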

User-state Call Trace

A user program can produce a call trace in the following situations, to facilitate debugging:

- The program crashes and receives a signal. The Linux system can automatically print a call trace when certain signals are received.

- Checkpoints are added to the user program, similar to the assert mechanism, and a call trace is executed when the checkpoint condition is not satisfied (a sketch of such a checkpoint follows below).
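As a sketch of the second point, a checkpoint macro might look like the following. All names here are invented for illustration; it relies on glibc's backtrace/backtrace_symbols_fd and, unlike assert, keeps running after printing the call trace.

#include <execinfo.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical assert-like checkpoint: if the condition fails,
 * print a call trace to stderr but do not abort. */
#define CHECKPOINT(cond)						\
	do {								\
		if (!(cond)) {						\
			void *frames[32];				\
			int n = backtrace(frames, 32);			\
			fprintf(stderr, "checkpoint failed: %s (%s:%d)\n", \
				#cond, __FILE__, __LINE__);		\
			backtrace_symbols_fd(frames, n, STDERR_FILENO);	\
		}							\
	} while (0)

int main(void)
{
	int ready = 0;
	CHECKPOINT(ready == 1);	/* condition fails, so a call trace is printed */
	return 0;
}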

A call trace in user state works the same way as in the kernel state and follows the EABI standard, as described below:

The GNU toolchain provides a built-in function, __builtin_frame_address. It returns the frame-header pointer (stack bottom) of the current execution context, which is also a pointer to the back chain word, and is used to obtain the current call stack. In that frame there is a frame pointer to the calling function one level up, which leads to the caller's frame, and so on, completing the call dump.

Once you have a function's address, you can obtain the function name from the symbol table. If the function is defined in a dynamic library, you can also use the extension function dladdr to obtain the dynamic-library information for that function.
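As a hedged illustration of both points, the sketch below installs a SIGSEGV handler that prints the call stack using glibc's backtrace/backtrace_symbols and resolves each address with dladdr; on platforms without these facilities the manual frame walk shown earlier can be used instead.

#define _GNU_SOURCE
#include <execinfo.h>
#include <dlfcn.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void crash_handler(int sig)
{
	void *frames[32];
	int n = backtrace(frames, 32);           /* collect return addresses */
	char **names = backtrace_symbols(frames, n);
	Dl_info info;
	int i;

	fprintf(stderr, "caught signal %d, call trace:\n", sig);
	for (i = 0; i < n; i++) {
		fprintf(stderr, "  %s\n", names ? names[i] : "?");
		/* dladdr gives the shared object and nearest symbol, if any */
		if (dladdr(frames[i], &info) && info.dli_sname)
			fprintf(stderr, "    in %s (%s)\n",
				info.dli_sname, info.dli_fname);
	}
	free(names);
	_exit(1);
}

int main(void)
{
	signal(SIGSEGV, crash_handler);
	*(volatile int *)0 = 0;                  /* deliberate crash to demonstrate */
	return 0;
}

Link with -rdynamic (and -ldl where required) so that symbol names are visible. Note that backtrace_symbols allocates memory, which is not strictly async-signal-safe; for a debugging aid this is usually acceptable.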


Linux kernel bug: soft lockup CPU#1 stuck analysis

1. Online kernel BUG log

kernel: Modules linked in: fuse ipv6 power_meter bnx2 sg microcode serio_raw iTCO_wdt iTCO_vendor_support hpilo hpwdt i7core_edac edac_core shpchp ext4 mbcache jbd2 sd_mod crc_t10dif hpsa radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
kernel: Pid: 5483, comm: master Not tainted 2.6.32-220.el6.x86_64 #1
kernel: Call Trace:
kernel: [<ffffffff81069b77>] ? warn_slowpath_common+0x87/0xc0
kernel: [<ffffffff81069bca>] ? warn_slowpath_null+0x1a/0x20
kernel: [<ffffffff810ea8ae>] ? rb_reserve_next_event+0x2ce/0x370
kernel: [<ffffffff810eab02>] ? ring_buffer_lock_reserve+0xa2/0x160
kernel: [<ffffffff810ec97c>] ? trace_buffer_lock_reserve+0x2c/0x70
kernel: [<ffffffff810ecb16>] ? trace_current_buffer_lock_reserve+0x16/0x20
kernel: [<ffffffff8107ae1e>] ? ftrace_raw_event_hrtimer_cancel+0x4e/0xb0
kernel: [<ffffffff81095e7a>] ? hrtimer_try_to_cancel+0xba/0xd0
kernel: [<ffffffff8106f634>] ? do_setitimer+0xd4/0x220
kernel: [<ffffffff8106f88a>] ? alarm_setitimer+0x3a/0x60
kernel: [<ffffffff8107c27e>] ? sys_alarm+0xe/0x20
kernel: [<ffffffff8100b308>] ? tracesys+0xd9/0xde
kernel: ---[ end trace 4d0a1ef2e62cb1a2 ]---

2. Cause analysis of the kernel soft lockup bug

What the name soft lockup means: this bug does not make the system panic completely, but one or more processes (or kernel threads) are locked up in some state, generally in kernel space; in many cases this is caused by the use of kernel locks.

The Linux kernel has a monitoring thread for each CPU, known as the watchdog. With ps -ef | grep watchdog you can see these threads; the name looks like watchdog/x, where x is the logical CPU number (0, 1, 2, 3, and so on). The thread is supposed to run once every second and sleep the rest of the time; each time it runs, it records the current time in a per-CPU kernel data structure. A periodic interrupt handler in the kernel performs the soft lockup check: it compares the current timestamp with the one saved for the corresponding CPU, and if the difference exceeds the configured threshold, it concludes that the watchdog thread on that CPU has not run for a considerable time and reports a soft lockup.

Why does a CPU soft lockup occur, and how? If the Linux kernel's CPU scheduler is carefully designed, how can one be produced? It can only come from something introduced by user development or third-party software; in our case the server's kernel panic was caused by the qmgr process (qmgr is a background message-queue service process). A task stuck in an infinite loop keeps one CPU busy the whole time, at a certain priority. If the CPU scheduler dispatches a driver to run and that driver has a problem that goes undetected, the driver can occupy the CPU for a long time. As described above, the watchdog catches this and reports a soft lockup error. A soft lockup hangs that CPU and can make your system unusable. A small kernel-module sketch that reproduces the situation follows.
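To make the mechanism concrete, here is a hedged sketch (module and function names are invented for the example) of a kernel module whose thread spins with preemption disabled; because the per-CPU watchdog thread can then never run on that CPU, the kernel reports a soft lockup once the threshold is exceeded. Do not load anything like this on a production machine.

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/preempt.h>

static struct task_struct *spin_task;

static int spin_fn(void *data)
{
	preempt_disable();              /* stop the scheduler from switching us out */
	while (!kthread_should_stop())
		cpu_relax();            /* busy-wait, never call schedule() */
	preempt_enable();
	return 0;
}

static int __init softlockup_demo_init(void)
{
	spin_task = kthread_run(spin_fn, NULL, "softlockup_demo");
	return IS_ERR(spin_task) ? PTR_ERR(spin_task) : 0;
}

static void __exit softlockup_demo_exit(void)
{
	kthread_stop(spin_task);
}

module_init(softlockup_demo_init);
module_exit(softlockup_demo_exit);
MODULE_LICENSE("GPL");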

If the problem is caused by a process or thread in user space, the backtrace carries no useful content; if the culprit is a kernel thread, its backtrace is shown in the soft lockup message.

3. Analyzing the error according to the Linux kernel source code

Let us analyze the specific cause of the error message and call trace thrown by the kernel in the log above (the code involved belongs to the Linux kernel tracing subsystem).

First, install the Linux kernel source code matching our CentOS version. The specific steps are as follows:

(1) Download the kernel source RPM package: kernel-2.6.32-220.17.1.el6.src.rpm

(2) Install the build dependencies: yum install rpm-build redhat-rpm-config asciidoc newt-devel

(3) Install the source package: rpm -i kernel-2.6.32-220.17.1.el6.src.rpm

(4) Enter the spec directory: cd ~/rpmbuild/SPECS

(5) Unpack the source tree: rpmbuild -bp --target=`uname -m` kernel.spec

Now we begin the real analysis of the source code according to the kernel bug log:

(1) Analysis of the first phase of the kernel error log (the entries timestamped Dec 4 14:03:34). The code analyzed in this phase does not itself cause the CPU soft lockup; it is mainly the second phase of the error log that shows what causes it.

We first locate the relevant source code through the log. See the following line:

Dec 4 14:03:34 bp-yzh-1-xxxx kernel: WARNING: at kernel/trace/ring_buffer.c:1988 rb_reserve_next_event+0x2ce/0x370() (Not tainted)

Based on the log we can easily navigate to line 1988 of kernel/trace/ring_buffer.c, which is: WARN_ON(1);

First, a brief explanation of what WARN_ON does: it only prints out the current stack information; it does not panic. That is why a large amount of stack information follows. The macro is defined as follows:

#ifndef WARN_ON
#define WARN_ON(condition) ({					\
	int __ret_warn_on = !!(condition);			\
	if (unlikely(__ret_warn_on))				\
		__WARN();					\
	unlikely(__ret_warn_on);				\
})
#endif

The macro is simple: the double logical negation (!!) ensures the value it works with is either 0 or 1, and the branch-prediction hint unlikely (which keeps the expected branch close to the preceding instructions) decides whether __WARN() needs to be invoked; if the condition is nonzero, __WARN() is called, otherwise the macro amounts to an empty statement. The WARN_ON above is called with 1, so __WARN() executes. Let's continue with the __WARN() macro definition:

#define __WARN() warn_slowpath_null(__FILE__, __LINE__)

From the next line of the call trace we do indeed find that the warn_slowpath_null function was invoked. Searching the Linux kernel source for this function, we find its implementation in panic.c (which implements the kernel-panic-related functions):

void warn_slowpath_null(const char *file, int line)
{
	warn_slowpath_common(file, line, __builtin_return_address(0),
			     TAINT_WARN, NULL);
}
EXPORT_SYMBOL(warn_slowpath_null);	/* export this symbol so that other modules can use the function */

Next we come to the warn_slowpath_common function; in the call trace it is printed just above warn_slowpath_null, which again confirms that the flow is correct. Its implementation, also in panic.c, is as follows:

static void warn_slowpath_common(const char *file, int line, void *caller,
				 unsigned taint, struct slowpath_args *args)
{
	const char *board;

	printk(KERN_WARNING "------------[ cut here ]------------\n");
	printk(KERN_WARNING "WARNING: at %s:%d %pS() (%s)\n",
	       file, line, caller, print_tainted());
	board = dmi_get_system_info(DMI_PRODUCT_NAME);	/* get DMI system information */
	if (board)
		printk(KERN_WARNING "Hardware name: %s\n", board);	/* our log shows the hardware name is ProLiant DL360 G7 */
	if (args)
		vprintk(args->fmt, args->args);

	print_modules();		/* print system module information */
	dump_stack();			/* dump the stack (the call trace starts here) */
	print_oops_end_marker();	/* print the oops end marker */
	add_taint(taint);
}

Analyzing this function's implementation, it is not hard to see that much of our log output starts here, including some system information; we will not analyze it further in depth (see the code comments: the functions called print the corresponding information, and examining their implementations matches our log exactly). Of these, dump_stack is CPU-architecture dependent; our server is an x86 system. We continue with the implementation of dump_stack, because this is the function that directly reflects the processes involved in the kernel panic. It is implemented as follows:

/*
 * The architecture-independent dump_stack generator
 */
void dump_stack(void)
{
	unsigned long stack;
