How to generate a core dump file in Red Hat Linux

In Red Hat Linux, core dump files are not generated by default, because /etc/profile contains this line:

    ulimit -S -c 0 > /dev/null 2>&1

How do you enable core dumps? The simplest way is to add the following to ~/.bash_profile:

    ulimit -S -c unlimited > /dev/null 2>&1

This lets the current user generate core dump files with no size limit.

In addition, there are two system-level ways to enable core dumps. The first is to modify /etc/profile and change the ulimit line to:

    ulimit -S -c unlimited > /dev/null 2>&1

The system then allows all users to generate core dump files with no size limit. The advantage is that no reboot is needed; the disadvantage is that there is no finer control, so you cannot enable core dumps for only some users.

The second is to modify /etc/security/limits.conf. Many system limits can be changed through this file, such as the maximum number of child processes and the maximum number of open files, and the file itself carries detailed comments on how to modify it. To enable core dumps for all users, add the following line (a size of 0 would keep them disabled):

    *       soft    core    unlimited

To enable core dumps only for certain users or groups, add lines of the form:

    user    soft    core    unlimited
    @group  soft    core    unlimited

Note that if you enable core dumps through /etc/security/limits.conf, you must also comment out the ulimit line in /etc/profile:

    # ulimit -S -c 0 > /dev/null 2>&1

The advantage of this approach is that core dumps can be enabled for a specific user or a specific group; the disadvantage is that the system needs to be restarted.

Finally, let's look at where the core dump file is generated. The default location is the same directory as the executable, and the file name is core.***, where *** is a number (the process ID). The pattern for the core dump file name is stored in /proc/sys/kernel/core_pattern, and its default value is "core". Run the following command to change where core dump files are placed (for example, to generate them in the /tmp/cores directory):

    echo "/tmp/cores/core" > /proc/sys/kernel/core_pattern
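The shell's ulimit builtin is a front end to the setrlimit(2) system call, so a program can also raise its own core file limit at startup; this is convenient for daemons that are not started from a login shell. Below is a minimal sketch of that idea; the file name raise_core_limit.c is made up for illustration:

    /* raise_core_limit.c: raise this process's core file size limit to the
       hard limit using setrlimit(2), the call behind the shell's ulimit -c. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        rl.rlim_cur = rl.rlim_max;    /* raise the soft limit up to the hard limit */
        if (setrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        /* A crash from this point on can produce a core file, subject to
           /proc/sys/kernel/core_pattern and directory permissions. */
        return 0;
    }

An unprivileged process can only raise its soft limit up to the hard limit; raising the hard limit itself requires root privileges.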
Core dump (kernel dump) usage. This part is from the CSDN blog; for more information, see the source: http://blog.csdn.net/chaoi/archive/2007/07/16/1693149.aspx

On Unix systems, an application that crashes generally produces a core file, and it is very important to be able to locate the problem from the core file and do the corresponding analysis and debugging; this article gives a brief introduction. For example, a program cm_test_tool fails while running and generates core files, as shown below:

    -rw-r--r--    1 root    cmm_test_tool.c
    -rw-r--r--    1 root    cmm_test_tool.o
    -rwxr-xr-x    1 root    cmm_test_tool
    -rw-------    1 root    core.19344
    -rw-------    1 root    core.19351
    -rw-r--r--    1 root    cmm_test_tool.cfg
    -rw-r--r--    1 root    cmm_test_tool.res
    -rw-r--r--    1 root    cm_test_tool.log
    [root@AUTOTEST_SIM2 mam2cm]#

You can locate the fault with gdb: argument 1 is the name of the application and argument 2 is the core file. The result of running gdb cm_test_tool core.19344 is as follows:

    [root@AUTOTEST_SIM2 mam2cm]# gdb cm_test_tool core.19344
    GNU gdb Red Hat Linux (5.2.1-4)
    Copyright 2002 Free Software Foundation, Inc.
    GDB is free software, covered by the GNU General Public License, and you are
    welcome to change it and/or distribute copies of it under certain conditions.
    Type "show copying" to see the conditions.
    There is absolutely no warranty for GDB.  Type "show warranty" for details.
    This GDB was configured as "i386-redhat-linux"...
    Core was generated by `./cm_test_tool'.
    Program terminated with signal 11, Segmentation fault.
    Reading symbols from /lib/i686/libpthread.so.0...done.
    Loaded symbols for /lib/i686/libpthread.so.0
    Reading symbols from /lib/i686/libm.so.6...done.
    Loaded symbols for /lib/i686/libm.so.6
    Reading symbols from /usr/lib/libz.so.1...done.
    Loaded symbols for /usr/lib/libz.so.1
    Reading symbols from /usr/lib/libstdc++.so.5...done.
    Loaded symbols for /usr/lib/libstdc++.so.5
    Reading symbols from /lib/i686/libc.so.6...done.
    Loaded symbols for /lib/i686/libc.so.6
    Reading symbols from /lib/libgcc_s.so.1...done.
    Loaded symbols for /lib/libgcc_s.so.1
    Reading symbols from /lib/ld-linux.so.2...done.
    Loaded symbols for /lib/ld-linux.so.2
    Reading symbols from /lib/libnss_files.so.2...done.
    Loaded symbols for /lib/libnss_files.so.2
    #0  0x4202cec1 in __strtoul_internal () from /lib/i686/libc.so.6
    (gdb)

At the gdb prompt, enter where to locate the error position and the stack, as shown below:

    (gdb) where
    #0  0x4202cec1 in __strtoul_internal () from /lib/i686/libc.so.6
    #1  0x4202d4e7 in strtoul () from /lib/i686/libc.so.6
    #2  0x0804b4da in GetMaxIDFromDB (get_type=2, max_id=0x806fd20) at cm_test_tool.c:788
    #3  0x0804b9d7 in ConstrctVODProgram (vod_program=0x40345bdc) at cm_test_tool.c:946
    #4  0x0804a2f4 in TVRequestThread (arg=0x0) at cm_test_tool.c:372
    #5  0x40021941 in pthread_start_thread () from /lib/i686/libpthread.so.0
    (gdb)

Now we can see that the failure happened in the function GetMaxIDFromDB, called with the two arguments 2 and 0x806fd20, at line 788 of the source file. From this we can find and fix the root cause of the problem.

To debug a segmentation fault in Linux C code, follow these steps:

    > ulimit -c unlimited      # allow core files to be written
    > ./a.out                  # segmentation fault happens; the file core.xxxx
                               # (xxxx is the pid) appears in the directory
    > gdb a.out core.xxxx
    (gdb) bt                   # shows the segmentation fault point as file xxxx.cpp, line xxxx

For example, if I run the following code under Fedora (the file name is test.c):

    #include <stdio.h>

    int main()
    {
        char *p = NULL;
        *p = 10;    /* write through a NULL pointer: SIGSEGV */
    }

1. Compile: gcc -ggdb test.c
2. Enter the command ulimit -c unlimited
3. Run ./a.out; when the segmentation fault occurs, a file core.xxxx is generated (xxxx is the pid)
4. gdb a.out core.xxxx
5. (gdb) bt immediately prints the file and line number of the failing code and the offending statement.

Cited from: http://blog.tianya.cn/blogger/post_show.asp?BlogID=420361&PostID=7613747
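When the fault is buried deeper in the call chain, as in the cm_test_tool example above, bt lists every frame down to main. Here is a small illustrative variant of test.c (the file name deep_test.c and the function names are invented for this sketch):

    /* deep_test.c: the crash happens two calls below main, so gdb's bt
       shows a multi-frame stack rather than a single frame. */
    #include <string.h>

    static void copy_name(char *dst)
    {
        strcpy(dst, "oops");    /* dst is NULL: SIGSEGV happens here */
    }

    static void make_record(void)
    {
        copy_name(NULL);
    }

    int main(void)
    {
        make_record();
        return 0;
    }

Compiled with gcc -ggdb deep_test.c and run the same way, bt should list copy_name, make_record, and main, each with its file and line number; the exact addresses depend on the build.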
Backtrace functions and stacks: tracing function call stacks in Linux. Normally you can inspect a function's runtime stack with an external debugger such as GDB; however, in some cases, mainly when analyzing bugs in long-running programs, it is very useful for the program to print its own call stack when it fails. The header file execinfo.h declares three functions for obtaining the call stack of the current thread.

Function: int backtrace(void **buffer, int size)
This function obtains the call stack of the current thread. The result is stored in buffer, an array of pointers; the size parameter specifies how many void * elements buffer can hold. The return value is the number of entries actually obtained, which is at most size. The pointers stored in buffer are the return addresses taken from the stack, one per stack frame. Note that some compiler optimization options interfere with obtaining a correct call stack; in addition, inline functions have no stack frame, and omitting the frame pointer also makes it impossible to parse the stack contents correctly.

Function: char **backtrace_symbols(void *const *buffer, int size)
backtrace_symbols converts the information obtained by backtrace into an array of strings. The buffer parameter should be the array of pointers filled in by backtrace, and size the number of entries in it (the return value of backtrace). The return value is a pointer to an array of strings with the same number of elements as buffer; each string contains printable information for the corresponding entry: the function name, the offset within the function, and the actual return address. Currently, the function name and offset can be obtained only on systems that use the ELF binary format for programs and libraries; on other systems, only the return address in hexadecimal is available. Also, you may need to pass extra flags to the linker to make the function names available (for example, on systems using GNU ld, you need to pass -rdynamic). The returned array is allocated with malloc, so the caller must release it with free. Note: if enough space for the strings cannot be obtained, the function returns NULL.

Function: void backtrace_symbols_fd(void *const *buffer, int size, int fd)
backtrace_symbols_fd does the same job as backtrace_symbols, except that instead of returning an array of strings to the caller, it writes the result to the file with descriptor fd, one line per function. It does not call malloc, so it is usable in situations where malloc might fail.

The following example shows all three functions in use:

    #include <execinfo.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Obtain a backtrace and print it to stdout. */
    void print_trace(void)
    {
        void *array[10];
        size_t size;
        char **strings;
        size_t i;

        size = backtrace(array, 10);
        strings = backtrace_symbols(array, size);

        printf("Obtained %zd stack frames.\n", size);
        for (i = 0; i < size; i++)
            printf("%s\n", strings[i]);

        free(strings);
    }

    /* A dummy function to make the backtrace more interesting. */
    void dummy_function(void)
    {
        print_trace();
    }

    int main(void)
    {
        dummy_function();
        return 0;
    }

Remark: the type void *const *buffer means that buffer points to pointers that are constant; the array elements cannot be modified through it.
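Since backtrace_symbols_fd avoids malloc, it is the variant to reach for inside a crash handler, where the heap may already be corrupted. A minimal sketch of it follows; the function name dump_stack_to_stderr and the depth of 20 frames are arbitrary choices for illustration, and as above you should link with -rdynamic so symbol names appear:

    /* Dump the current call stack straight to stderr with
       backtrace_symbols_fd(), without any heap allocation. */
    #include <execinfo.h>
    #include <unistd.h>

    void dump_stack_to_stderr(void)
    {
        void *frames[20];
        int n = backtrace(frames, 20);

        /* Writes one line per frame directly to file descriptor 2. */
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
    }

    int main(void)
    {
        dump_stack_to_stderr();
        return 0;
    }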
Using backtrace to solve big problems: a program that hits an error such as Segmentation fault simply exits with that message. Running into such a problem is very painful, and finding it can take no less effort than the N days spent writing the code. Is there a better way to obtain debugging information when the SIGSEGV signal is raised? Let's take a look at the following routine!

sigsegv.h:

    #ifndef __sigsegv_h__
    #define __sigsegv_h__

    #ifdef __cplusplus
    extern "C" {
    #endif

    int setup_sigsegv();

    #ifdef __cplusplus
    }
    #endif

    #endif /* __sigsegv_h__ */

sigsegv.c:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <signal.h>
    #include <ucontext.h>
    #include <dlfcn.h>
    #include <execinfo.h>

    #ifndef NO_CPP_DEMANGLE
    /* __cxa_demangle lives in libstdc++; declare it here for C code
       (link with -lstdc++, or compile with -DNO_CPP_DEMANGLE to skip it). */
    char *__cxa_demangle(const char *mangled, char *out, size_t *len, int *status);
    #endif

    #if defined(REG_RIP)
    # define SIGSEGV_STACK_IA64
    # define REGFORMAT "%016lx"
    #elif defined(REG_EIP)
    # define SIGSEGV_STACK_X86
    # define REGFORMAT "%08x"
    #else
    # define SIGSEGV_STACK_GENERIC
    # define REGFORMAT "%x"
    #endif

    static void signal_segv(int signum, siginfo_t *info, void *ptr)
    {
        static const char *si_codes[3] = { "", "SEGV_MAPERR", "SEGV_ACCERR" };

        size_t i;
        ucontext_t *ucontext = (ucontext_t *)ptr;

    #if defined(SIGSEGV_STACK_X86) || defined(SIGSEGV_STACK_IA64)
        int f = 0;
        Dl_info dlinfo;
        void **bp = 0;
        void *ip = 0;
    #else
        void *bt[20];
        char **strings;
        size_t sz;
    #endif

        fprintf(stderr, "Segmentation Fault!\n");
        fprintf(stderr, "info.si_signo = %d\n", signum);
        fprintf(stderr, "info.si_errno = %d\n", info->si_errno);
        fprintf(stderr, "info.si_code  = %d (%s)\n", info->si_code, si_codes[info->si_code]);
        fprintf(stderr, "info.si_addr  = %p\n", info->si_addr);
        for (i = 0; i < NGREG; i++)
            fprintf(stderr, "reg[%02d]       = 0x" REGFORMAT "\n", (int)i,
                    ucontext->uc_mcontext.gregs[i]);

    #if defined(SIGSEGV_STACK_X86) || defined(SIGSEGV_STACK_IA64)
    # if defined(SIGSEGV_STACK_IA64)
        ip = (void *)ucontext->uc_mcontext.gregs[REG_RIP];
        bp = (void **)ucontext->uc_mcontext.gregs[REG_RBP];
    # elif defined(SIGSEGV_STACK_X86)
        ip = (void *)ucontext->uc_mcontext.gregs[REG_EIP];
        bp = (void **)ucontext->uc_mcontext.gregs[REG_EBP];
    # endif

        fprintf(stderr, "Stack trace:\n");
        /* Walk the frame-pointer chain: bp[1] is the return address,
           bp[0] the caller's frame pointer. */
        while (bp && ip) {
            if (!dladdr(ip, &dlinfo))
                break;

            const char *symname = dlinfo.dli_sname;
    #ifndef NO_CPP_DEMANGLE
            int status;
            char *tmp = __cxa_demangle(symname, NULL, 0, &status);

            if (status == 0 && tmp)
                symname = tmp;
    #endif

            fprintf(stderr, "% 2d: %p <%s+%u> (%s)\n",
                    ++f, ip, symname,
                    (unsigned)(ip - dlinfo.dli_saddr),
                    dlinfo.dli_fname);

    #ifndef NO_CPP_DEMANGLE
            if (tmp)
                free(tmp);
    #endif

            if (dlinfo.dli_sname && !strcmp(dlinfo.dli_sname, "main"))
                break;

            ip = bp[1];
            bp = (void **)bp[0];
        }
    #else
        fprintf(stderr, "Stack trace (non-dedicated):\n");
        sz = backtrace(bt, 20);
        strings = backtrace_symbols(bt, sz);
        for (i = 0; i < sz; i++)
            fprintf(stderr, "%s\n", strings[i]);
    #endif

        fprintf(stderr, "End of stack trace\n");
        exit(-1);
    }

    int setup_sigsegv()
    {
        struct sigaction action;
        memset(&action, 0, sizeof(action));
        action.sa_sigaction = signal_segv;
        action.sa_flags = SA_SIGINFO;
        if (sigaction(SIGSEGV, &action, NULL) < 0) {
            perror("sigaction");
            return 0;
        }
        return 1;
    }

    #ifndef SIGSEGV_NO_AUTO_INIT
    static void __attribute((constructor)) init(void)
    {
        setup_sigsegv();
    }
    #endif

main.c:

    #include "sigsegv.h"
    #include <string.h>

    int die()
    {
        char *err = NULL;
        strcpy(err, "gonner");
        return 0;
    }

    int main()
    {
        return die();
    }

Compile the main.c program above to see what kind of information is produced; note that if you pull in sigsegv.h and sigsegv.c to obtain the stack information, you must add the -rdynamic and -ldl parameters:

    /data/codes/c/test/backtraces$ gcc -o test -rdynamic -ldl -ggdb -g sigsegv.c main.c
    /data/codes/c/test/backtraces$ ./test
    Segmentation Fault!
    info.si_signo = 11
    info.si_errno = 0
    info.si_code  = 1 (SEGV_MAPERR)
    info.si_addr  = (nil)
    reg[00]       = 0x00000033
    reg[01]       = 0x00000000
    reg[02]       = 0xc010007b
    reg[03]       = 0x0000007b
    reg[04]       = 0x00000000
    reg[05]       = 0xb7fc8ca0
    ...
    reg[08]       = 0xb7f8cff4
    reg[09]       = 0x00000001
    ...
    reg[11]       = 0x00000000
    reg[12]       = 0x0000000e
    reg[13]       = 0x00000006
    reg[14]       = 0x080489ec
    reg[15]       = 0x00000073
    reg[16]       = 0x00010282
    reg[17]       = 0xbff04c1c
    reg[18]       = 0x0000007b
    Stack trace:
     1: 0x80489ec (/data/codes/c/test/backtraces/test)
     2: 0x8048a16 (/data/codes/c/test/backtraces/test)
    End of stack trace
    /data/codes/c/test/backtraces$
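A side note on the NO_CPP_DEMANGLE branch in the handler above: __cxa_demangle comes from libstdc++, so a plain C build must either link against it or define NO_CPP_DEMANGLE. Below is a standalone sketch of what it does; the file name demangle.c and the mangled name "_Z3foov" (which encodes foo()) are illustrative constants, and the assumed build command is gcc demangle.c -lstdc++:

    /* demangle.c: turn a C++ mangled symbol name back into readable
       form with __cxa_demangle() from libstdc++. */
    #include <stdio.h>
    #include <stdlib.h>

    /* C++ code gets this from <cxxabi.h>; in C we declare it by hand. */
    char *__cxa_demangle(const char *mangled, char *out, size_t *len, int *status);

    int main(void)
    {
        int status = 0;
        char *plain = __cxa_demangle("_Z3foov", NULL, NULL, &status);

        if (status == 0 && plain)
            printf("%s\n", plain);   /* prints: foo() */

        free(plain);                 /* the result is malloc'd by the library */
        return 0;
    }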
Use gdb to look at the code around the error:

    /data/codes/c/test/backtraces$ gdb ./test
    gdb> disassemble die
    Dump of assembler code for function die:
    0x080489dc <die+0>:   push   %ebp
    0x080489dd <die+1>:   mov    %esp,%ebp
    0x080489df <die+3>:   sub    $0x10,%esp
    0x080489e2 <die+6>:   movl   $0x0,0xfffffffc(%ebp)
    0x080489e9 <die+13>:  mov    0xfffffffc(%ebp),%eax
    0x080489ec <die+16>:  movl   $0x6e6e6f67,(%eax)
    0x080489f2 <die+22>:  movw   $0x7265,0x4(%eax)
    0x080489f8 <die+28>:  movb   $0x0,0x6(%eax)
    0x080489fc <die+32>:  mov    $0x0,%eax
    0x08048a01 <die+37>:  leave
    0x08048a02 <die+38>:  ret
    End of assembler dump.
    gdb>

You can also set a breakpoint with break *die+16 and debug from there to inspect the stack before the error occurs. Next, let's see what the problem in the code is:

    /data/codes/c/test/backtraces$ gdb ./test
    gdb> break *die+16
    Breakpoint 1 at 0x80489f2: file main.c, line 6.
    gdb> list *die+16
    0x80489f2 is in die (main.c:6).
    1   #include "sigsegv.h"
    2   #include <string.h>
    3
    4   int die()
    5   {
    6       strcpy(err, "gonner");
    7       return 0;
    8   }
    9
    10  int main()
    gdb>

Now it is easy to locate the error. The break before list in the debugging session above is not actually required; it just lets you confirm that the line break points at is indeed the one causing the Segmentation fault. When you release your program, you will probably not ship it with debugging information, in order to reduce its size (that is, you will not compile with the -ggdb -g parameters), but that does not matter: you can still obtain the stack-trace information shown above, and you only need to add the debugging information back before you debug.