Android Native/Tombstone Crash Log Detailed Analysis (repost)




Transferred from: http://weibo.com/p/230418702c2db50102vc2h


Android has been around for quite a few years now, but the NDK has opened up very slowly, so material on analyzing Android native crashes is still scarce online, and genuinely detailed walkthroughs are especially hard to find. As a result, most programmers are left helpless when they meet a crash log whose addresses are hard to resolve with addr2line. In fact, the other parts of the log also provide very rich information to interpret, so here I summarize some experience in this area, using the hello-jni project from the Android NDK samples, modified to produce a crash, as the running example. Once you understand how to analyze the error log in depth, many bugs that are hard to reproduce, or reproduce only rarely, can be dealt with effectively. Everything below is Nightingale's original work. The main content is divided into the following parts:
    • 1. Library Symbols (symbols for shared libraries)
    • 2. Analysis Tools (available analysis tools)
    • 3. Crash Log – Header
    • 4. Crash Log – Backtrace (for most crashes)
    • 5. Crash Log – Registers
    • 6. Crash Log – Memory
    • 7. Crash Log – Stack
    • 8. Library Base Address (the shared library's base address in memory)
1. Library Symbols (symbols for shared libraries)

The NDK provides tools that let programmers go straight from an address to the offending file, function, and line number. However, those tools need an unstripped shared library (usually found under out/target/product/xxx/symbols/system/lib). The shared libraries under out/target/product/xxx/system/lib have had their symbols stripped, so a lib pulled directly from the device will not yield the corresponding symbols through these tools (the unstripped library also takes considerably more space than the stripped one). So if you want to analyze a native crash, an unstripped lib is all but indispensable, although even a stripped library still retains a small number of symbols.
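To tell the two apart, you can inspect the library on the host. A quick check, as a sketch (the exact nm binary name depends on your toolchain; a multi-target host nm usually also handles ARM ELF files):

$ file libhello-jni.so
# ends with "not stripped" for the symbols/ copy, "stripped" for the device copy
$ arm-linux-androideabi-nm -C libhello-jni.so
# lists the symbol table for an unstripped lib; reports "no symbols" when stripped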


2. Analysis Tools

The commonly used auxiliary tools are:

addr2line ($(ANDROID_NDK)\toolchains\arm-linux-androideabi-4.7\prebuilt\windows\bin)
# Looks up the symbol for an address taken from the backtrace section, locating the file, function, and line number.
Usage: addr2line -aCfe $(lib) $(trace_address)

ndk-stack (android-ndk-r8d\ndk-stack)
# Equivalent to running addr2line many times: feed it a whole crash log and it outputs the symbols for every address in the backtrace.
Usage: ndk-stack -sym $(lib_directory) -dump $(crash_log_file)

objdump (android-ndk-r8d\toolchains\arm-linux-androideabi-4.7\prebuilt\windows\bin)
# Dumps an object file, so you can locate the cause of the error from the assembly. Most complex problems can be solved this way.
Usage: objdump -S $(objfile) > $(output_file)

3. Crash Log – Header

The header contains information about the current system version. If you are doing platform-level development, this helps identify which development build the log came from.

Time: 2014-11-28 17:40:52
Build description: xxxx
Build: xxxx
Hardware: xxxx
Revision: 0
Bootloader: unknown
Radio: unknown
Kernel: Linux version 3.4.5 xxxx

This part is easy to read, so I won't go into details.

4. Crash Log – Backtrace (for most crashes)

The backtrace is the part most commonly consulted. The addresses in it can be fed to addr2line or ndk-stack to find the corresponding symbols.

backtrace:
#00 pc 00026fbc /system/lib/libc.so
#01 pc 000004cf /data/app-lib/com.example.hellojni-1/libhello-jni.so (Java_com_example_hellojni_HelloJni_stringFromJNI+18)
#02 pc 0001e610 /system/lib/libdvm.so (dvmPlatformInvoke+112)
#03 pc 0004e015 /system/lib/libdvm.so (dvmCallJNIMethod(unsigned int const*, JValue*, Method const*, Thread*)+500)
#04 pc 00050421 /system/lib/libdvm.so (dvmResolveNativeMethod(unsigned int const*, JValue*, Method const*, Thread*)+200)
#05 pc 000279e0 /system/lib/libdvm.so
#06 pc 0002b934 /system/lib/libdvm.so (dvmInterpret(Thread*, Method const*, JValue*)+180)
#07 pc 0006175f /system/lib/libdvm.so (dvmInvokeMethod(Object*, Method const*, ArrayObject*, ArrayObject*, ClassObject*, bool)+374)
#08 pc 00069785 /system/lib/libdvm.so
#09 pc 000279e0 /system/lib/libdvm.so
#10 pc 0002b934 /system/lib/libdvm.so (dvmInterpret(Thread*, Method const*, JValue*)+180)
#11 pc 00061439 /system/lib/libdvm.so (dvmCallMethodV(Thread*, Method const*, Object*, bool, JValue*, std::__va_list)+272)
#12 pc 0004a2ed /system/lib/libdvm.so
#13 pc 0004d501 /system/lib/libandroid_runtime.so
#14 pc 0004e259 /system/lib/libandroid_runtime.so (android::AndroidRuntime::start(char const*, char const*)+536)
#15 pc 00000db7 /system/bin/app_process
#16 pc 00020ea0 /system/lib/libc.so (__libc_init+64)
#17 pc 00000ae8 /system/bin/app_process

From the backtrace above you can see each frame's pc address followed by its symbol. Some errors can be identified just by reading the symbols here; for more precise positioning you need the NDK tools.

$ addr2line -aCfe out/target/product/xxx/symbols/system/lib/libhello-jni.so 4cf
0x4cf
Java_com_example_hellojni_HelloJni_stringFromJNI
/ANDROID_PRODUCT/hello-jni/jni/hello-jni.c:48

Then look at hello-jni.c:
17 #include <string.h>
18 #include <jni.h>
19
20
26 void func_a(char *p);
27 void func_b(char *p);
28 void func_a(char *p)
29 {
30     const char *A = "AAAAAAAAA"; // len = 9
31     char *a = "dead";
32     memcpy(p, A, strlen(A));
33     memcpy(p, a, strlen(a));
34     p[strlen(a)] = 0;
35     func_b(p);
36 }
37 void func_b(char *p)
38 {
39     char *b = (char *)0xddeeaadd;
40     memcpy(b, p, strlen(p));
41 }
42
43 jstring
44 Java_com_example_hellojni_HelloJni_stringFromJNI(JNIEnv *env,
45                                                  jobject thiz)
46 {
47     char buf[10];
48     func_a(buf);
49     return (*env)->NewStringUTF(env, "Hello from JNI!");
50 }

From this, the crash can only be traced as far as func_a(). One peculiarity worth noting is why func_a shows up in the resolved backtrace but func_b does not; this is the compiler's doing (as the disassembly later shows, func_b's call to memcpy is compiled as a tail call, so it leaves no frame), and I won't dwell on it here. So from the backtrace alone we can only confirm that frame #01 points at the call to func_a (hello-jni.c:48), and that frame #00 died inside some libc function. In fact, symbols/system/lib also contains libc.so, so #00 can be confirmed with addr2line as well. Since the only libc function called here is memcpy, we can be fairly sure the error occurred in a memcpy, but there are three calls to it. (Of course, just reading the code would show it is the one in func_b.)

5. Crash Log – Registers

From the register information you can usually determine why the system faulted.

pid: 4000, tid: 4000, name: xample.hellojni
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr ddeeaadd
r0 ddeeaadd  r1 beab238c  r2 00000004  r3 beab2390
r4 4012b260  r5 40e1b760  r6 00000004  r7 4bdd2ca0
r8 beab23a8  r9 4bdd2c98  sl 40e1d050  fp beab23bc
ip 80000000  sp beab2380  lr 518254d3  pc 400dffbc  cpsr 80000010
d0  4141414141414164  d1  6e6a6f6c6c656865
d2  3133393766666661  d3  726f6c6f632f3c64
d4  3e2d2d206f646f54  d5  6f633c202020200a
d6  656d616e20726f6c  d7  3f8000003f800000
d8  0000000000000000  d9  0000000000000000
d10 0000000000000000  d11 0000000000000000
d12 0000000000000000  d13 0000000000000000
d14 0000000000000000  d15 0000000000000000
d16 000000000000019e  d17 000000000000019e
d18 0000000000000000  d19 000000e600000000
d20 e600000000000000  d21 0000000000000000
d22 0000000000000000  d23 090a0b0c0d0e0f10
d24 0000004d0000003d  d25 000000e600000000
d26 000000e7000000b7  d27 0000000000000000
d28 0000004d0000003d  d29 0000000000000000
d30 0000000100000001  d31 0000000100000001
scr 60000090

This section shows the machine state at the moment of the error. The current interruption came from a SIGSEGV (most crashes are due to this signal; a few come from SIGFPE, i.e. division by zero). The error code is SEGV_MAPERR, an ordinary segmentation fault, and the fault address is ddeeaadd. That is the value assigned at line 39 (0xddeeaadd), so we can be fairly sure it is related to pointer b, and the next operation in the code is a memcpy, so evidently the memcpy is where things went wrong. Now look at r0, which is ddeeaadd, r1, which is beab238c, and r2, which is 4: these three registers in fact hold the arguments of memcpy. The destination address is ddeeaadd, the source address plus an offset is beab238c, and the length is 4. Why beab238c is the source address plus an offset will be explained later. The registers to focus on are usually r0 through pc; the 32 d-registers that follow are mostly used for data transfer, and while they sometimes carry important information, in general they need little attention. If you don't know much about this area, don't worry: keep looking at logs and it comes naturally. I had never touched this material either before I tried interpreting it.
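Why r0, r1, and r2 hold memcpy's arguments follows from the ARM calling convention (AAPCS), under which the first four integer or pointer arguments are passed in r0-r3. A minimal sketch of the correspondence, using the values from this log (the function name is mine, purely for illustration):

#include <string.h>

/* AAPCS argument passing: for memcpy(dst, src, n),
 *   r0 = dst, r1 = src, r2 = n.
 * At the fault in this log:
 *   r0 = 0xddeeaadd  (dst: the invalid pointer b)
 *   r1 = 0xbeab238c  (src: p, already advanced by the copy loop)
 *   r2 = 0x00000004  (n: strlen("dead"))
 */
void crash_equivalent(void) {
    char *dst = (char *)0xddeeaadd;  /* will arrive in r0 */
    const char *src = "dead";        /* its address will arrive in r1 */
    memcpy(dst, src, strlen(src));   /* strlen result, 4, will arrive in r2 */
}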
6. Crash Log – Memory

The log also provides the memory contents adjacent to the addresses held in the registers at the time of the error, and this too is rich in information. It was mentioned earlier that r1 is related to the source address, so first look at the memory near r1 (0xbeab238c):

memory near r1:
beab236c 4f659a18 51825532 518254a5 df0027ad
beab237c 00000000 ddeeaadd 518254d3 64616564
beab238c 41414100 41714641 a8616987 40e1d040
beab239c 4c11cb40 40e1d040 40a2f614 4bdd2c94
beab23ac 00000000 41714608 00000001 417093c4
beab23bc 40a5f019 4bdd2c94 518215a3 518254bd

beab238c is the third data line, but note the last word of the line before it: 64616564, which in ASCII is "dead". Starting from there the memory reads 64616564 41414100 41714641, i.e. the bytes 64, 65, 61, 64, 00, 41, 41, 41, 41, with 46 71 41 after them (values on the stack are not initialized, so the leftovers are random). It is not hard to see that this is "dead", '\0', "AAAA": so the starting address of p in func_b should be where 64616564 begins. As for why r1 is beab238c, that is easy to find out by reading the assembly. In the bionic implementation used by Android, the source file turns out to be memcpy.S (the file path and line number can be found through addr2line), and the error point is at memcpy.S +248. That part of the source is as follows:
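(The excerpt itself did not survive in this copy of the article. Based on the description in the next paragraph and on the bionic NEON memcpy of that era, the two instructions in question are of this form; treat this as a reconstruction rather than the exact lines:)

        @ copies 4 bytes, destination 32-bits aligned
        vld4.8  {d0[0], d1[0], d2[0], d3[0]}, [r1]!
        vst4.8  {d0[0], d1[0], d2[0], d3[0]}, [r0, :32]!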
The general meaning of these two instructions is: load bytes from the address in r1 into d0-d3, post-incrementing r1, then store the data from d0-d3 to the address in r0, post-incrementing r0 as well. Now go back and look at the last byte of each of registers d0-d3: 64, 65, 61, 64 respectively, i.e. "dead". So the r1 shown in the register dump is the address after the increment, which is why it is the source address plus an offset. The fault occurred when the store then tried to write to the invalid address 0xddeeaadd held in r0, and that is exactly the fault address reported.

Objdump. Let me say a bit more about objdump here. It can be applied to shared libraries (.so) or to object files (.o). If the shared library is large, it is better to use the object file produced during compilation; the Android build keeps object files by default, under the out/target/product/xxxx/obj directory. So find libhello-jni.o there and view its information through objdump.
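For reference, the dump below would be produced by an invocation along these lines (illustrative; the objdump binary name and the output file name are assumptions, and paths depend on your toolchain):

$ arm-linux-androideabi-objdump -S libhello-jni.o > libhello-jni.dump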
jstring
Java_com_example_hellojni_HelloJni_stringFromJNI(JNIEnv *env,
                                         jobject thiz)
{
   a:   447c        add     r4, pc
   c:   6824        ldr     r4, [r4, #0]
   e:   6821        ldr     r1, [r4, #0]
  10:   9103        str     r1, [sp, #12]
    char buf[10];
    func_a(buf);
  12:   f7ff fffe   bl      0 <Java_com_example_hellojni_HelloJni_stringFromJNI>
    return (*env)->NewStringUTF(env, "Hello from JNI!");
  16:   6828        ldr     r0, [r5, #0]
  18:   4907        ldr     r1, [pc, #28]   ; (38 <Java_com_example_hellojni_HelloJni_stringFromJNI+0x38>)
  1a:   f8d0 229c   ldr.w   r2, [r0, #668]  ; 0x29c
  1e:   4628        mov     r0, r5
  20:   4479        add     r1, pc
  22:   4790        blx     r2
}

Don't pay too much attention to markers such as 'Java_com_example_hellojni_HelloJni_stringFromJNI', '{', and '}': they only give approximate location information and are not exactly equivalent to the corresponding C code. Earlier, backtrace frame #01 gave (Java_com_example_hellojni_HelloJni_stringFromJNI+18); converting +18 to hexadecimal gives 0x12, and the instruction at offset 12 in the dump is bl 0, an ordinary branch instruction, which matches the source: this is the call to func_a(). Then look at the code of func_b:

void func_b(char *p)
{
   0:   b510        push    {r4, lr}
   2:   4604        mov     r4, r0
   4:   f7ff fffe   bl      0 <strlen>
   8:   4621        mov     r1, r4
   a:   4602        mov     r2, r0
   c:   4802        ldr     r0, [pc, #8]    ; (18 <func_b+0x18>)
}
   e:   e8bd 4010   ldmia.w sp!, {r4, lr}
  12:   f7ff bffe   b.w     0 <memcpy>
  16:   bf00        nop
  18:   ddeeaadd    .word   0xddeeaadd

First r0 (the value of the pointer p) is saved into r4 and strlen is called, its return value (4) arriving in r0. Then r4 is moved back into r1, the strlen result is moved from r0 into r2, and the word at pc+8 is loaded into r0 (you can see at func_b+0x18 that it is 0xddeeaadd). Finally the code branches to memcpy with b.w, a tail call, which is also why func_b leaves no frame in the backtrace. So r0 is ddeeaadd, r1 is the value of the p pointer, and r2 is the length. memcpy was thus called with these values, and the error followed. Through objdump you can usually pin down the circumstances of an error even further, which is a great help in tracing the code logic, so in many cases you can solve the problem just by reading the code, without adding debug prints and trying to reproduce it.

7. Crash Log – Stack

When the backtrace carries very little information (no full call stack), this part becomes the key. The stack section shows the thread's call stack, and from the symbols on the right you can roughly guess where the error happened. But because the stack may hold uninitialized or stale values, or data stored for other purposes, it can sometimes be misleading: most of the symbols shown do belong to the current call stack, but not necessarily all of them.

stack:
       beab2340 4012ac68
       beab2344 50572968
       beab2348 4f659a50
       beab234c 0000002f
       beab2350 00000038
       beab2354 50572960
       beab2358 beab2390 [stack]
       beab235c 4012ac68
       beab2360 00000071
       beab2364 400cb528 /system/lib/libc.so
       beab2368 00000208
       beab236c 4f659a18
       beab2370 51825532 /data/app-lib/com.example.hellojni-1/libhello-jni.so
       beab2374 518254a5 /data/app-lib/com.example.hellojni-1/libhello-jni.so(func_a+56)
       beab2378 df0027ad
       beab237c 00000000
   #00 beab2380 ddeeaadd
       beab2384 518254d3 /data/app-lib/com.example.hellojni-1/libhello-jni.so(Java_com_example_hellojni_HelloJni_stringFromJNI+22)
   #01 beab2388 64616564

The stack runs from bottom to top (frame #02 -> #01 -> #00). You can roughly see the transition from #01 to #00: from Java_com_example_hellojni_HelloJni_stringFromJNI into func_a. However, you cannot feed the addresses on the left directly to addr2line to get symbols: they are runtime addresses in memory. Next I will describe how to compute an address addr2line can use from such a relative position.

8. Library Base Address (the shared library's base address in memory)

This section shows how to compute a usable addr2line address from a runtime address. addr2line needs the unstripped shared library; as long as the code has not changed, the symbols of each generated .so land at the same positions, so to get valid symbols you must use the unstripped .so that matches the binary that was running. When JNI starts up there is a loadLibrary step on the Java side, which can be roughly regarded as loading the library file into memory. The library therefore has a load base address in memory, and depending on memory layout and the placement algorithm, that base address may differ from run to run. The address addr2line wants is an offset relative to the start of the shared library, so as long as the library's base address in memory can be obtained, there is a way to compute a usable addr2line address from an address on the stack.

In the stack and backtrace information above, we have both the relative and the absolute address of the same symbol: (Java_com_example_hellojni_HelloJni_stringFromJNI+22) at 0x518254d3 on the stack, and (Java_com_example_hellojni_HelloJni_stringFromJNI+18) at relative pc 0x000004cf in the backtrace. The base address is therefore the difference of the corresponding addresses: 0x518254d3 - 0x000004cf - 0x4 = 0x51825000 (the extra 0x4 is the +22 versus +18: the stack holds the return address, four bytes past the backtrace pc). To verify the base address, try resolving 0x518254a5 (func_a+56): 0x518254a5 - 0x51825000 = 0x4a5, and querying 0x4a5 with addr2line gives hello-jni.c:34.

There is another way to compute a usable address, which also relies on the symbol information shown on the stack. Take 0x518254a5 (func_a+56): as mentioned earlier, objdump can take the .so directly as input, producing the assembly of the entire lib. In it you can find a line of the form "0xxxxxxx <func_a>:", where the number is the function's address within the lib. Here it is "0x46c <func_a>:", and 0x46c plus 0x38 (56) equals 0x4a4. This differs by one from the previous result because what the stack stores is a return address (with the Thumb bit set), but both point at the same instruction.

The point of walking through the base address is to further explain the difference between addresses on the stack and addresses in the backtrace, and the form a shared library's instructions take once loaded into memory. Comparing the two methods, when the loaded library is very large (say 100 MB+), the first yields a usable address far more easily than the second. In most cases you should not need to compute the base address at all; but occasionally a log gives an incomplete backtrace that is hard to parse into a concrete problem, and then the base-address calculation is what gets you an address addr2line can use. In the end, as long as you have a log carrying this kind of information, most problems can usually be solved.
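As a quick sanity check, the arithmetic above can be reproduced in a few lines; a minimal sketch using the values from this log (names are mine, for illustration):

#include <stdio.h>

int main(void) {
    /* values taken from the stack and backtrace sections above */
    unsigned int abs_ret = 0x518254d3; /* stack: stringFromJNI+22, a return address */
    unsigned int rel_pc  = 0x000004cf; /* backtrace: stringFromJNI+18, a relative pc */
    unsigned int base    = abs_ret - rel_pc - 0x4;          /* expected 0x51825000 */

    unsigned int func_a_abs = 0x518254a5;                   /* stack: func_a+56 */
    printf("library base:      0x%08x\n", base);
    printf("addr2line address: 0x%08x\n", func_a_abs - base); /* expected 0x4a5 */
    return 0;
}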
If the process is still running, you can use gdb (better still if the core-file option is enabled), or kill -9 (which likewise needs the corresponding build option turned on). Android also provides a debuggerd command; details can be found online. Finally, the source code for this test is attached: http://vdisk.weibo.com/s/yVmhF5M5tTuIi
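For the debuggerd route, on the Dalvik-era systems this article covers the invocation is roughly as follows (an assumption on my part; the flag set varies across Android versions):

$ adb shell debuggerd -b <pid>   # print a tombstone-style backtrace of a running process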