Exploit Linux Kernel Slub Overflow

By wzt


I. Preface

In recent years, research on kernel exploitation has been quite popular. Common
kernel privilege-escalation vulnerabilities fall into several categories: NULL
pointer dereference, kernel stack overflow, kernel slab overflow, arbitrary
kernel address write, and so on. NULL pointer dereference vulnerabilities are
comparatively easy to exploit; typical examples are sock_sendpage and
udp_sendmsg. However, security modules in newer kernels no longer allow
userspace to map low memory, so for a while a NULL pointer dereference meant
only a DoS and could not be used to escalate privileges. With CVE-2010-4258,
though, an arbitrary kernel address write can stand in for a NULL pointer
dereference to escalate privileges. Kernel stack overflows are relatively easy
to exploit, much like stack overflows in userspace. The hardest to exploit is a
kernel slab overflow. Back in 2005, qobaiashi of UNF wrote a paper explaining
slab exploitation methods. Since then, slab overflow research has concentrated
on the 2.4 kernel; I have not seen any paper shared on exploiting slab
overflows under 2.6.

In kernel 2.6.22, the SLUB allocator was introduced to improve slab
performance. No paper on SLUB overflow had been shared until Jon Oberheide
released an exploit for a SLUB overflow in the CAN protocol. That should be the
first public exploit of a slab overflow on a 2.6 kernel, targeting Ubuntu 10.04
with kernel 2.6.32. Jon Oberheide also published an article on his blog
analyzing SLUB overflow exploitation. However, because the CAN code has some
convenient properties, that exploit does not reflect the essence of SLUB
overflow. Building on a close study of that exploit, and on my experience
debugging slab overflows on the 2.4 kernel, I researched SLUB overflow
techniques and tested them successfully on CentOS 5.4 with kernel 2.6.32.


II. Sample Code:

To facilitate debugging, I wrote an LKM and added a system call to the kernel
as a test API.

-- Code -------------------------------------------------------------------------
#define BUFFER_SIZE 80

asmlinkage long kmalloc_overflow_test(char *addr, int size)
{
	char *buff = NULL;

	buff = kmalloc(BUFFER_SIZE, GFP_KERNEL);
	if (!buff) {
		printk("kmalloc failed.\n");
		return -1;
	}
	printk("[+] Got object at 0x%p\n", buff);

	if (copy_from_user(buff, addr, size)) {
		printk("copy_from_user failed.\n");
		kfree(buff);
		return -1;
	}
	printk("%s\n", buff);

	return 0;
}
-------------------------------------------------------------------------------

This code uses kmalloc to allocate an 80-byte object but never validates size,
so passing a size larger than 80 causes a kernel heap overflow.


III. SLUB Structure

SLUB greatly simplifies slab's data structures, for example by removing the
full and empty slab queues. At the beginning of each slab there is no longer a
slab management structure or a kmem_bufctl_t array for tracking free objs. A
slab managed by SLUB is structured as follows:

Structure of a slab:

+-------------------------------------------+
| obj | ... | obj |
+-------------------------------------------+

Given the sample code above, once an obj overflows, the overflow data directly
overwrites the adjacent obj:

| first | second |
+-------------------------------------------+
| obj | ... | obj |
+-------------------------------------------+
|----- overflow --->|

When kernel code accesses a data structure in the overwritten obj, an oops is
generated.


IV. SLUB Overflow Method

The ultimate goal of kernel privilege escalation is to trigger a kernel bug and
then redirect the kernel's execution path to shellcode placed in userspace. So
the general idea is: if the second obj holds a function pointer that the
overflow data can overwrite with the address of shellcode in userspace, and the
user can trigger a call through that pointer, privileges can be escalated. The
other problem to solve is how to ensure that the obj allocated by kmalloc in
the buggy code is adjacent to the obj holding the function pointer we want to
overwrite; only when the two are adjacent can the overflow data reach the
function pointer.

Assume a kernel data structure meeting the above requirements has been found.
If the two objs are adjacent, the pointer overwrite can be accomplished. We
know one property of the slab allocator: when the objs in the existing slabs
are used up, the kernel allocates a new slab, and the objs handed out from a
fresh slab are adjacent:

kmalloc() -> __kmalloc() -> __do_kmalloc() -> __cache_alloc() ->
____cache_alloc() -> cache_alloc_refill() -> cache_grow() -> cache_init_objs()
-- Code -------------------------------------------------------------------------
static void cache_init_objs(struct kmem_cache *cache,
			struct slab *slabp, unsigned long ctor_flags)
{
	int i;

	for (i = 0; i < cache->num; i++) {
		void *objp = index_to_obj(cache, slabp, i);
		slab_bufctl(slabp)[i] = i + 1;
	}
	slab_bufctl(slabp)[i - 1] = BUFCTL_END;
	slabp->free = 0;
}
-------------------------------------------------------------------------------

As mentioned for the slab structure, there is a kmem_bufctl_t array in which
each element points to the next free obj. When a new slab is initialized, each
kmem_bufctl_t element points to the next, adjacent obj, so when the kernel
allocates a new slab, the objs allocated from it are adjacent.

Does SLUB also have this property? After carefully reading the SLUB code, I found that it does:

kmalloc() -> slab_alloc() -> __slab_alloc() -> new_slab():
-- Code -------------------------------------------------------------------------
static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
{
	last = start;
	for_each_object(p, s, start, page->objects) {
		setup_object(s, page, last);
		set_freepointer(s, last, p);
		last = p;
	}
	setup_object(s, page, last);
	set_freepointer(s, last, NULL);
}

#define for_each_object(__p, __s, __addr, __objects) \
	for (__p = (__addr); __p < (__addr) + (__objects) * (__s)->size; \
			__p += (__s)->size)
-------------------------------------------------------------------------------

This code traverses all objs in a page and initializes each one:

-- Code -------------------------------------------------------------------------
static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
{
	*(void **)(object + s->offset) = fp;
}
-------------------------------------------------------------------------------

s->offset stores the offset within each obj where the next-free pointer is
kept. set_freepointer() sets an obj's next-free pointer to point at the next
obj, so SLUB has this property too.

Now we only need a way for userspace to continuously consume kmalloc-96 objs;
once the existing slabs are used up, the objs in a freshly allocated slab will
be consecutively adjacent. How do we consume them? We can use the shmget system
call: the struct shmid_kernel structure allocated while servicing it contains
the function pointer we want to overwrite!

ipc/shm.c:
-- Code -------------------------------------------------------------------------
sys_shmget -> ipcget -> ipcget_new -> newseg:

static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
{
	struct shmid_kernel *shp;

	shp = ipc_rcu_alloc(sizeof(*shp));
	shp->shm_file = file;
}

void *ipc_rcu_alloc(int size)
{
	out = kmalloc(HDRLEN_KMALLOC + size, GFP_KERNEL);
}
-------------------------------------------------------------------------------

Therefore, as long as shmget is called continuously from userspace, kmalloc-96
objs are continuously consumed in the kernel. The sample code allocates 80
bytes, which also falls into the kmalloc-96 size class. One thing to note:

-- Code -------------------------------------------------------------------------
Out = kmalloc (HDRLEN_KMALLOC + size, GFP_KERNEL );
-------------------------------------------------------------------------------

The obj allocated by shmget begins with an extra header of HDRLEN_KMALLOC (8)
bytes, so the shmid_kernel structures allocated by shmget are laid out as
follows:

|------------- 96 --------------|------------- 96 --------------|
+---------------------------------------------------------------+
| HDRLEN_KMALLOC | shmid_kernel | HDRLEN_KMALLOC | shmid_kernel |
+---------------------------------------------------------------+

These HDRLEN_KMALLOC bytes must be skipped when overwriting later.

Information about slabs in the kernel can be obtained from /proc/slabinfo:

-------------------------------------------------------------------------------
[wzt@localhost exp]$ cat /proc/slabinfo | grep kmalloc-96
kmalloc-96       922    924     96   42    1 : tunables    0    0    0 : slabdata     22     22      0
-------------------------------------------------------------------------------

922 is the number of active objs and 924 the total number of objs across all
slabs, so by parsing this file from userspace we can obtain the number of free
objs remaining in the current system:

