Processor Cache and TLB control


* flush_tlb_all and flush_cache_all flush the entire TLB/cache.
* flush_tlb_mm and flush_cache_mm flush all TLB/cache entries belonging to the address space mm.
* flush_tlb_range and flush_cache_range flush all TLB/cache entries for virtual addresses between start and end in the address space vma->vm_mm.
* flush_tlb_page and flush_cache_page flush all TLB/cache entries for the page at a given virtual address.
* update_mmu_cache is invoked after a page fault has been handled. (The conventional declarations of these hooks are sketched below.)
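For orientation, the conventional forms of these per-architecture hooks look roughly as follows. The exact parameter lists are architecture-specific and have changed across kernel versions (for example, update_mmu_cache may take a pte_t value or a pte_t pointer), so treat this as a sketch rather than a binding API.

    /* Conventional forms of the per-architecture flush hooks; exact
     * signatures vary by architecture and kernel version. */
    void flush_tlb_all(void);
    void flush_cache_all(void);
    void flush_tlb_mm(struct mm_struct *mm);
    void flush_cache_mm(struct mm_struct *mm);
    void flush_tlb_range(struct vm_area_struct *vma,
                         unsigned long start, unsigned long end);
    void flush_cache_range(struct vm_area_struct *vma,
                           unsigned long start, unsigned long end);
    void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
    void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
                          unsigned long pfn);
    /* invoked after a page fault has been handled */
    void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
                          pte_t pte);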
The kernel makes no distinction between data and instruction caches. If a distinction is required, processor-specific code can determine whether the cache holds instructions or data by testing the VM_EXEC flag in vm_area_struct->flags, as in the sketch below.
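As an illustration of such processor-specific code, a hypothetical flush helper might test the flag like this. arch_flush_vma is an invented name; VM_EXEC, flush_cache_range, and flush_icache_range are the real interfaces.

    /* Hypothetical helper: flush a VMA range, touching the instruction
     * cache only when the region may contain executable code. */
    static void arch_flush_vma(struct vm_area_struct *vma,
                               unsigned long start, unsigned long end)
    {
        flush_cache_range(vma, start, end);  /* data side */
        if (vma->vm_flags & VM_EXEC)         /* region may hold code */
            flush_icache_range(start, end);  /* instruction side too */
    }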
The flush_cache_* and flush_tlb_* functions often appear in pairs.
The order of operations is: flush the cache, manipulate the memory, and then flush the TLB. This order is important for two reasons (a sketch of the pattern follows the list).
* If the order were reversed, another CPU in a multiprocessor system could read wrong information from the process's page tables in the window after the TLB has been flushed but before the correct information is provided.
* Some architectures rely on the presence of the "virtual -> physical" translation rules in the TLB when the cache is flushed; flush_tlb_mm must therefore be executed only after flush_cache_mm to guarantee this.
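The following is a minimal sketch of this pattern. modify_mapping and change_page_tables are hypothetical placeholders; flush_cache_range and flush_tlb_range are the interfaces described above.

    /* Sketch of the required ordering; change_page_tables() stands in
     * for whatever page-table manipulation is actually performed. */
    static void modify_mapping(struct vm_area_struct *vma,
                               unsigned long start, unsigned long end)
    {
        flush_cache_range(vma, start, end);  /* 1: flush stale cache lines */
        change_page_tables(vma, start, end); /* 2: manipulate the memory */
        flush_tlb_range(vma, start, end);    /* 3: flush stale translations */
    }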
Some control functions apply explicitly to the data cache or the instruction cache:
* If the cache contains several entries for different virtual addresses that point to the same page in memory, the so-called alias problem can arise; flush_dcache_page helps prevent it.
* When the kernel writes data to a kernel memory location that will subsequently be executed as code, flush_icache_range must be invoked. A standard instance of this scenario is loading a module into the kernel: the binary data are first copied into physical memory and then executed (see the sketch after this list).
* flush_icache_user_range is a special function for the ptrace mechanism.
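A sketch of the module-loading scenario mentioned above. install_code and its parameters are hypothetical placeholders; memcpy and flush_icache_range are the real interfaces.

    /* Copy instructions into memory as ordinary data, then synchronize
     * the instruction cache before the bytes are executed as code. */
    static void install_code(void *dest, const void *src, size_t len)
    {
        memcpy(dest, src, len);
        flush_icache_range((unsigned long)dest,
                           (unsigned long)dest + len);
    }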

Summary

Once the kernel is in normal operation, memory management is handled at two levels.
The buddy system is responsible for managing physical page frames, while the slab allocator handles the allocation of small chunks of memory and provides a kernel-space equivalent of the user-space malloc function family.
The buddy system splits and merges contiguous blocks of memory composed of multiple pages. The slab allocator is implemented on top of the buddy system.
Not only does it allow small chunks of memory to be allocated for arbitrary purposes, but it can also create dedicated caches for frequently used data structures, as in the sketch below.
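As an illustration, a dedicated slab cache for a frequently used structure could be set up as follows. struct foo and the cache name are hypothetical; kmem_cache_create, kmem_cache_alloc, and kmem_cache_free are the real slab interfaces.

    #include <linux/errno.h>
    #include <linux/slab.h>

    struct foo { int a, b; };  /* hypothetical example structure */

    static struct kmem_cache *foo_cache;

    static int foo_cache_init(void)
    {
        /* one dedicated cache serving all struct foo allocations */
        foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0, 0, NULL);
        return foo_cache ? 0 : -ENOMEM;
    }

    static void foo_example(void)
    {
        struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
        if (f)
            kmem_cache_free(foo_cache, f);
    }

For whole page frames, the buddy system is reached through interfaces such as alloc_pages.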
The initialization of memory management is challenging because the data structures used by the subsystem itself require memory, which must be allocated from somewhere.
