About cache lines

Source: Internet
Author: User
An L1 data cache behaves like a small memory. Assume it is 16 KB in size and that it exchanges data with physical memory. These transfers happen a fixed number of bytes at a time, say 16 bytes (some caches use 32): cache bytes 0-15 are read from or written to physical memory as one unit, bytes 16-31 as another, bytes 32-47 as another, and so on. Each such transfer unit is called a cache line.

In addition, the locations where a cache line can be read from or written to physical memory are not arbitrary. Assume the memory is 64 KB and the cache is direct-mapped: the value at cache address 0 can only interact with physical addresses 0, 16K, 32K, and 48K; the value at cache address 1 can only interact with physical addresses 1, 16K+1, 32K+1, and 48K+1; the value at cache address 16K-1 can only interact with physical addresses 16K-1, 32K-1, and 48K-1. This has two consequences:

(1) Suppose a field of object A is 16 bytes long. If it is placed at physical addresses 0-15, it interacts with only the first cache line. If it is placed at addresses 8-23, the CPU must read both the first and the second cache line to access the field. That is obviously slower, so fields are generally cache-line aligned; here that means 16-byte alignment.

(2) Coloring. Some fields of an object are accessed far more often than others. Assume a cache (here meaning the slab cache, not the CPU's L1 data cache discussed above) occupies 5 pages, i.e. 20 KB; the object size is 32 bytes, and the first 16 bytes of each object are the frequently accessed part. Suppose object A starts at physical address 0, object C starts at physical address 32, and object B starts at physical address 16K. Then the first 16 bytes of both A and B interact with the first cache line, and the last 16 bytes of both interact with the second.
The first 16 bytes of object C interact with the third cache line. Now suppose the kernel accesses A and B alternately, in a staggered pattern: for each object, the first 16 bytes are accessed 50 times and the last 16 bytes 10 times (C is accessed similarly). Because A and B map to the same lines and keep evicting each other, the first cache line must interact with memory about 100 times and the second about 20 times, roughly 120 interactions in total. If instead we shift object B backward by 16 bytes, its first 16 bytes interact with the second cache line and its last 16 bytes with the third. The first cache line then needs only about 2 memory interactions, one read at the start and one write-back at the end, because every other access to A's hot half is served directly from the L1 data cache. The second line needs about 20 (only A's rarely touched last 16 bytes contend with B's hot half), and the third about 20 (B's cold half contends with C's hot half), for a total of roughly 42 — you may wish to simulate it carefully. Shifting objects like this reduces the number of cache-memory interactions and improves CPU throughput. This offset (the 16 bytes above) is the color.
