Memory wall, spatial locality, temporal locality, and memory latency

Source: Internet
Author: User

 

Generally speaking, memory bus bandwidth has not improved at the same rate as CPU performance (an observation sometimes referred to as the memory wall), and on multi-core and custom-core systems the available bandwidth is shared between all cores. This makes conserving memory bandwidth one of the most important tasks in achieving top performance.

 

Spatial locality

Spatial locality refers to the desirable property of accessing close memory locations. Poor spatial locality is penalized in a number of ways:

Accessing data very sparsely has the negative side effect of transferring unused data over the memory bus, since data travels in fixed-size chunks (cache lines). This raises the memory bandwidth requirements and in practice imposes a limit on application performance and scalability.

Unused data will occupy the caches, which reduces the effective cache size. This causes more frequent evictions and more round trips to memory.

Unused data will reduce the likelihood of encountering more than one useful piece of data in a mapped cache line.
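The penalties above can be illustrated with a small sketch (the function names are my own): summing a matrix row by row follows memory order, while summing it column by column strides across memory, using only one element of each fetched cache line.

```c
#include <stdlib.h>

/* Sum an n x n matrix of 1.0s stored row-major, visiting elements in
 * memory order: consecutive accesses reuse the same cache line. */
double sum_row_major(int n) {
    double *m = malloc((size_t)n * n * sizeof *m);
    for (int i = 0; i < n * n; i++) m[i] = 1.0;
    double s = 0.0;
    for (int i = 0; i < n; i++)          /* row */
        for (int j = 0; j < n; j++)      /* column */
            s += m[i * n + j];
    free(m);
    return s;
}

/* Same sum, but striding down columns: consecutive accesses land
 * n * sizeof(double) bytes apart, so only one element of every
 * fetched cache line is used before it is evicted. */
double sum_col_major(int n) {
    double *m = malloc((size_t)n * n * sizeof *m);
    for (int i = 0; i < n * n; i++) m[i] = 1.0;
    double s = 0.0;
    for (int j = 0; j < n; j++)
        for (int i = 0; i < n; i++)
            s += m[i * n + j];
    free(m);
    return s;
}
```

Both functions compute the same result; for matrices larger than the cache, the row-major version is typically several times faster because it wastes no bus bandwidth on unused line contents.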

 

Temporal locality

Temporal locality relates to reuse of data. Reusing data while it is still in the cache avoids memory fetch stalls and generally reduces the memory bus load.
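One common way to improve temporal locality is loop fusion, sketched below (function names are my own): instead of two full passes over a large array, a single fused pass reuses each element while it is still cache-resident.

```c
#include <stddef.h>

/* Two separate passes: for an array larger than the cache, the start
 * of the array has been evicted by the time the second loop begins,
 * so every element is fetched from memory twice. */
void scale_then_offset(double *a, size_t n) {
    for (size_t i = 0; i < n; i++) a[i] *= 2.0;
    for (size_t i = 0; i < n; i++) a[i] += 1.0;
}

/* Fused version: each element is reused immediately, while it is
 * still in a register or the L1 cache, halving the memory traffic. */
void scale_and_offset(double *a, size_t n) {
    for (size_t i = 0; i < n; i++) a[i] = a[i] * 2.0 + 1.0;
}
```

The two functions produce identical results; only the number of trips over memory differs.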

 

Memory latency

The time it takes to initiate a memory fetch is referred to as memory latency. During this time, the CPU is stalled. The latency penalty is on the order of 100 clock cycles.

Caches were introduced to hide such latency by serving a limited amount of data from a small but fast memory. This works if the data set can be made to fit in the cache.

A different technique is prefetching, where a data transfer is initiated, explicitly or automatically, ahead of when the data is needed, so that the data has hopefully reached the cache by the time it is used.
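With GCC or Clang, explicit prefetching can be requested through the `__builtin_prefetch` builtin. A minimal sketch follows; the prefetch distance `PF_DIST` is an assumed tuning parameter, and the right value depends on the memory latency and the cost of one loop iteration.

```c
#include <stddef.h>

/* Prefetch distance in elements: an assumed tuning parameter,
 * chosen here arbitrarily. */
#define PF_DIST 16

double sum_with_prefetch(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* GCC/Clang builtin: hint the CPU to start fetching
         * a[i + PF_DIST] now, so it arrives in the cache before it
         * is needed. It is only a hint; correctness never depends
         * on it, and it compiles to nothing on targets without a
         * prefetch instruction. */
        if (i + PF_DIST < n)
            __builtin_prefetch(&a[i + PF_DIST]);
        s += a[i];
    }
    return s;
}
```

Note that modern hardware prefetchers already handle simple sequential streams like this one well; explicit prefetching tends to pay off mainly for irregular access patterns such as pointer chasing, where the hardware cannot predict the next address.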

 

Avoiding cache pollution

If a data set has a footprint larger than the available cache, and there is no practical way to reorganize the access pattern to improve reuse, there is no benefit in storing that data in the cache in the first place. Some CPUs have special instructions for bypassing caches, exactly for this purpose.
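On x86, one such mechanism is the non-temporal store, exposed through the SSE2 intrinsic `_mm_stream_si32`. The sketch below (x86/x86-64 only; the function name is my own) fills a large write-once buffer without displacing useful data from the cache:

```c
#include <emmintrin.h>  /* SSE2 intrinsics (x86/x86-64 only) */
#include <stddef.h>

/* Fill a buffer using non-temporal stores, which write to memory
 * without allocating cache lines, leaving the cache available for
 * data that will actually be reused. */
void fill_bypassing_cache(int *dst, size_t n, int value) {
    for (size_t i = 0; i < n; i++)
        _mm_stream_si32(&dst[i], value);
    _mm_sfence();  /* order the streamed stores before later reads */
}
```

This pays off only when the written data will not be read back soon; if it will, the non-temporal store just forces an extra round trip to memory.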

 

 

 
