GPU Memory Model


1. Global Memory

In CUDA, data is normally copied from the host into the video card's memory, which is called global memory. On G80-class hardware global memory has no cache, and the latency of an access is very long, typically hundreds of cycles. Because there is no cache, a large number of threads must be used to hide this latency: when one thread issues a memory read and starts waiting for the result, the GPU immediately switches to another thread and issues its read. With enough threads in flight, the huge latency of global memory can be hidden almost completely.
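A minimal sketch of this pattern (the kernel and variable names are illustrative, not from the original article): allocate global memory with cudaMalloc, copy the input over with cudaMemcpy, and launch far more threads than there are processors so that waiting threads can be swapped out.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void scale(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;          // one global-memory read and one write per thread
}

int main(void)
{
    const int n = 1 << 20;        // 1M elements
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = (float)i;

    float *d;
    cudaMalloc(&d, bytes);                              // allocate global memory
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);    // host -> device copy

    // 4096 blocks of 256 threads: far more threads than SPs, so while
    // some threads wait on memory, others can execute.
    scale<<<(n + 255) / 256, 256>>>(d, n);

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);    // device -> host copy
    printf("h[1] = %f\n", h[1]);                        // expect 2.0

    cudaFree(d);
    free(h);
    return 0;
}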

2. Memory Access Mode

The memory on the video card is DRAM, so the most efficient way to access it is sequentially. While one thread waits for its memory data, the GPU switches to the next thread, so the effective access order looks like thread 0 -> thread 1 -> thread 2 -> ... For the resulting accesses to be sequential in DRAM, consecutive threads should therefore read consecutive addresses, as in the sketch below.
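A sketch of the two patterns (kernel names are illustrative). In the first, consecutive threads touch consecutive addresses, which the hardware can serve as wide, sequential DRAM transactions; in the second, each thread walks its own private chunk, so consecutive threads are far apart in memory and throughput collapses.

__global__ void copy_coalesced(const float *in, float *out, int n)
{
    // Thread 0 reads element 0, thread 1 reads element 1, ...:
    // consecutive threads, consecutive addresses.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];
}

__global__ void copy_strided(const float *in, float *out, int n, int chunk)
{
    // Each thread copies its own contiguous chunk, so at any instant
    // consecutive threads access addresses that are chunk elements apart.
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * chunk;
    for (int j = 0; j < chunk && base + j < n; ++j)
        out[base + j] = in[base + j];
}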

Theoretically, 256 threads can hide at most about 256 cycles of latency: if the scheduler switches to a different thread each cycle, each thread gets a turn only once every 256 cycles. But when the GPU accesses global memory, the latency can exceed 500 cycles, so even more concurrent threads (or more independent work per thread) are needed to cover it fully.

3. Block

Threads within the same block share a block of shared memory and can synchronize with one another. Threads in different blocks cannot communicate or synchronize.

A variable declared with __shared__ resides in shared memory, which is shared by all threads in the same block. It uses on-chip memory, so access is very fast and latency is not a concern.

__syncthreads() is a CUDA built-in function; it is a barrier that every thread in the block must reach before any of them can continue. The sketch below uses __shared__ and __syncthreads() together.
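A minimal sketch (the kernel name and tile size are illustrative): each block stages its slice of the input into shared memory, waits at the barrier until every element has been written, and only then lets each thread read an element that a different thread wrote.

// Launch as reverse_in_block<<<n / 256, 256>>>(d_in, d_out),
// where n is a multiple of 256.
__global__ void reverse_in_block(const float *in, float *out)
{
    __shared__ float tile[256];   // visible to every thread in this block

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = in[i];    // stage into fast on-chip memory

    __syncthreads();              // all writes to tile[] finish before any read below

    // Safe only because of the barrier: read an element written by another thread.
    out[i] = tile[blockDim.x - 1 - threadIdx.x];
}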

4. Architecture (using the G80 as the example)

The G80 is built from multiple streaming multiprocessors (SMs); the GeForce 8800 GTX has 16. Each SM contains eight streaming processors (SPs).

Within an SM, the warp is the unit of thread scheduling; each warp consists of 32 threads.

On the G80, each SM can host up to eight resident blocks and up to 768 resident threads, so each SM can hold at most 768 / 32 = 24 warps.

Each SM has 8192 registers and 16 KB of shared memory; the shared memory is partitioned among the blocks resident on that SM.
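These per-SM limits differ between GPU generations, so rather than hard-coding the G80 numbers they can be queried at runtime. A minimal sketch using the standard CUDA runtime API (device 0 is assumed):

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);    // properties of device 0

    printf("SMs:                     %d\n", prop.multiProcessorCount);
    printf("warp size:               %d\n", prop.warpSize);
    printf("registers per block:     %d\n", prop.regsPerBlock);
    printf("shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);
    printf("max threads per block:   %d\n", prop.maxThreadsPerBlock);
    return 0;
}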
