CUDA for GPU High Performance Computing - Chapter 1

1. The GPU is superior to the CPU in processing capability and memory bandwidth. This is because more of the GPU die area (that is, more transistors) is devoted to computation and storage rather than to control (complex control units and caches).

2. Levels of parallelism: instruction-level parallelism --> thread-level parallelism --> processor-level parallelism --> node-level parallelism.

3. Instruction-level parallelism techniques: superscalar execution, out-of-order execution, superpipelining, very long instruction words (VLIW), SIMD, and branch prediction. VLIW packs several operations into one instruction word, which reduces the number of instruction fetches.

4. Overly deep pipelines cause efficiency problems, because each branch misprediction flushes more stages; they therefore demand more accurate branch prediction and larger caches.

5. New challenges for multi-core CPU architectures: the memory wall; balanced design at the chip, board, and system levels; and programmability and portability issues (OpenMP, TBB).

6. The CPU and GPU are generally connected through the north bridge over an AGP or PCI-E bus, and the GPU has its own independent device memory.

7. GPU threads are lightweight, so thread switching costs little; the hardware exploits this by switching to other threads while one waits on memory, hiding latency.

8. Mainstream CPUs have two to eight cores, and each core has three to six pipelines.

9. CUDA uses coarse-grained task parallelism and data parallelism across the streaming multiprocessors, and fine-grained data parallelism inside each streaming multiprocessor (see the vector-add sketch after this list).

10. Video memory runs at a higher frequency than system memory because GDDR is soldered directly to the PCB, while system memory connects to the motherboard through slots, where signal integrity is comparatively worse.

11. The GPU has multiple memory control units, whereas the CPU's memory controller usually uses dual-channel or triple-channel technology; as a result, the GPU can access more memory chips at the same time than the CPU can.

12. The GPU has no complex cache hierarchy or replacement policy. GPU caches are read-only, so consistency issues do not need to be considered (a constant-memory sketch follows the vector-add example below).

13. The goal of the GPU cache is not to reduce memory-access latency but to save memory bandwidth.

14. The goal of the GPU is to run a large number of threads for high-throughput data-parallel computing. It is suited to large-scale data-parallel tasks with high arithmetic intensity and simple control flow.
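
The kernel below is a minimal sketch of the two granularities described in point 9; the vecAdd name, array size, and launch configuration are illustrative choices, not from the original text. Thread blocks are scheduled independently across the streaming multiprocessors (coarse-grained parallelism), while the threads inside one block execute together on a single multiprocessor (fine-grained data parallelism).

#include <cuda_runtime.h>

// Each thread handles one element: fine-grained data parallelism within a
// streaming multiprocessor. Whole blocks are distributed independently
// across the multiprocessors: coarse-grained parallelism.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard against the last, partial block
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;          // 1M elements (illustrative)
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMalloc(&a, bytes);          // device (video) memory: independent of
    cudaMalloc(&b, bytes);          // host RAM, as noted in point 6
    cudaMalloc(&c, bytes);          // (initialization of a, b omitted)

    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Note that this launch puts far more threads in flight than there are processing cores; because switching between them is nearly free (point 7), the hardware hides memory latency with threads rather than with a large cache.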

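Points 12 and 13 can be illustrated with CUDA constant memory, which is served through a small read-only cache on each multiprocessor; the coefficient table and polynomial kernel here are hypothetical, chosen only to show the mechanism. Because kernels cannot write constant memory there is no coherence to maintain, and when all threads read the same element the cache supplies one broadcast value instead of many memory transactions, saving bandwidth rather than latency.

#include <cuda_runtime.h>

#define NCOEF 16

// Constant memory lives in device memory but is read through a read-only
// cache. No kernel can write it, so there is no coherence problem (point 12);
// repeated reads of the same element by many threads are served from the
// cache, saving memory bandwidth rather than latency (point 13).
__constant__ float coef[NCOEF];

__global__ void evalPoly(const float *x, float *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = 0.0f;
        for (int k = 0; k < NCOEF; ++k)   // every thread reads the same
            v = v * x[i] + coef[k];       // coef[k]: one cached broadcast
        y[i] = v;
    }
}

// Host side: constant memory can only be written from the host,
// before the kernel runs.
void setCoefficients(const float *host_coef)
{
    cudaMemcpyToSymbol(coef, host_coef, NCOEF * sizeof(float));
}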