1. The GPU is superior to the CPU in processing capability and memory bandwidth. This is because the GPU die devotes more area (that is, more transistors) to computation and storage rather than to control (complex control units and caches).
2. Levels of parallelism: instruction-level parallelism --> thread-level parallelism --> processor-level parallelism --> node-level parallelism.
3. Instruction-level parallelism techniques: speculative execution, out-of-order execution, superpipelining, very long instruction words (VLIW), SIMD, and branch prediction. VLIW can reduce the number of instruction fetches from memory.
4. Overly deep pipelines cause efficiency problems and require more accurate branch prediction and larger caches.
5. New challenges for multi-core CPU architecture: the memory wall; balanced design at the chip, board, and system levels; and portability of parallel code (OpenMP, TBB).
6. The CPU and GPU are generally connected through the north bridge via the AGP or PCI-E bus; the GPU has its own independent device memory (see the first code sketch after this list).
7. GPU threads are lightweight, so the cost of switching between them is small.
8. Mainstream CPUs have two to eight cores, and each core has three to six pipelines.
9. CUDA uses coarse-grained task parallelism and data parallelism across multiple stream multiprocessors, and fine-grained data parallelism within a stream multiprocessor (see the second code sketch after this list).
10. Video memory runs at a higher clock frequency than system memory because GDDR chips are soldered directly onto the PCB, while system memory modules connect to the motherboard through slots and therefore have worse signal integrity.
11. Video memory has multiple memory control units, whereas CPU memory controllers usually use dual- or triple-channel technology; the GPU can therefore access more memory chips at the same time than the CPU.
12. The GPU has no complex cache hierarchy or replacement mechanism. GPU caches are read-only, so cache-coherence issues need not be considered.
13. The goal of the GPU cache is not to reduce memory-access latency but to save memory bandwidth.
14. The GPU is designed to run a very large number of threads for high-throughput data-parallel computing. It is well suited to large-scale data-parallel tasks with high compute density and simple control flow (see the second code sketch after this list).
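The independent device memory from note 6 is visible directly in the CUDA host API: buffers must be allocated on the GPU and copied across the bus explicitly. Below is a minimal sketch using the standard cudaMalloc/cudaMemcpy/cudaFree calls; the buffer size and variable names are illustrative assumptions, not from the book.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 20;                  // 1M floats (illustrative size)
    size_t bytes = n * sizeof(float);

    // Host (system) memory and device (video) memory are separate address spaces.
    float *h_data = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

    float *d_data = NULL;
    cudaMalloc(&d_data, bytes);             // allocate in the GPU's own memory

    // Each copy crosses the CPU-GPU bus (PCI-E on modern systems).
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);

    printf("first element after round trip: %f\n", h_data[0]);

    cudaFree(d_data);
    free(h_data);
    return 0;
}
```

The explicit copies are exactly where the bus in note 6 matters in practice: data does not flow between the two memories unless the program moves it.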
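Notes 7, 9, and 14 can all be seen in one trivial kernel. In the sketch below (a standard SAXPY loop, with sizes and names chosen for illustration rather than taken from the book), the grid of thread blocks is the coarse-grained level that gets distributed across stream multiprocessors, the threads inside each block are the fine-grained level, and the launch creates roughly a million lightweight threads whose only control flow is a single bounds check.

```cuda
#include <cuda_runtime.h>

// y = a*x + y: massively data parallel, one simple branch per thread,
// which is the workload shape notes 9 and 14 describe.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n)                                      // the only branch
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;  // ~1M elements (illustrative)
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));  // contents don't matter for the sketch
    cudaMemset(y, 0, n * sizeof(float));

    // Coarse grain: 4096 blocks scheduled across the stream multiprocessors.
    // Fine grain: 256 threads cooperating within each block.
    // ~1M threads total is cheap because GPU threads are lightweight (note 7).
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Launching far more threads than there are processing cores is the normal pattern here; on a CPU the same oversubscription would drown in context-switch overhead.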
CUDA for GPU High-Performance Computing, Chapter 1