Summary of "parallel computing"
Zheng Dongjian |
zhengdongjian@tju.edu.cn |
Update log
| # | Update Time | Content |
| 1 | 2016-06-06 21:44 | The OpenMP GCC compilation option is `-fopenmp` |
| 2 | 2016-06-06 21:44 | Correction: virtual processes cannot avoid deadlock (the earlier understanding was wrong) |
1. Parallel Introduction
Domain decomposition. Decomposition object: data. First determine how the data is partitioned among the processors, then determine what each processor does. Example: finding the maximum value.
Task (function) decomposition. Decomposition object: tasks (functions). First divide the tasks among the processors, then determine the data each processor will process.
2. Parallel Hardware Performance
Flynn's taxonomy: SISD (single instruction stream, single data stream), SIMD (single instruction stream, multiple data streams), MISD (multiple instruction streams, single data stream), MIMD (multiple instruction streams, multiple data streams).
Structural models of parallel computers:
PVP (Parallel Vector Processor). Features: no data cache; instead, a large number of vector registers plus an instruction cache. Only programs that fully exploit the characteristics of vector processing achieve good performance.
SMP (Symmetric Multiprocessor): shared memory; scalability is limited.
MPP (Massively Parallel Processor): micro-kernels only (each node has no independent operating system, etc.), high communication bandwidth, low-latency interconnect, distributed storage. Asynchronous MIMD.
Cluster: distributed storage; each node is a complete computer. Small investment risk, flexible structure, cost-effective, makes full use of decentralized computing resources, good scalability. Problem: communication performance.
Memory Access Models
UMA (Uniform Memory Access): physical memory is shared uniformly (all processors have equal access time); processors have private caches; peripherals are also shared.
NUMA (Nonuniform Memory Access): the shared memory is distributed across the processors, so access times differ: local memory (LM) and intra-group shared memory (CSM) are faster, while global shared memory (GSM) is slower; processors have private caches and shared peripherals.
NORMA (No-Remote Memory Access): all memory is private to its node; data is exchanged between nodes via message passing. Interconnect topologies: network, ring, hypercube, cubic ring.
Multi-core technology
Moore's Law: the number of transistors per unit area doubles every 18 months.
Power wall: the higher the performance, the more power is required to improve it further.
Memory wall: memory speed improves more slowly than CPU speed.
Multicore (dual core) vs. Hyper-Threading (HT): a dual core is a true dual processor with no resource conflicts; each thread has its own cache, registers, and arithmetic units. HT raises performance by more than 1/3; a dual core is roughly equivalent to $2 \times N_{HT}$.
Performance metrics:
Execution time (elapsed time): $t_n = t_{compute} + t_{parallel\ overhead} + t_{communication}$
Floating-point operations: Flop (floating-point operation)
Instruction count: MIPS (Million Instructions Per Second)
Compute-to-communication ratio: $\frac{t_{comp}}{t_{comm}}$
Speedup: $S(n) = \frac{t_s}{t_p}$
Efficiency: $E = \frac{t_s}{t_p \times n}$
Cost: $Cost = \frac{t_s n}{S(n)} = \frac{t_s}{E}$ (i.e., $t_p \times n$)
Number of processors $p$; problem size $W$ (serial component $W_s$); parallelized component