Parallel computing is the cooperative work of multiple processes to accomplish a specific task. Assume a parallel system with P processors and one process per processor. We can refer to the processes by the numbers 0, 1, ..., P-1, or, for clarity, by Pi, where i is the process number. Processes can pass messages to one another; a message here is simply a data structure.
In parallel programming we describe a process with program code, and every process runs the same program, i.e. the code must be generic: the same code behaves differently depending on the process number it is given. A minimal sketch of this model is given below. Next, we use parallel matrix computation as an example to illustrate it.
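The article does not name a particular message-passing system, so the following sketch assumes MPI purely as one concrete example. Every process runs this same generic code; its rank plays the role of the process number i, and process P0 packs a small message (here a single integer) and passes it to P1.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                   /* enter the parallel environment  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's number i         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes P     */

    if (rank == 0) {
        /* P0 builds a small "message" (a single integer) and passes it to P1. */
        int msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("P%d sent %d to P1\n", rank, msg);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("P%d received %d from P0\n", rank, msg);
    }

    MPI_Finalize();                           /* leave the parallel environment  */
    return 0;
}

Built with an MPI compiler wrapper (e.g. mpicc spmd_demo.c -o spmd_demo, the file name being only a placeholder) and launched with mpirun -np 4 ./spmd_demo, all four processes execute the same code, but only P0 and P1 print anything; that is exactly the sense in which the code is generic.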
Matrix calculation
There are many kinds of matrix computation problems, such as the following; a sketch of a parallel matrix-vector product, a basic building block for such problems, is given after the list:
Solving a system of linear algebraic equations Ax = b
Linear least-squares problems: given b in R^m, find x in R^n that minimizes ||Ax - b||^2
Matrix eigenvalue problems Ax = λx
Matrix singular value decomposition A = UΣV^T
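Parallel algorithms for all of these problems are typically built on distributed basic operations such as the matrix-vector product. The sketch below is not taken from the article; it assumes an N x N matrix A distributed by row blocks over P processes (with P dividing N), each process holding a full copy of x, and again uses MPI only as one possible message-passing layer. Every process Pi computes its own slice of y = Ax, and the slices are then gathered so that every process holds the complete result.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 8   /* global problem size; illustrative only, assumed divisible by P */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                      /* rows of A owned by this process Pi */
    double *A_local = malloc((size_t)rows * N * sizeof *A_local);
    double *y_local = malloc((size_t)rows * sizeof *y_local);
    double x[N], y[N];

    /* Simple test data: A is the identity matrix and x[j] = j, so y should equal x. */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < N; j++)
            A_local[i * N + j] = (rank * rows + i == j) ? 1.0 : 0.0;
    for (int j = 0; j < N; j++)
        x[j] = (double)j;

    /* Each process computes only its own block of rows of y = A*x. */
    for (int i = 0; i < rows; i++) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            s += A_local[i * N + j] * x[j];
        y_local[i] = s;
    }

    /* Exchange the row blocks so every process ends up with the full vector y. */
    MPI_Allgather(y_local, rows, MPI_DOUBLE, y, rows, MPI_DOUBLE, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("y =");
        for (int j = 0; j < N; j++)
            printf(" %.1f", y[j]);
        printf("\n");
    }

    free(A_local);
    free(y_local);
    MPI_Finalize();
    return 0;
}

Distributing A by rows keeps each process's work independent until the final gather; a column-block or two-dimensional block distribution would instead require partial sums to be combined with a reduction, a trade-off that becomes important as N and P grow.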
The correctness of such block-based (chunked) computations can be proved by the well-known method of mathematical induction, but that is not the focus of this article, so we do not elaborate on it here.
How to do parallel programming: Starting with parallel matrix operations