The installation of MPI and mpi4py was covered in a previous article; this one introduces some basic usage.
1. mpi4py's Hello World
from mpi4py import MPI
print("helloworld")
mpiexec -n 5 python3 x.py
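As a small extension (not part of the original example), each process can also report its own rank and the total number of processes; a minimal sketch using the standard Get_rank/Get_size calls:

from mpi4py import MPI

comm = MPI.COMM_WORLD

# each process prints its own rank and the total process count
print("helloworld from rank %d of %d" % (comm.Get_rank(), comm.Get_size()))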
2. Point-to-point communication
In mpi4py, a point-to-point send behaves differently depending on the amount of data: when the message is small, the data is copied into a buffer and send returns right away, so it acts like a non-blocking operation; when the message is large, send blocks until the matching receive is posted. This leads to the following behaviour.
When sending a small amount of data:
import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()
comm_size = comm.Get_size()

# point-to-point communication
data_send = [comm_rank] * 5
comm.send(data_send, dest=(comm_rank + 1) % comm_size)
data_recv = comm.recv(source=(comm_rank - 1) % comm_size)
print("my rank is %d, and I received:" % comm_rank)
print(data_recv)
When the amount of data is large, for example when sending:
# point-to-point communication
data_send = [comm_rank] * 1000000
this will cause a deadlock among the processes: every process blocks inside its send call and waits for another process to receive the data, so no process ever reaches its own recv.
In the modified code below, the processes execute in sequence: process 0 sends first, every other process receives from its predecessor before sending on to its successor, and finally process 0 receives from the last process:
import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()
comm_size = comm.Get_size()

data_send = [comm_rank] * 1000000
if comm_rank == 0:
    comm.send(data_send, dest=(comm_rank + 1) % comm_size)
if comm_rank > 0:
    data_recv = comm.recv(source=(comm_rank - 1) % comm_size)
    comm.send(data_send, dest=(comm_rank + 1) % comm_size)
if comm_rank == 0:
    data_recv = comm.recv(source=(comm_rank - 1) % comm_size)
print("my rank is %d, and I received:" % comm_rank)
print(data_recv)
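Another way to avoid the deadlock, not used in the article but part of the standard mpi4py API, is to post non-blocking sends and receives and then wait for both to complete. A minimal sketch with a small payload:

import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()
comm_size = comm.Get_size()

data_send = [comm_rank] * 5

# non-blocking send/receive: both calls return Request objects immediately,
# so every process can post its receive before waiting on its send
req_send = comm.isend(data_send, dest=(comm_rank + 1) % comm_size)
req_recv = comm.irecv(source=(comm_rank - 1) % comm_size)

req_send.wait()
data_recv = req_recv.wait()   # wait() on irecv returns the received object
print("my rank is %d, and I received:" % comm_rank)
print(data_recv)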
3. Group communication
3.1 Broadcast (bcast)
One process sends the same data to all processes.
import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()
comm_size = comm.Get_size()

if comm_rank == 0:
    data = range(comm_size)
data = comm.bcast(data if comm_rank == 0 else None, root=0)
print('rank %d, got:' % (comm_rank))
print(data)
The root process also ends up with a copy of the data; of course it does not receive it over the network, the data is simply already in its own memory.
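For instance (a hypothetical use case, not from the original), broadcasting a dictionary of parameters from the root to all workers looks like this:

import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()

# only the root builds the configuration; everyone else starts with None
if comm_rank == 0:
    config = {"learning_rate": 0.01, "batch_size": 32}
else:
    config = None

# after bcast, every process holds an identical copy of the dictionary
config = comm.bcast(config, root=0)
print('rank %d, config: %s' % (comm_rank, config))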
3.2 Scatter (scatter)
import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()
comm_size = comm.Get_size()

if comm_rank == 0:
    data = range(comm_size)
else:
    data = None
local_data = comm.scatter(data, root=0)
print('rank %d, got:' % comm_rank)
print(local_data)
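Note that scatter expects the root to provide a sequence with exactly one element per process; to distribute a longer list you first split it into comm_size chunks. A hedged sketch (the chunking step is not from the original):

import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()
comm_size = comm.Get_size()

if comm_rank == 0:
    full_data = list(range(100))
    # split the list into comm_size roughly equal chunks, one per process
    chunks = [full_data[i::comm_size] for i in range(comm_size)]
else:
    chunks = None

local_chunk = comm.scatter(chunks, root=0)
print('rank %d got %d items' % (comm_rank, len(local_chunk)))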
3.3 Gather (gather)
Gathers data from all processes back to the root process.
import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()
comm_size = comm.Get_size()

if comm_rank == 0:
    data = range(comm_size)
else:
    data = None
local_data = comm.scatter(data, root=0)
local_data = local_data * 2
print('rank %d, got and do:' % comm_rank)
print(local_data)
combine_data = comm.gather(local_data, root=0)
if comm_rank == 0:
    print("root recv {0}".format(combine_data))
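A related call, not covered in the original text, is allgather, which delivers the combined list to every process rather than only to the root. A minimal sketch:

import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()

local_data = comm_rank * 2

# allgather returns the full list of contributions on every rank, not just root
combine_data = comm.allgather(local_data)
print('rank %d sees %s' % (comm_rank, combine_data))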
3.4 Reduce (reduce)
import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()
comm_size = comm.Get_size()

if comm_rank == 0:
    data = range(comm_size)
else:
    data = None
local_data = comm.scatter(data, root=0)
local_data = local_data * 2
print('rank %d, got and do:' % comm_rank)
print(local_data)
all_sum = comm.reduce(local_data, root=0, op=MPI.SUM)
if comm_rank == 0:
    print('sum is: %d' % all_sum)
Operations such as SUM, MAX, and MIN work as follows: each process computes its local value, the values are combined with the chosen operation, and the single result is delivered to the root process. The operation is selected with the op argument (an allreduce variant is sketched after this list):
op=MPI.SUM
op=MPI.MAX
op=MPI.MIN
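Similarly, allreduce applies the same operation but returns the result to every process instead of only the root. A minimal sketch, not from the original:

import mpi4py.MPI as MPI

comm = MPI.COMM_WORLD
comm_rank = comm.Get_rank()

local_value = comm_rank * 2

# every rank receives the global sum, so no extra broadcast is needed
total = comm.allreduce(local_value, op=MPI.SUM)
print('rank %d, global sum is %d' % (comm_rank, total))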
Reference articles:
"Python multi-core programming: mpi4py practice" (49031845)
"Python high-performance parallel computing with mpi4py"