Get current time
After including the header file provided by MPI (mpi.h), you get access to timing functions.
double MPI_Wtime(void) returns the current wall-clock time in seconds, and the resolution of the timer is returned by double MPI_Wtick(void).
For comparison, in C/C++ you include time.h, obtain the current time with clock_t clock(void), and the timer resolution is given by the constant CLOCKS_PER_SEC.
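A minimal sketch of timing a region with MPI_Wtime (the work being timed is a placeholder):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    double t0 = MPI_Wtime();              /* wall-clock time in seconds */
    /* ... work to be timed (placeholder) ... */
    double t1 = MPI_Wtime();

    printf("elapsed: %f s, timer resolution: %g s\n", t1 - t0, MPI_Wtick());

    MPI_Finalize();
    return 0;
}
```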
Point-to-point communication functions
Inter-process communication is carried out through a communicator. The MPI environment automatically creates two communicators at initialization: one called MPI_COMM_WORLD, which contains all the processes in the program, and one called MPI_COMM_SELF, of which each process has its own copy containing only itself. MPI also provides a special process rank, MPI_PROC_NULL, which represents an empty (nonexistent) process; communicating with MPI_PROC_NULL is equivalent to a no-op and has no effect on the program.
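A minimal sketch of how MPI_PROC_NULL removes boundary special-casing in a chain of processes; it uses MPI_Sendrecv, which is not covered in this note, so both directions complete without deadlock:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Neighbors in a non-periodic chain; the ends talk to MPI_PROC_NULL. */
    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    int out = rank, in = -1;
    /* Send to the left, receive from the right; calls involving
       MPI_PROC_NULL return immediately and leave `in` unchanged. */
    MPI_Sendrecv(&out, 1, MPI_INT, left,  0,
                 &in,  1, MPI_INT, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %d\n", rank, in);  /* last rank keeps -1 */

    MPI_Finalize();
    return 0;
}
```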
Use MPI_Barrier(communicator) to synchronize all processes in the communicator.
Use MPI_Send(message, size, data_type, dest_id, tag, communicator) to wrap the data in message into a proper message structure and send it to the process with rank dest_id. Whether the message is buffered first depends on the size of the default buffer.
To send data with MPI_Bsend(message_data, size, data_type, dest_id, tag, communicator), you must first register a buffer by calling MPI_Buffer_attach(buffer, buf_size) for the MPI environment to use.
Use MPI_Buffer_attach(buffer, size) to hand the buffer over to the MPI environment, where buffer is a block of memory allocated with malloc.
Use MPI_Pack_size(size, data_type, communicator, &pack_size) to get the buffer space needed to pack data of a particular type (this does not include the message header, so the true buffer size is buf_size = MPI_BSEND_OVERHEAD + pack_size; if several messages are sent, buf_size must be increased accordingly).
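Putting the three calls together, a sketch of a buffered send (the helper function and variable names here are illustrative, not part of MPI):

```c
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical helper: buffered send of `count` ints to rank `dest`. */
void bsend_ints(const int *data, int count, int dest, MPI_Comm comm) {
    int pack_size;
    MPI_Pack_size(count, MPI_INT, comm, &pack_size);

    /* True buffer size = packed data plus per-message overhead. */
    int buf_size = pack_size + MPI_BSEND_OVERHEAD;
    void *buffer = malloc(buf_size);
    MPI_Buffer_attach(buffer, buf_size);

    MPI_Bsend(data, count, MPI_INT, dest, /*tag=*/0, comm);

    /* Detach waits until buffered messages are on their way,
       after which the memory can be freed. */
    MPI_Buffer_detach(&buffer, &buf_size);
    free(buffer);
}
```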
Use MPI_Recv(message, size, data_type, src_id, tag, communicator, status) to receive data, unpacking what has arrived in the receive buffer into the message array; the function does not return until all of the data has been received.
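A minimal sketch of a blocking send/receive pair between ranks 0 and 1 (run with at least two processes):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int message[4] = {1, 2, 3, 4};
    if (rank == 0) {
        MPI_Send(message, 4, MPI_INT, /*dest_id=*/1, /*tag=*/99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Recv(message, 4, MPI_INT, /*src_id=*/0, /*tag=*/99,
                 MPI_COMM_WORLD, &status);
        printf("rank 1 received %d %d %d %d\n",
               message[0], message[1], message[2], message[3]);
    }

    MPI_Finalize();
    return 0;
}
```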
Tip: in addition to the two send modes above, there are ready-mode MPI_Rsend() and synchronous-mode MPI_Ssend(), which take the same parameters. The difference: if the receive is guaranteed to be posted before the send starts (MPI_Barrier can be used to arrange this, as in the sketch below), MPI_Rsend() can improve efficiency; if the send must not return until a matching receive has begun, use MPI_Ssend().
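A sketch of that ready-mode pattern; it uses a nonblocking MPI_Irecv (not covered in this note) so the receive can be posted before the barrier (run with at least two processes):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, data = 42;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        MPI_Request req;
        /* Post the receive first, then let everyone know via the barrier. */
        MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", data);
    } else {
        MPI_Barrier(MPI_COMM_WORLD);  /* receive is now guaranteed posted */
        if (rank == 0)
            MPI_Rsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```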
Tip: all of the above are blocking calls, meaning the process may block while inside them. You can use other communication modes, or multithreading, to avoid this.
Collective communication
MPI_Bcast: broadcast, so that all P processes hold a copy of the data
MPI_Scatter: scatter, each process receives exactly one piece of the data
MPI_Gather: gather, each process contributes exactly one piece of the data
MPI_Reduce: reduction
More on these later if needed; a minimal sketch of MPI_Bcast and MPI_Reduce follows.
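In this sketch, rank 0 broadcasts a value, each rank modifies its copy, and the results are summed back on rank 0:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, value = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) value = 100;
    /* After the broadcast, every process holds its own copy of value. */
    MPI_Bcast(&value, 1, MPI_INT, /*root=*/0, MPI_COMM_WORLD);

    /* Each process contributes one piece; the sum lands on rank 0. */
    int local = value + rank, sum = 0;
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, /*root=*/0, MPI_COMM_WORLD);

    if (rank == 0) printf("sum = %d\n", sum);

    MPI_Finalize();
    return 0;
}
```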
Data types and predefined constants
Data types used as parameters: MPI_INT, MPI_DOUBLE, MPI_CHAR; and the status structure type MPI_Status.
Predefined constants: MPI_STATUS_IGNORE, MPI_ANY_SOURCE, MPI_ANY_TAG.
Initialization and finalization
Use MPI_Init(&argc, &argv) to initialize the MPI environment; internally this presumably sets up some global state.
Use MPI_Comm_rank(communicator, &myid) to get the rank of the current process within the communicator.
Use MPI_Comm_size(communicator, &numprocs) to get the number of processes in the communicator.
Use MPI_Finalize() to end the parallel programming environment. Note that once finalized, MPI cannot be initialized again within the same process.
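These four calls form the usual skeleton of an MPI program; a minimal sketch:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);        /* set up the MPI environment */

    int myid, numprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    printf("process %d of %d\n", myid, numprocs);

    MPI_Finalize();                /* cannot re-initialize MPI after this */
    return 0;
}
```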
Common interfaces in MPI programming