Install OpenMPI
Since this is just an experiment, I do not set up a multi-machine configuration and only install inside a virtual machine. Configuring multiple machines is covered in another article.
The easiest way is to install via apt:
sudo apt-get install libcr-dev mpich2 mpich2-doc
Test
hello.c
/* C Example */
#include <mpi.h>
#include <stdio.h>

int main (int argc, char* argv[])
{
    int rank, size;

    MPI_Init (&argc, &argv);               /* starts MPI */
    MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current process id */
    MPI_Comm_size (MPI_COMM_WORLD, &size); /* get number of processes */
    printf ("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize ();
    return 0;
}
Compile, run, and check the output:
mpicc hello.c -o hello
mpirun -np 2 ./hello
Hello world from process 0 of 2
Hello world from process 1 of 2
If the output looks like this, the installation is fine.
Check the OpenMPI version with mpirun --version:
mpirun (Open MPI) 1.6.5
Report bugs to http://www.open-mpi.org/community/help/
Matrix Multiplication with MPI
We accelerate matrix multiplication with OpenMPI, using a master-slave model: process number 0 is the master, and the others are children (or workers, as you wish).
Basic ideas
When multiplying two matrices A and B, the dot product of row i of A and column j of B gives the value at coordinate (i,j) of the new matrix: c(i,j) = sum over t of a(i,t)*b(t,j). For A of size m×n and B of size n×k, the result is m×k. In this experiment m = n = k = 1000, so no distinction is made between m, n, and k; they are all defined by a single MATRIX_SIZE constant.
The simplest idea is to assign each worker MATRIX_SIZE/(numprocess-1) rows; if rows are left over, the remainder is handed out one row each to the first workers. For example, with MATRIX_SIZE=10 and numprocess=4 there are actually 3 workers: each one gets 3 rows, and the last row goes to the worker with id 1. This is very easy to do with round-robin assignment. Finally, the master collects all the results and assembles them in order.
Each worker's job is to receive rows of A from the master, multiply each of them with the B matrix to produce one row of the result, and send that row back to the master.
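As a standalone illustration (plain C, no MPI; this toy program is mine, not part of the experiment), here is the round-robin mapping for the MATRIX_SIZE=10, numprocess=4 example above:

#include <stdio.h>

int main (void)
{
    int matrix_size = 10, numprocess = 4;
    int workers = numprocess - 1;        /* process 0 is the master */
    int i;
    for (i = 0; i < matrix_size; i++) {
        int workid = i % workers + 1;    /* worker ids start at 1 */
        printf ("row %d -> worker %d\n", i, workid);
    }
    /* each worker gets matrix_size/workers rows; workers with
       id <= matrix_size % workers receive one extra row */
    return 0;
}

Running it shows rows 0, 3, 6, and 9 going to worker 1 (the one extra row), and three rows each to workers 2 and 3.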
Code
Plenty of comments have been added to explain the code; the functions it uses are described in the next section.
#include <mpi.h>
#include <stdio.h>

#define MATRIX_SIZE 1000
#define FROM_MASTER 1   /* tag for messages sent by the master */
#define FROM_CHILD 2    /* tag for results sent back by a worker */
#define MASTER 0

MPI_Status status;
int myid, numprocess;
int ans[MATRIX_SIZE*MATRIX_SIZE];   /* the final result */
int a[MATRIX_SIZE*MATRIX_SIZE], b[MATRIX_SIZE*MATRIX_SIZE];

/* Read the input files. Note that reading must happen only in the master;
   otherwise every process reads the files, which is an error. */
void readFile () {
    FILE *fina, *finb;
    int i;
    fina = fopen ("A.txt", "r");
    for (i = 0; i < MATRIX_SIZE*MATRIX_SIZE; ++i) fscanf (fina, "%d", &a[i]);
    fclose (fina);
    finb = fopen ("B.txt", "r");
    for (i = 0; i < MATRIX_SIZE*MATRIX_SIZE; ++i) fscanf (finb, "%d", &b[i]);
    fclose (finb);
    printf ("Read file ok\n");
}

int master () {
    int workid, i, j;
    printf ("numprocess %d\n", numprocess);
    /* send the whole B matrix to every worker */
    for (i = 0; i < numprocess-1; i++)
        MPI_Send (&b, MATRIX_SIZE*MATRIX_SIZE, MPI_INT, i+1, FROM_MASTER, MPI_COMM_WORLD);
    /* assign tasks modulo style; attention: the number of workers is numprocess-1 */
    for (i = 0; i < MATRIX_SIZE; i++) {
        workid = i % (numprocess-1) + 1;
        /* send a single row of A */
        MPI_Send (&a[i*MATRIX_SIZE], MATRIX_SIZE, MPI_INT, workid, FROM_MASTER, MPI_COMM_WORLD);
    }
    /* wait for the computed rows sent back by the workers */
    int templine[MATRIX_SIZE];
    for (i = 0; i < MATRIX_SIZE*MATRIX_SIZE; i++) ans[i] = 0;
    for (i = 0; i < MATRIX_SIZE; ++i) {
        int myprocess = i % (numprocess-1) + 1;
        printf ("master is waiting %d\n", myprocess);
        /* receive row i from the worker it was assigned to */
        MPI_Recv (&templine, MATRIX_SIZE, MPI_INT, myprocess, FROM_CHILD, MPI_COMM_WORLD, &status);
        /* what arrives is one finished row; copy it straight into ans */
        for (j = 0; j < MATRIX_SIZE; j++) ans[MATRIX_SIZE*i+j] = templine[j];
        printf ("master gets %d\n", i);
    }
    for (i = 0; i < MATRIX_SIZE*MATRIX_SIZE; i++) {
        printf ("%d ", ans[i]);
        if (i % MATRIX_SIZE == MATRIX_SIZE-1) printf ("\n");
    }
    printf ("the master is out\n");
    return 0;
}

int worker () {
    int ma[MATRIX_SIZE], mb[MATRIX_SIZE*MATRIX_SIZE], mc[MATRIX_SIZE];
    int i, j, bi;
    /* receive the whole B matrix from the master */
    MPI_Recv (&mb, MATRIX_SIZE*MATRIX_SIZE, MPI_INT, MASTER, FROM_MASTER, MPI_COMM_WORLD, &status);
    for (i = 0; i < MATRIX_SIZE/(numprocess-1); i++) {
        /* receive one row of A from the master */
        MPI_Recv (&ma, MATRIX_SIZE, MPI_INT, MASTER, FROM_MASTER, MPI_COMM_WORLD, &status);
        /* one row of A times the B matrix; mb[j*MATRIX_SIZE+bi] is b(j,bi), i.e. column bi of B */
        for (bi = 0; bi < MATRIX_SIZE; bi++) {
            mc[bi] = 0;
            for (j = 0; j < MATRIX_SIZE; j++) mc[bi] += ma[j] * mb[j*MATRIX_SIZE+bi];
        }
        MPI_Send (&mc, MATRIX_SIZE, MPI_INT, MASTER, FROM_CHILD, MPI_COMM_WORLD);
    }
    /* if this worker falls in the remainder range, it has to compute one extra row */
    if (MATRIX_SIZE % (numprocess-1) != 0 && myid <= MATRIX_SIZE % (numprocess-1)) {
        MPI_Recv (&ma, MATRIX_SIZE, MPI_INT, MASTER, FROM_MASTER, MPI_COMM_WORLD, &status);
        for (bi = 0; bi < MATRIX_SIZE; bi++) {
            mc[bi] = 0;
            for (j = 0; j < MATRIX_SIZE; j++) mc[bi] += ma[j] * mb[j*MATRIX_SIZE+bi];
        }
        MPI_Send (&mc, MATRIX_SIZE, MPI_INT, MASTER, FROM_CHILD, MPI_COMM_WORLD);
    }
    printf ("the worker %d is out\n", myid);
    return 0;
}

int main (int argc, char **argv) {
    MPI_Init (&argc, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &myid);
    MPI_Comm_size (MPI_COMM_WORLD, &numprocess);
    if (myid == MASTER) { readFile (); master (); }
    if (myid > MASTER) worker ();
    MPI_Finalize ();
    return 0;
}
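To build and run it (a sketch of typical usage; the source file name matrix.c and the choice of 4 processes are my own, and A.txt and B.txt must each hold MATRIX_SIZE*MATRIX_SIZE integers):

mpicc -o matrix matrix.c
mpirun -np 4 ./matrix

With 4 processes there are 3 workers, matching the example in the basic ideas section.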
A Brief Introduction to the OpenMPI Functions
This section explains the functions used in the experiment.
MPI provides the programmer with a library for parallel programming: the program achieves parallelism by calling the MPI library. Only 6 of the most basic functions are needed to write a complete MPI program that can solve many problems. These 6 basic functions cover starting and ending the MPI environment, identifying processes, and sending and receiving messages.
In theory, all of MPI's communication functions can be implemented with these six basic calls:
- MPI_Init starts the MPI environment
- MPI_Comm_size determines the number of processes
- MPI_Comm_rank determines a process's own identifier
- MPI_Send sends a message
- MPI_Recv receives a message
- MPI_Finalize ends the MPI environment
Initialization and termination
MPI initialization: enter the MPI environment through the MPI_Init function and complete all initialization work.
int MPI_Init (int *argc, char ***argv)
MPI termination: exit from the MPI environment through the MPI_Finalize function.
int MPI_Finalize (void)
Get the process identifier
Call the MPI_Comm_rank function to get the identifier of the current process within the specified communication domain, distinguishing it from the other processes.
int MPI_Comm_rank (MPI_Comm comm, int *rank)
Get the number of processes in the specified communication domain
Call the MPI_Comm_size function to get the number of processes in the specified communication domain, so a process can determine its share of the task.
int MPI_Comm_size (MPI_Comm comm, int *size)
MPI Messages
A message is like a letter.
The content of the message, like the content of a letter, is called the message buffer in MPI (message buffer).
The source and destination of the message, like the address on a letter, is called the message envelope in MPI (message envelope).
In MPI, a message buffer is identified by the triple <starting address, number of elements, datatype>.
A message envelope is identified by the triple <source/destination process, message tag, communication domain>.
Message sending
The MPI_Send function is used to send a message to a target process.
int MPI_Send (void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
buf is a pointer to the data being sent; for an array a it can simply be &a. count is the number of elements, and datatype is the corresponding MPI datatype. dest is the id of the destination process. tag distinguishes message types, for example whether a message was sent by the master or by a worker.
Message reception
The MPI_Recv function is used to receive a message from a specified process.
int MPI_Recv (void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
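As a minimal sketch of the two calls working together (a toy example of my own, not taken from the matrix program), process 0 sends four integers to process 1, with the buffer and envelope triples labeled:

#include <mpi.h>
#include <stdio.h>
#define TAG 1

int main (int argc, char **argv)
{
    int rank, data[4] = {1, 2, 3, 4};
    MPI_Init (&argc, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* buffer triple <data, 4, MPI_INT>; envelope triple <1, TAG, MPI_COMM_WORLD> */
        MPI_Send (data, 4, MPI_INT, 1, TAG, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        /* the envelope must match: source 0, tag TAG, same communication domain */
        MPI_Recv (data, 4, MPI_INT, 0, TAG, MPI_COMM_WORLD, &status);
        printf ("process 1 received %d %d %d %d\n", data[0], data[1], data[2], data[3]);
    }
    MPI_Finalize ();
    return 0;
}

Run it with mpirun -np 2 ./a.out.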
Compiling and executing
Generate the executable:
mpicc -o programname programname.c
An MPI parallel program consists of a number of concurrently running processes, which may be identical or different. MPI supports only static process creation: every process must be enrolled in the MPI environment before execution, and all of them must be started together. An executable MPI program is usually started from the command line; the exact startup method is determined by the specific implementation. For example, in the MPICH implementation, the following command line starts the same executable on a single machine:
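mpirun -np n programname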
where n is the number of processes to run simultaneously and programname is the name of the executable MPI program.