Message Passing Interface (MPI) is a standard for message passing that is widely used in parallel computing. MPICH is a commonly used implementation of MPI. The following describes how to set up an MPI environment with VC6 on Windows XP and compile an MPI program.
I. Preparations
1.1 Install the MPI SDK -- MPICH2
MPICH2-1.4.1p1 Windows IA-32 installer:
http://www.mcs.anl.gov/research/projects/mpich2/downloads/tarballs/1.4.1p1/mpich2-1.4.1p1-win-ia32.msi
This document takes installation into the C:\Program Files\MPICH2 directory as an example.
Test the MPICH2 installation
Register a user before testing. The procedure is as follows: choose Start --> All Programs --> MPICH2 --> wmpiregister.exe, then enter a user name and password. Note: the user name must be a valid operating system administrator account, and the password is that account's system logon password.
Next, choose Start --> All Programs --> MPICH2 --> wmpiexec.exe.
Set Application to c:\program files\mpich2\examples\cpi.exe (a built-in example program that computes the value of pi), set the number of processes to 2 so that two processes complete the work together, select "run in a separate window", and click Execute.
Then, when the console window prompts for the number of intervals, enter a number to see the computed value of pi.
Note that the string shown in the Show Command box can be entered directly in a console to obtain a similar result.
1.2 Add MPICH2 to VC6
First add the MPI include and lib directories to VC6.0. In the VC6.0 menu, choose "Tools" > "Options" > "Directories" and add them there.
II. The first MPI Program
Create a Win32 Console Application project and add the following code:
#include <mpi.h>
#include <stdlib.h>
#include <stdio.h>
#include <conio.h>
#pragma comment(lib, "mpi.lib")
int main(int argc, char *argv[])
{
    int myid, numprocs;
    int namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    // MPI_Comm_rank obtains the rank of the calling process, an integer
    // between 0 and p-1 that serves as the process ID.
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    // MPI_Comm_size obtains the number of processes:
    // int MPI_Comm_size(MPI_Comm comm, int *size);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Get_processor_name(processor_name, &namelen);
    printf("Hello World! by MoreWindows\nProcess %d of %d on %s\n",
           myid, numprocs, processor_name);
    MPI_Finalize();
    if (myid == 1)
    {
        printf("\nPress a key and exit.\n");
        getch();
    }
    return 0;
}
Compile and run it, and the first MPI program is born. It can be launched with two processes either through wmpiexec.exe as in Section I, or from a console with mpiexec -n 2 followed by the path of the exe; each process prints its own "Hello World!" line.
III. Running MPI across multiple computers
Only a cluster of multiple computers can bring out the full power of parallel computing, and with the help of MPI this work becomes very simple. The following uses the cpi.exe program mentioned earlier, run across two PCs, as an example.
3.1 Conditions for running MPI on multiple computers
1. Every machine participating in the parallel computation must have at least one identical account registered. For example, an mpi account must be registered on both PCs.
2. Create a job directory. Each machine must have a directory with the same path to hold the job to be executed (that is, the exe program). For example, create the directory d:\MPI\ and copy the executable used for testing into that directory on both PCs.
3.2 The command for running across machines
You can enter "C:\Program Files\MPICH2\bin\mpiexec.exe" -help2 in a console to see all the parameters; they will not be described one by one here. The command used for running across machines is:
mpiexec -hosts n Node_1 Node_2 ... Node_n -noprompt ExePath
"-hosts" indicates that the program is to run across machines; n is the number of machines taking part, and Node_1 (Node_2, ...) is the computer name or IP address of each machine. ExePath is the path of the exe file, here d:\MPI\cpi.exe. For example, with two machines the command would be mpiexec -hosts 2 Node_1 Node_2 -noprompt d:\MPI\cpi.exe, substituting your own machine names.
3.3 Demonstration of running across machines
The result can be compared with that of the standalone run above.
It is not difficult to find that, with the help of MPI, it is easy to use multiple computers for parallel computing.
Author: "MoreWindows"