1. Linux installation
The installation process itself is not detailed here. Note that during installation, if the cluster is not connected to the outside world, you can select rsh as a trusted service without worrying about security; be sure to install the corresponding software package. If the cluster needs to connect to the outside world, select ssh as the trusted service instead, for security. After installation, ensure that each node can log on to the others using ssh and that each node's sshd provides service normally. Host names: node1 ... noden (in the system I built, n = 2).

2. Create an NFS service
Create an mpi directory under the public directory of the server node and configure it as an NFS export. Add a line to the /etc/exports file:

/public/mpi node1(rw) node2(rw)

Add a line to the /etc/fstab file on each client node:

server:/public/mpi nfs rw,bg,soft 0 0

This exports the /public/mpi directory from the server node and mounts it on each client, which makes it easy to distribute tasks among the nodes.

3. Modify the /etc/hosts file
Fill in the names of all nodes together with their IP addresses. Example:

127.0.0.1 localhost.localdomain localhost
192.168.1.1 node1
192.168.1.2 node2
......

Perform the same configuration on each node. The nodes can then reach each other by the names node1 ... noden; you can test this with ping noden or ssh noden.

4. Modify (or create) the /etc/hosts.equiv file
Enter the names of all machines that are allowed to access the local machine for MPI computing, one machine name per line. This step grants access to the other nodes. For example, my node1 is the machine used to start MPI cluster computing and the other nodes take part in the computation, so the /etc/hosts.equiv file on node1 looks like this:

node1 # grant permission to yourself, so that you can simulate the parallel computing environment
node2
......
noden

The /etc/hosts.equiv file on node2 ... noden looks like this:

node1 # grant node1 permission
node2
......
noden

5. Modify the ~/.bash_profile file
First, decide on the user name that will be used to start cluster computing; root is not recommended. Here a new user, chief, is created on each node, with home directory /home/chief. The same password must be used on every node, and the computing programs must later be placed in the same path on each node. For example, if your program is fpi.f and its executable is a.out, you must put a.out in the same path, such as ~/mpirun/a.out, on each node. Modifying ~/.bash_profile mainly means adding the following lines:

export PATH=$PATH:/usr/local/mpich/bin
export MPI_USEP4SSPORT=yes
export MPI_P4SSPORT=22
export P4_RSHCOMMAND=rsh    (or ssh)

We have decided in advance that the mpich runtime environment will be installed in the directory /usr/local/mpich. The other three variables tell the MPI runtime environment to use rsh (or ssh) as the remote shell. The Linux operating environment is now configured.

6. Configure rsh or ssh
If you use rsh as the remote shell to run MPI, you only need to ensure that the same user exists on each node and set that user's password to be empty. If ssh is used as the remote shell, configure it as follows: log on as the user chosen to start MPI computing and run ssh-keygen, which generates a private/public key pair stored in the files ~/.ssh/identity and ~/.ssh/identity.pub. Then grant access by running:

cp ~/.ssh/identity.pub ~/.ssh/authorized_keys
chmod go-rwx ~/.ssh/authorized_keys
ssh-agent $SHELL
ssh-add

Repeat this on each node. Try logging on from one node to another with ssh noden; this generates a known_hosts2 file under ~/.ssh/ containing the access key of that host. Collect all the keys and place an identical copy on each node. After that, no password is required for the nodes to access each other.

7. Start the required services
If you log on to the system as the root user, you can use the ntsysv command to start the ntsysv utility.
The ntsysv utility lets you enable or disable services for different run levels through a simple menu interface. Here we choose to enable rsh, rlogin, telnet, and so on. You can also disable some services, such as sendmail, to speed up startup. If you used the su command to become root, it is quite possible that ntsysv will not display properly; in that case you can directly modify the rsh, rlogin, and telnet settings under /etc/xinetd.d. Open the files with vi (for example, vi /etc/xinetd.d/rsh) and you will see the following configuration files.

The rsh settings are as follows:

# default: off
# description: The rshd server is the server for the rcmd(3) routine and, \
#   consequently, for the rsh(1) program. The server provides \
#   remote execution facilities with authentication based on \
#   privileged port numbers from trusted hosts.
service shell
{
    disable         = yes
    socket_type     = stream
    wait            = no
    user            = root
    log_on_success += USERID
    log_on_failure += USERID
    server          = /usr/sbin/in.rshd
}

The rlogin settings are as follows:

# default: off
# description: rlogind is the server for the rlogin(1) program. The server \
#   provides a remote login facility with authentication based on \
#   privileged port numbers from trusted hosts.
service login
{
    disable         = yes
    socket_type     = stream
    wait            = no
    user            = root
    log_on_success += USERID
    log_on_failure += USERID
    server          = /usr/sbin/in.rlogind
}

The telnet settings are as follows:

# default: off
# description: The telnet server serves telnet sessions; it uses \
#   unencrypted username/password pairs for authentication.
service telnet
{
    disable         = yes
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/in.telnetd
    log_on_failure += USERID
}

All these services are disabled by default after the system is installed, so you need to modify them to enable them. For example, to enable telnet, change disable = yes to disable = no.
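The disable = yes to disable = no change can also be scripted instead of edited by hand. A minimal sketch follows; enable_service is a name of our own choosing, and the demo operates on a temporary stand-in file rather than the live /etc/xinetd.d configs:

```shell
# Flip "disable = yes" to "disable = no" in an xinetd service file.
# enable_service is a hypothetical helper; try it on a copy first.
enable_service() {
    sed 's/disable[[:space:]]*=[[:space:]]*yes/disable = no/' "$1" > "$1.new" \
        && mv "$1.new" "$1"
}

# Demo on a temporary stand-in for /etc/xinetd.d/telnet:
conf=$(mktemp)
printf 'service telnet\n{\n\tdisable = yes\n\tsocket_type = stream\n}\n' > "$conf"
enable_service "$conf"
grep 'disable' "$conf"    # now shows: disable = no
```

To apply the same change to rsh and rlogin, run the helper against their files under /etc/xinetd.d as well.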
Modifications to enable the other services are the same. To make the changes take effect, simply execute:

/etc/rc.d/init.d/xinetd restart

or restart the computer.

C. Compile and install the Fortran90 compiler on the server node
Copy the Intel Fortran90 compiler package to /tmp and decompress it with tar xvfz fortran90.tar.gz. Run ./install and select the type you want to install: select 1 if your machine is based on IA-32, or 2 if it is an Itanium(TM)-based system; select X to stop the installation. After the selection, press Enter to go to the next step. You will then be asked to choose between:

1. Intel(R) Fortran Compiler for 32-bit Applications, Version 6.0
2. Linux Application Debugger for 32-bit Applications, Version 6.0

Select 1 and 2 in turn; finally, select X to complete the installation and exit. After selecting 1 you will be asked to read the copyright statement; enter accept to continue the installation. The default installation path is /opt/intel; press Enter to accept it and continue. Option 2 proceeds the same way as option 1. If you do not register, you can use the compiler for 90 days.

D. Compile and install mpich 1.2.3 on the server node
Download mpich from ftp://ftp.mcs.anl.gov/pub/mpi/mpich.tar.gz and copy it to a temporary directory such as /tmp. Log on as root for compilation and installation.

1. Preprocess the mpich installation
First, decompress the package with tar xvfz mpich.tar.gz; this generates a mpich-1.2.3 directory. Switch to the mpich-1.2.3 directory and run the preprocessing step:

./configure --prefix=/usr/local/mpich              (for a system that uses rsh for remote login)
./configure --prefix=/usr/local/mpich -rsh=ssh     (for a system that uses ssh for remote login)
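The two configure lines above differ only in the remote-shell option, so the choice can be captured in a tiny helper. This is a sketch of our own (configure_cmd is a hypothetical name), assuming the /usr/local/mpich prefix used throughout this guide:

```shell
# Build the mpich configure invocation for the chosen remote shell.
# configure_cmd is a hypothetical helper name for illustration.
configure_cmd() {
    if [ "$1" = "ssh" ]; then
        echo "./configure --prefix=/usr/local/mpich -rsh=ssh"
    else
        echo "./configure --prefix=/usr/local/mpich"
    fi
}

configure_cmd rsh    # prints: ./configure --prefix=/usr/local/mpich
configure_cmd ssh    # prints: ./configure --prefix=/usr/local/mpich -rsh=ssh
```

In practice you would run the printed command from inside the mpich-1.2.3 directory, e.g. eval "$(configure_cmd ssh)".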
Here we tell the build system that the mpich installation location is /usr/local/mpich, and that the remote shell in the running environment is rsh or ssh.

2. Compile
Run make. The mpich package compiles automatically and builds the MPI system function library; this takes several minutes, depending on the machine.

3. Install
Run make install to install the MPI package into the directory specified by ./configure --prefix, here /usr/local/mpich. Then modify the file /usr/local/mpich/share/machines.LINUX as follows:

node1
node2
......
noden

This indicates that all of these nodes are available to the mpich runtime environment for cluster computing. Enter each node in this way.

4. Check whether the installation is correct
Compile cpi.c under /usr/local/mpich/examples/basic with make cpi and run it with the command line:

../../bin/mpirun -np 2 cpi

The following information is obtained:

Process 0 on node1
Process 1 on node2
......

If it works properly, the mpich package has been installed successfully.
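A quick way to sanity-check such a run is to count the "Process ... on ..." lines and compare the count with the -np value. The sketch below is our own illustration (count_processes is a hypothetical name, and the sample output is hard-coded rather than captured from a live mpirun):

```shell
# Count how many MPI processes reported in, reading mpirun's output on stdin.
count_processes() {
    grep -c '^Process [0-9]* on '
}

# Demo with canned sample output; a real check would pipe instead:
#   ../../bin/mpirun -np 2 cpi | count_processes
sample_output='Process 0 on node1
Process 1 on node2'
printf '%s\n' "$sample_output" | count_processes    # prints 2
```

If the printed count matches the number passed to -np, every node launched its process.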