Install and configure PETSc in Linux

Source: Internet
Author: User
Tags: lapack, gfortran
After several days of exploration, Ren Zhigang and I finally worked out this method of installing and configuring PETSc on Linux, having started out knowing nothing about it. Almost all of it comes from material found on the network; we only verified its correctness and made some small changes.

To learn PETSc, first look at its "PETSc Users Manual". There is also a Chinese-language introduction, the "PETSc parallel programming method", which may be easier to follow. To install PETSc you must first configure an MPI parallel environment plus BLAS and LAPACK, so we introduce the installation and configuration of PETSc in three steps.

I. Installing the MPICH2 cluster system in Linux (using ssh to establish trust between nodes)
Software Download:
Make sure you have a copy of a newer version of MPICH2: download mpich2-1.0.6p1.tar.gz (the Linux version) from http://www.mcs.anl.gov/research/... ich2-1.0.6p1.tar.gz.

Parallel Environment creation:
Creating an SSH trusted connection (in root's home directory)

1. Edit the /etc/hosts file
# gedit /etc/hosts    (open the hosts file and change it as follows)

127.0.0.1      localhost.localdomain localhost
<node01 IP>    scc-m
<node01 IP>    node01
<node02 IP>    node02
<node03 IP>    node03
<node04 IP>    node04

(replace each <nodeXX IP> with that node's real IP address)
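The hosts entries above can also be appended from the shell. This is only a sketch: the function and the target path are illustrative, and the entries still use placeholder names that must match your real IP addresses.

```shell
#!/bin/sh
# Sketch: append the cluster's host entries to a hosts file.
# Replace the <nodeXX IP> placeholders with real addresses before use.
add_hosts_entries() {
    # $1 = target hosts file (e.g. /etc/hosts)
    cat >> "$1" <<'EOF'
<node01 IP>    scc-m node01
<node02 IP>    node02
<node03 IP>    node03
<node04 IP>    node04
EOF
}

# Example (write to a scratch file first to check the result):
# add_hosts_entries /tmp/hosts.test
```

Writing to a scratch file first lets you inspect the entries before touching /etc/hosts.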

2. Generate an SSH key pair on node01.
# ssh-keygen -t rsa    (press Enter at every prompt)
This generates a .ssh folder.
# ls -a    (check that the .ssh folder exists)

3. Enter the .ssh directory.
# cd .ssh

4. Generate the authorized_keys file
# cp id_rsa.pub authorized_keys

5. Return to root's home directory.
# cd ..

6. Establish a trusted connection
# ssh node01    (type "yes" in full, all three letters, when prompted)

7. Set up node02 (in root's home directory on node02)
# ssh-keygen -t rsa    (generates the .ssh folder)
# scp <node01 IP>:/root/.ssh/* /root/.ssh    (copy the .ssh folder from node01 over the local one)
# scp <node01 IP>:/etc/hosts /etc/hosts    (copy the hosts file from node01 over the local one)
# ssh node01    (type "yes" when prompted and press Enter)

Set up node03 and node04 in the same way as node02.

8. Confirm that the trust connection among all four machines has been established.
Run the following commands on each node:
# ssh node01
# ssh node02
# ssh node03
# ssh node04
Type "yes" at any prompt and press Enter. In the end you should be able to log in without entering a password and without any prompt other than the "Last login: <time and date>" line; this confirms that trust has been established.
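Instead of visiting each prompt manually, a loop like the following sketch can verify the passwordless logins in one pass. BatchMode forbids password prompts, so any host that still asks for a password is reported as FAIL; the node names are the ones assumed in this guide.

```shell
#!/bin/sh
# Sketch: verify passwordless SSH to each node.
check_nodes() {
    for h in "$@"; do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" true 2>/dev/null; then
            echo "OK:   $h"
        else
            echo "FAIL: $h"
        fi
    done
}

# Example:
check_nodes node01 node02 node03 node04
```

Any FAIL line means that node's authorized_keys or hosts file still needs attention.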

Installing MPICH2 (in root's home directory on each node)

1. Extract the archive
# tar -zxvf mpich2-1.0.6p1.tar.gz
or
# gunzip -c mpich2-1.0.6p1.tar.gz | tar xf -

2. Create an installation directory
# mkdir /usr/MPICH-install

3. Enter the mpich2 source directory.
# cd mpich2-1.0.6p1

4. Set the installation directory
# ./configure --prefix=/usr/MPICH-install

5. Compile
# make

6. Install
# make install

7. Return to root's home directory.
# cd ..

8. Modify the environment variables by editing the .bashrc file.
# gedit .bashrc
The modified .bashrc file looks as follows (# in the file marks a comment):

# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

PATH="$PATH:/usr/MPICH-install/bin"   # added line

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi



Save and exit, then make the modified .bashrc take effect:

# source .bashrc

9. Test the environment variable settings
# which mpd
# which mpicc
# which mpiexec
# which mpirun
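The four checks above can be combined into one sketch; if any tool is reported missing, the PATH edit in .bashrc has not taken effect yet.

```shell
#!/bin/sh
# Sketch: confirm the MPICH2 tools are reachable on the PATH.
check_mpich_tools() {
    for t in mpd mpicc mpiexec mpirun; do
        if command -v "$t" >/dev/null 2>&1; then
            echo "found:   $t -> $(command -v "$t")"
        else
            echo "missing: $t"
        fi
    done
}

check_mpich_tools
```
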

10. Create the file /etc/mpd.conf with the content secretword=myword, and set the file permissions mpd requires:
# touch /etc/mpd.conf
# chmod 600 /etc/mpd.conf
# vi /etc/mpd.conf    (add the line secretword=myword)
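The commands above can be combined into one helper; this is a sketch in which the function name, the target path, and the secret word are all illustrative (every node must share the same secret word).

```shell
#!/bin/sh
# Sketch: create an mpd.conf with the permissions mpd insists on.
create_mpd_conf() {
    # $1 = path to mpd.conf, $2 = secret word shared by all nodes
    touch "$1"
    chmod 600 "$1"               # mpd refuses config files readable by others
    echo "secretword=$2" > "$1"
}

# Example:
# create_mpd_conf /etc/mpd.conf myword
```
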

11. Create a host-name list file /root/mpd.hosts
# gedit mpd.hosts

The file content is as follows:
node01
node02
node03
node04
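Equivalently, the file can be generated from the shell; this sketch takes the output path and node names as arguments (the names below are the ones assumed in this guide).

```shell
#!/bin/sh
# Sketch: write a host-name list file, one node name per line.
make_mpd_hosts() {
    # $1 = output file; remaining arguments = node names
    out=$1; shift
    printf '%s\n' "$@" > "$out"
}

# Example:
# make_mpd_hosts /root/mpd.hosts node01 node02 node03 node04
```
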

Testing

1. Local test
# mpd &          (start the mpd daemon)
# mpdtrace       (list the machines on which mpd is running)
# mpdallexit     (shut down all mpd daemons)

2. Boot the cluster system through mpd.hosts
# mpdboot -n <number> -f mpd.hosts    (<number> is the number of machines to start)
# mpdtrace
# mpdallexit


3. Run the MPICH example program
# mpdboot -n 4 -f mpd.hosts    (start four machines)
# mpiexec -n <number> /usr/MPICH-install/examples/cpi    (<number> is the number of processes to use)
# mpdallexit
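The example run can be wrapped in a single sketch. The function name is illustrative; it assumes the mpd tools installed above are on the PATH and refuses to proceed when they are not, so mpdallexit always gets a chance to clean up.

```shell
#!/bin/sh
# Sketch: boot the ring, run the cpi example, then shut the ring down.
run_cpi() {
    # $1 = number of machines to boot, $2 = number of processes
    if ! command -v mpdboot >/dev/null 2>&1; then
        echo "mpdboot not found in PATH; source .bashrc first" >&2
        return 1
    fi
    mpdboot -n "$1" -f mpd.hosts &&
    mpiexec -n "$2" /usr/MPICH-install/examples/cpi
    mpdallexit
}

# Example:
# run_cpi 4 8
```
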

4. If any test fails, work through the Problem Solving section below.

Problem Solving

1. Get help information from mpdcheck
# mpdcheck -pc

2. Check the local host for errors
# mpdcheck -l

3. Check the hosts listed in mpd.hosts for errors
# mpdcheck -f mpd.hosts
If that reports no error, also try:
# mpdcheck -f mpd.hosts -ssh

4. (Skip this step if no error was reported.) Check any two machines against each other:
on m1: # mpdcheck -s    (prints a host name and port)
on m2: # mpdcheck -c <host> <port>

Note: perform the four steps above without mpd running.

5. If mpd itself fails
on m1: # mpd -e &    (prints the port it uses)
on m2: # mpd -h m1 -p <echoed_port_m1> &



Note: Make sure that the firewall is disabled.

Once the above tests pass, the cluster system is complete.



II. LAPACK Installation
Software Download:
The LAPACK distribution already contains a reference BLAS (all three levels), so we only need to download LAPACK itself. Download lapack-3.1.1.tgz from ftp://netlib.org/lapack. The netlib FTP server hosts many other resources; for now we only need lapack-3.1.1.tgz.

Installation steps:
Type

1) gzip -cd lapack-3.1.1.tgz | tar xf -    (decompress the archive)

2) cd lapack-3.1.1

3) cp make.inc.example make.inc

4) gedit make.inc    (modify make.inc)

.........
FORTRAN  = g77
OPTS     = -funroll-all-loops -O3
DRVOPTS  = $(OPTS)
NOOPT    =
LOADER   = g77
LOADOPTS =
.........

Modify this to:

.........
FORTRAN  = gfortran
OPTS     = -funroll-all-loops -O3 -msse2 -mfpmath=sse -ftree-vectorize -g
DRVOPTS  = $(OPTS)
NOOPT    =
LOADER   = gfortran
LOADOPTS =
.........

Save.
5) gedit Makefile    (modify the Makefile)

Because BLAS has not been installed beforehand, change:

include make.inc

all: lapack_install lib lapack_testing blas_testing

lib: lapacklib tmglib
#lib: blaslib lapacklib tmglib

clean: cleanlib cleantesting cleanblas_testing
....

to:

include make.inc

all: lapack_install lib lapack_testing blas_testing

#lib: lapacklib tmglib
lib: blaslib lapacklib tmglib

clean: cleanlib cleantesting cleanblas_testing
....

Save.
6) make

7) Copy blas_LINUX.a, lapack_LINUX.a, and tmglib_LINUX.a into /usr/lib and /usr/local/lib, renaming them libblas.a, liblapack.a, and libtmglib.a.
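The copy-and-rename step as a sketch (the function name is illustrative; run it as root when the destination is a system directory):

```shell
#!/bin/sh
# Sketch: install the three LAPACK build products under their library names.
install_lapack_libs() {
    # $1 = LAPACK build directory, $2 = destination (e.g. /usr/local/lib)
    cp "$1/blas_LINUX.a"   "$2/libblas.a"
    cp "$1/lapack_LINUX.a" "$2/liblapack.a"
    cp "$1/tmglib_LINUX.a" "$2/libtmglib.a"
}

# Example:
# install_lapack_libs ~/lapack-3.1.1 /usr/local/lib
# install_lapack_libs ~/lapack-3.1.1 /usr/lib
```
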

III. PETSc Installation
Software Download:
Download the latest PETSc package from http://www-unix.mcs.anl.gov/petsc/petsc-2/download/index.html (it is updated frequently).

Installation steps:
Suppose you put the downloaded PETSc package in the /home/username/soft folder (username is your user name).

(1) cd /home/username/soft

(2) gunzip -c petsc-2.3.3.tar.gz | tar -xof -    (decompress the package)

(3) cd petsc-2.3.3-p0    (enter the directory you just decompressed)

(4) PETSC_DIR=$PWD; export PETSC_DIR

(5) ./config/configure.py --with-cc=gcc --with-fc=g77 --with-cxx=g++ --with-blas-lapack-dir=/usr/local/lib --with-mpi-dir=/usr/MPICH-install --with-scalar-type=complex --with-clanguage=cxx --with-pic=0    (there is no space before or after any equal sign)

(6) make

(7) make test

Organized by changacacia