Summary of accelerated installation of Amber11 + AmberTools1.5 + CUDA

The following installation procedure is based on earlier posts on the molecular simulation forum; as long as the steps are followed correctly, the installation and tests should succeed. Since Amber11 is usually installed on clusters, the Intel compilers and the OpenMPI parallel library are used here. Amber11 itself must be purchased to obtain a license, while AmberTools can be downloaded free of charge from www.ambermd.org.

Installation environment:
Dell Precision Workstation T3400, Q9550, 8 GB ECC RAM, GeForce GTX 560 Ti (2 GB)
CentOS 6.2 x86-64, Intel compilers (ifort, icc, Intel MKL), OpenMPI 1.4.3
CUDA Toolkit 4.0

1. Install the Intel compilers
Download the non-commercial versions of the Intel C++ compiler (icc) and the Intel Fortran compiler (ifort) from the Intel website; the current version is 2011.6.233. At the same time you will receive a non-commercial license (valid for one year), which is sent to the email address entered during the application.
Download: http://software.intel.com/en-us/articles/non-commercial-software-download/
Decompress the package, enter the extracted directory, and install:
cd /home/soft/l_fcompxe_2011.6.233
./install.sh
# Select the "Use a license file" option when activating the product
# Unneeded components, such as the Intel Debugger, can be deselected, but Intel MKL must be kept
# Install icc (l_ccompxe_2011.6.233) the same way; among its installation options, select only the Intel C++ Compiler
# Set the environment variables for Intel (edit ~/.bashrc, e.g. with gedit):
source /opt/intel/composer_xe_2011_sp1.6.233/bin/compilervars.sh intel64
export MKL_HOME=/opt/intel/mkl

2. Install the NVIDIA CUDA toolkit
# Download "CUDA Toolkit 4.0" (the CUDA Toolkit for Red Hat Enterprise Linux 6.0) from the NVIDIA website:
# http://developer.nvidia.com/cuda-toolkit-40
./cudatoolkit_4.0.17_linux_64_rhel6.0.run
# Set the environment variables for CUDA (edit ~/.bashrc):
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/lib
export CUDA_HOME=/usr/local/cuda

3. Extract Amber and AmberTools
# First set the environment variables for Amber11 (edit ~/.bashrc):
export AMBERHOME=/home/soft/amber11
export PATH=$PATH:/home/soft/amber11/bin
export DO_PARALLEL="mpirun -np 4"
Extract amber11.tar.bz2 to /home/soft/amber11, then extract AmberTools-1.5.tar.bz2 into the same directory, /home/soft/amber11 (overwrite all files when prompted).
# Because the Intel MKL bundled with l_fcompxe_2011.6.233 is used, Amber's installation would otherwise report an error. Before installing AmberTools, edit the /home/soft/amber11/AmberTools/src/configure file: find "em64t" in mkll="$MKL_HOME/lib/em64t" and replace em64t with "intel64".
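The em64t → intel64 edit can also be done with a one-line sed. The sketch below demonstrates the substitution on a scratch copy (the path is illustrative); after verifying, run the same sed command on the real $AMBERHOME/AmberTools/src/configure file.

```shell
# Demonstrate the substitution on a scratch file first; -i.bak edits
# in place and keeps a backup copy with the .bak suffix (GNU sed).
printf 'mkll="$MKL_HOME/lib/em64t"\n' > /tmp/configure.demo
sed -i.bak 's/em64t/intel64/g' /tmp/configure.demo
cat /tmp/configure.demo    # mkll="$MKL_HOME/lib/intel64"
```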

4. Patch Amber11 and AmberTools
# Patch AmberTools: download "bugfix.all" for AmberTools 1.5 from the Amber website (http://ambermd.org/bugfixesat.html) and place it in the AMBERHOME directory.
cd $AMBERHOME
patch -p0 -N < bugfix.all
# Patch Amber11: download the Amber11 bugfix package and apply_bugfix.x from the Amber website: http://ambermd.org/bugfixes11.html
chmod 700 apply_bugfix.x
./apply_bugfix.x bugfix.1to17.tar.bz2
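As a self-contained sketch of what patch -p0 -N does (the file names below are made up for the demonstration): -p0 uses the path exactly as written in the patch file, and -N skips hunks that appear to be already applied, which is why re-running the bugfix patch is harmless.

```shell
# Build a tiny unified diff and apply it with the same flags used
# for bugfix.all; diff exits non-zero when files differ, hence || true.
cd /tmp
printf 'old line\n' > demo.txt
printf 'new line\n' > demo.new
diff -u demo.txt demo.new > demo.diff || true
rm demo.new                        # leave demo.txt as the only target
patch -p0 -N < demo.diff           # patches demo.txt
cat demo.txt                       # new line
patch -p0 -N < demo.diff || true   # second run: hunk skipped, file unchanged
```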

5. Install serial AmberTools 1.5
cd /home/soft/amber11/AmberTools/src
./configure intel
make serial
# This step is time-consuming (more than ten minutes). To test:
cd ../test
make test
# Check the check.diff file for errors; it is located in:
# /home/soft/amber11/AmberTools/test/logs/test_at_serial

6. Install serial Amber11
cd /home/soft/amber11
./AT15_Amber11.py
cd src
make serial
# To test:
cd /home/soft/amber11/test
make test
# The check.diff file is in the /home/soft/amber11/test/logs/test_amber_serial directory.

7. Install CUDA-accelerated PMEMD
# In Amber11, only PMEMD supports CUDA acceleration
cd /home/soft/amber11/AmberTools/src
make clean
./configure -cuda intel
cd /home/soft/amber11/
./AT15_Amber11.py
cd src
make clean
make cuda
# To test:
cd /home/soft/amber11/test/
./test_amber_cuda.sh
# Log files are in the /home/soft/amber11/test/logs/test_amber_cuda directory.

8. Install openmpi-1.4.3 within AmberTools
Download openmpi-1.4.3.tar.bz2 (http://www.open-mpi.org/)
cp openmpi-1.4.3.tar.bz2 $AMBERHOME/AmberTools/src
cd $AMBERHOME/AmberTools/src
tar -jxvf openmpi-1.4.3.tar.bz2
./configure_openmpi intel
# Add the OpenMPI environment variables:
export MPI_HOME=$AMBERHOME/AmberTools
export PATH=$AMBERHOME/AmberTools/exe:$PATH
export LD_LIBRARY_PATH=$AMBERHOME/AmberTools/lib:$LD_LIBRARY_PATH
# At this point OpenMPI still does not take effect: "which mpirun" shows the mpirun bundled with the Intel compiler suite. Rename the MPI directory bundled with Intel so that the OpenMPI copy is found instead.
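The PATH-ordering issue can be checked in isolation. The sketch below uses two hypothetical stand-in directories for the Intel-bundled MPI and the newly built OpenMPI; command -v (like which) returns whichever copy appears first in PATH, which is why the Intel MPI directory must be renamed or moved behind OpenMPI.

```shell
# Two dummy mpirun copies standing in for the Intel-bundled one and
# the OpenMPI one; the shell picks the first match along PATH.
mkdir -p /tmp/intel_mpi/bin /tmp/openmpi/bin
printf '#!/bin/sh\necho intel\n'   > /tmp/intel_mpi/bin/mpirun
printf '#!/bin/sh\necho openmpi\n' > /tmp/openmpi/bin/mpirun
chmod +x /tmp/intel_mpi/bin/mpirun /tmp/openmpi/bin/mpirun

(PATH=/tmp/intel_mpi/bin:/tmp/openmpi/bin; command -v mpirun)  # /tmp/intel_mpi/bin/mpirun
(PATH=/tmp/openmpi/bin:/tmp/intel_mpi/bin; command -v mpirun)  # /tmp/openmpi/bin/mpirun
```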

9. Install the parallel version of Amber11
cd /home/soft/amber11/AmberTools/src
./configure -mpi intel
cd /home/soft/amber11
./AT15_Amber11.py
cd src
make clean
make parallel
# To test:
cd /home/soft/amber11/test
make test.parallel

10. Reinstall OpenMPI
# Installation directory: /home/soft/openmpi
# Comment out the OpenMPI environment variables that point into the AmberTools directory, reinstall OpenMPI to /home/soft/openmpi, and add the following to .bashrc:
# openmpi
export MPI_HOME=/home/soft/openmpi
export PATH=/home/soft/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/home/soft/openmpi/lib:$LD_LIBRARY_PATH
# This step is not strictly necessary, but it makes installing other parallel software, such as GROMACS, more convenient later.

11. Copy root's .bashrc to your own .bashrc
# The installation above was performed as root, but it can also be done as a normal user.
# Final .bashrc file for a normal user:
# /home/yuanxh/.bashrc
#___________________________________________________________
# Intel icc
source /home/soft/intel/composer_xe_2011_sp1.6.233/bin/compilervars.sh intel64
export MKL_HOME=/home/soft/intel/mkl
#________________________________________________________________
# Amber11
export AMBERHOME=/home/soft/amber11
export PATH=$PATH:/home/soft/amber11/bin
export DO_PARALLEL="mpirun -np 4"
#________________________________________________________________
# openmpi
export MPI_HOME=/home/soft/openmpi
export PATH=/home/soft/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/home/soft/openmpi/lib:$LD_LIBRARY_PATH
#________________________________________________________________
# fftw
export CPPFLAGS=-I/opt/fftw3/include
export LDFLAGS=-L/opt/fftw3/lib
export FFTW_LOCATION=/opt/fftw3
export FFTW3F_LIBRARIES=/opt/fftw3/lib
export FFTW3F_ROOT_DIR=/opt/fftw3
export FFTW3F_INCLUDE_DIR=/opt/fftw3/include
#________________________________________________________________
# Gromacs-4.5.5
export PATH=$PATH:/home/soft/gmx/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/soft/gmx/lib
#________________________________________________________________
# VMD
export PATH=$PATH:/home/soft/vmd/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/soft/vmd/lib
#________________________________________________________________
#________________________________________________________________

12. Notes and other instructions

* Note 1: If the machine is 64-bit, the installed software must also be 64-bit.
* Note 2: Check the machine's CPU type (e.g. MIPS, Intel, Opteron); the compiler options differ by CPU type. For Intel cores the Intel compiler is recommended, and for Opteron cores the PGI compiler. The GNU compiler is not recommended because its runtime efficiency is low. Opteron generally does not work well with the Intel compilers, so choose the GNU or PGI compilers there. Make sure the parallel libraries and AMBER are built with the same compiler and the same compilation options.
* Note 3: Use consistent compilation options: build everything as 32-bit or everything as 64-bit. Do not compile the parallel libraries as 32-bit and AMBER as 64-bit, or vice versa.
* Note 4: Parallel libraries include OpenMPI, LAM, and MPICH. OpenMPI is easy to use; MPICH supports gigabit Ethernet but not InfiniBand high-speed networks. On a high-performance machine, choose OpenMPI for parallel computing; on an ordinary cluster, MPICH is adequate.
* Note 5: If you use the PGI compiler, pay special attention to the per-CPU option (-tp); check the man page before compiling. The netCDF library bundled with AMBER9 seems to have a problem: its Makefile is independent of the config.h generated by AMBER, so on some systems the configurations become inconsistent and the library functions cannot be found at link time. The solution is to edit the Makefile in the netcdf directory manually so that its compiler, compilation, and link options match the files generated by AMBER. This problem seems to occur only with pgi/opteron.
* Note 6: Before testing a parallel version, set the environment variable, e.g. export DO_PARALLEL='mpirun -np 4'. The actual parameters differ from machine to machine.
* Note 7: The OpenMPI versions supported by configure_openmpi are 1.4.2 and 1.4.3.
* Note 8: This release supports the MKL 10.0 and 11.0 series. If you use version 9.0 or earlier, add the -oldmkl parameter to configure.
* Note 9: The parallel-build flags have been simplified: whatever MPI implementation is used, the configure flag is -mpi (provided the environment variables are set).
* Note 10: PMEMD no longer needs to be installed separately in Amber11.
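The word-size checks behind Notes 1-3 can be sketched with standard tools before choosing compilers and options:

```shell
# uname -m reports the machine architecture (x86_64 means 64-bit);
# getconf LONG_BIT reports the word size the C environment uses.
uname -m           # e.g. x86_64
getconf LONG_BIT   # 64 on a 64-bit system
```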
