Refined CentOS Self-Built Clusters


 

Refined Self-Built Clusters, Part 1: Cluster System Configuration

§ 1.1: System Installation

Here we use CentOS 5.5 x64 as an example to build a cluster. We won't go into detail about the system installation itself; just remember to select gcc, g++, and gfortran when customizing the software packages. The master node is named frontend. After the installation, open the /etc/hosts file and make the following changes:

    192.168.1.110 frontend.cluster frontend
    192.168.1.1   node1.cluster    node1
    192.168.1.2   node2.cluster    node2
    192.168.1.3   node3.cluster    node3

PS: Assign the IP addresses of the nodes according to your actual needs. Each entry gives the fully qualified machine name followed by the short machine name, and this file must be identical on all nodes.

§ 1.2: Cloning a Computing Node with CloneZilla

To make the later verification steps easier (password-free login, NFS, NIS, and TORQUE), clone the system installed above. After cloning, change the machine name next to 127.0.0.1 in the first line of /etc/hosts to node1, modify the machine name in /etc/sysconfig/network as well, and edit /etc/sysconfig/network-scripts/ifcfg-eth0 to change the IP address to node1's address, 192.168.1.1. Once the configuration is complete, clone the system to all the remaining machines and make the same modifications on each one.

§ 1.3: Password-Free Access with SSH

CentOS 5.5 installs SSH by default. If it is not installed, mount the installation disc as a yum source and install it as follows:

Step 1.1: In the /etc/yum.repos.d directory, set enabled=0 in every *.repo file except CentOS-Media.repo.
Step 1.2: Alternatively, move all the repo files other than CentOS-Media.repo to another directory, or rename them to *.repo.bak.
Step 1.3: Edit CentOS-Media.repo so that enabled=1 and gpgcheck=0. Note the last entry, baseurl=file:///media/cdrom: here cdrom is the name of the directory where you mounted the image, so modify it accordingly. (Do not let yum search for other yum source files on the network; otherwise, by priority, it will look in CentOS-Base.repo first.)

Step 2: Mount the media. For an ISO image:

    # mount -o loop xxx.iso /media/cdrom

For a physical disc:

    # mount -t iso9660 /dev/hdc /media/cdrom

Step 3: Install the packages:

    # cd /media/cdrom/CentOS        # enter the rpm directory of the CD
    # yum clean all
    # yum install ssh
    # yum localinstall *.rpm --enablerepo=c5-media    # c5-media is the repo id from the repo file above

Now configure password-free login, so that ssh can log on to a node without prompting for a password. On the control node, as the hpc user (it cannot be root!), run:

    $ ssh-keygen -t dsa
    $ cd /home/hpc/.ssh
    $ cat id_dsa.pub >> authorized_keys

Then open the /etc/ssh/sshd_config file and uncomment the AuthorizedKeysFile .ssh/authorized_keys setting around line 45. Restart the sshd service:

    sudo service sshd restart

### Make sure the .ssh directory's permission is 700 and authorized_keys' permission is 644. ###
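Once the key is also present on the nodes (for example, after the /home share in the next section is in place, since /home/hpc/.ssh then lives on the shared file system), a quick loop can confirm that every node really accepts it. This is a minimal sketch, assuming the hpc account and the node names node1 through node3 from /etc/hosts above; adjust the list to your cluster.

    #!/bin/sh
    # Check password-free SSH from the master node to every computing node.
    # Each ssh call should print the remote hostname without asking for a password.
    for n in node1 node2 node3; do
        ssh "$n" hostname || echo "passwordless login to $n failed"
    done

If any node still prompts for a password, re-check the permissions noted above (700 on .ssh, 644 on authorized_keys) on that node.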
§ 1.4: NFS: File Sharing Between Hosts

First, install the NFS software. If yum was configured as described above, you can install nfs-utils directly; if that is a problem, extract the nfs-utils package from the disc and install it. Its dependencies do not seem to matter, but portmap is required. Start the portmap service with sudo service portmap start, then start the NFS service with sudo service nfs start, and add it to the startup services:

    chkconfig --level 345 nfs on

Now open the /etc/exports file, which configures the file systems to be shared, and add the following content:

    /home      192.168.1.0/255.255.255.0(rw,sync,no_root_squash,subtree_check)
    /opt       192.168.1.0/255.255.255.0(rw,sync,no_root_squash,subtree_check)
    /usr/local 192.168.1.0/255.255.255.0(rw,sync,no_root_squash,subtree_check)

Restart the NFS service with sudo service nfs restart; you may also need to start the nfslock service. Use rpcinfo -p localhost to check whether the corresponding service ports are open, execute exportfs -arv to re-export the exports file, or use showmount -e localhost to view the shared directories. Restart the service and make sure there are no errors; if there are any error messages, follow the prompts and resolve the problem with the help of the log files.

To configure a computing node, first start the portmap and nfs services (and possibly nfslock), then check whether the master node's shared directories are visible with showmount -e frontend. If everything works properly, try mounting one of them:

    mount -t nfs frontend:/home /mnt

Then modify the /etc/fstab file on all nodes, mounting each share at the same path as on the master, by adding the following lines:

    frontend:/home      /home      nfs rw,defaults 0 0
    frontend:/opt       /opt       nfs rw,defaults 0 0
    frontend:/usr/local /usr/local nfs rw,defaults 0 0

Finally, restart the network service: sudo service network restart

§ 1.5: NIS: Account Synchronization Between Hosts

First, the server-side settings:
1. Install the NIS software. With yum configured, you can install ypserv directly; this is the server software. The client pieces, yp-tools and ypbind, are generally already installed. If yum is a problem, extract the ypserv package from the CD and install it; its dependencies do not seem to matter, but portmap is required.
2. Set the NIS domain name directly: nisdomainname cluster. To set it at every startup, write /bin/nisdomainname cluster into /etc/rc.d/rc.local; or, to have it set automatically when NIS starts, write NISDOMAIN=cluster into /etc/sysconfig/network. ### Remember to modify the domain name (here cluster) and the server name (here frontend) according to your actual situation. ###
3. The main configuration file, /etc/ypserv.conf, already exists by default; you can simply remove the comment from its last line.
4. Start the related services: service portmap start, service ypserv start, and service yppasswdd start (start each of the services separately), and enable them at boot with chkconfig --level 345 ypserv on and chkconfig --level 345 yppasswdd on. Then enter rpcinfo -u localhost ypserv. If the output is as follows, the startup was successful:

    program 100004 version 1 ready and waiting
    program 100004 version 2 ready and waiting

5. Create the database files:

    # /usr/lib/yp/ypinit -m    # on a 64-bit system it may be under the lib64 directory

When prompted for the next host to add, press Ctrl+D after the local hostname appears, and the database files are created.

Second, the client settings:
1. Create and confirm the domain name, the same as step 2 above.
2. Start ypbind. (You could open /etc/default/nis and set NISSERVER=false and NISCLIENT=true so that the local machine becomes an NIS client, but that is troublesome; it can be configured automatically with one command.) Run:

    authconfig --enablenis --usemd5 --useshadow --update

Then start the service with service ypbind start; it still needs to be enabled at startup as well.
3. Verify NIS. First use yptest and inspect the output; if an error occurs in the third part, it is a complaint about the nobody account. Second, use ypwhich -x to check the number of database maps and their correspondence. Last, use ypcat passwd.byname to read the passwd file from the NIS server; only accounts with UIDs of 500 or above are included. As a final test, delete a common user on a computing node and log in to that node from the master node with the hpc account.

### Whenever you create a user, remember to re-create the database and restart the ypserv and yppasswdd services, or simply run: cd /var/yp; make ###
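Following the note above, here is a minimal sketch of adding a new cluster account once NIS is running. The username newuser is hypothetical; the map rebuild is the cd /var/yp; make step from the note.

    # On the master node, as root: create the account and set its password.
    useradd newuser
    passwd newuser
    # Rebuild the NIS maps so the new account is published to the clients.
    cd /var/yp && make
    # Confirm the account now appears in the passwd map.
    ypcat passwd.byname | grep newuser

After this, newuser should be able to log in on any computing node, with its home directory served over the NFS share configured in § 1.4.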
§ 1.6: Shared Resource Management Software: TORQUE Installation and Setup

TORQUE and Maui can be downloaded from http://www.clusterresources.com. The following is a rough configuration; see the relevant manuals for the detailed configuration:

  · TORQUE: http://www.clusterresources.com/torquedocs21/
  · Maui: http://www.clusterresources.com/products/maui/docs/mauiusers.shtml

1. Install TORQUE on the service node. Assume the machine name of the service node is frontend and the name of a computing node is node1.

    root@frontend# tar zxvf torque-2.4.6.tar.gz
    root@frontend# cd torque-2.4.6
    root@frontend# ./configure --prefix=/opt/torque-2.4.6 --with-rcp=scp
    root@frontend# make
    root@frontend# make install

Here --with-rcp=rcp would transfer files between nodes using the rsh protocol; setting --with-rcp=scp uses the scp protocol instead. Either way, password-free access between the nodes must be configured; see the relevant documentation for details.

2. Initialize the service node and set up TORQUE. Put the directory containing the TORQUE executables into the system path by modifying /etc/profile:

    TORQUE=/opt/torque-2.4.6
    MAUI=/opt/maui-3.3.1
    PATH=$PATH:$TORQUE/bin:$TORQUE/sbin:$MAUI/bin:$MAUI/sbin
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TORQUE/lib:$MAUI/lib

If you set the Maui installation path here as above, you do not need to set it again later. Make the environment variables take effect:

    source /etc/profile

Set root as the management account of TORQUE:

    root@frontend# ./torque.setup root

###### Switch to root here; running it with sudo as a common user prompts that the command does not exist. ######
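The excerpt stops here, but the usual next step in a TORQUE setup is telling pbs_server which computing nodes exist. The sketch below is an assumption-laden illustration, not part of the original article: it assumes the default server home /var/spool/torque (i.e., --with-server-home was not changed at configure time) and hypothetical 4-core nodes.

    # Declare the computing nodes and their core counts (np) for pbs_server.
    root@frontend# cat > /var/spool/torque/server_priv/nodes <<'EOF'
    node1 np=4
    node2 np=4
    node3 np=4
    EOF
    root@frontend# qterm -t quick    # stop pbs_server if torque.setup left it running
    root@frontend# pbs_server        # start it again so the nodes file is read
    root@frontend# pbsnodes -a       # list the nodes the server now knows about

Each node will show as down until its pbs_mom daemon is started, which the full TORQUE documentation linked above covers.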

This article is from the "Persistence is victory" blog; please be sure to keep this source: http://lilinji.blog.51cto.com/5441000/1149706
