Automatic deployment of Hadoop clusters based on Kickstart

This article introduces a highly automated Red Hat Linux installation method: unattended CentOS installation based on Kickstart and PXE. Because Kickstart supports scripts, Kickstart can also be used to automate the deployment of Hadoop clusters. This article builds a solution that uses the Kickstart script to automatically deploy a Hadoop cluster based on a resource allocation file.

Kickstart configuration file structure

The Kickstart file consists of three parts that must appear in the specified order; within each part there is no required internal order. The three parts, in order, are: [1]

  • Command section, which must include the required options.
  • %packages section, which selects the software packages to be installed.
  • %pre and %post sections, which can appear in either order and are optional.

During the installation of Red Hat series Linux, the Anaconda installation manager creates a simple Kickstart file and saves it as /root/anaconda-ks.cfg. You can modify this file directly to create your own Kickstart configuration file. The following is the basic Kickstart file used in this article:

# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
url --url=http://192.168.60.144/pxe/
lang zh_CN.UTF-8
keyboard us
network --onboot yes --device eth0 --bootproto dhcp --noipv6
#password=root
rootpw  --iscrypted $6$.L9W0uhR$TxVuurKHI254jwC9i0I6q/TPzJc.2RQYLy/YP.v5xfgzsOsP1ylRR0uvkLNP/ibfPmNiWkFrqtDJ.wBOJ5unu1
firewall --disabled
authconfig --enableshadow --passalgo=sha512
selinux --disabled
timezone --utc Asia/Shanghai
text
bootloader --location=mbr --driveorder=sda --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
zerombr
#autostep --autoscreenshot
#ignoredisk --only-use=sda
clearpart --all --drives=sda
part / --bytes-per-inode=4096 --fstype="ext4" --size=4096
part /boot --bytes-per-inode=4096 --fstype="ext4" --size=100
part swap --bytes-per-inode=4096 --fstype="swap" --size=1024
part /home --bytes-per-inode=4096 --fstype="ext4" --grow --size=1
#repo --name="CentOS"  --baseurl=cdrom:sr0 --cost=100
%packages --nobase
@core
%end
halt

This file does not contain a %post script; it will be added later by the configuration script.

%pre is Kickstart's pre-installation script: commands placed here run immediately after the ks.cfg file has been parsed. This part must be placed at the end of the kickstart file (after the command section) and must start with the %pre command. The network can be accessed in %pre; however, the naming service has not been configured yet, so only IP addresses can be used. Note that the pre-installation script is not run in the changed root (chroot) environment.

Some common options for %pre:
--interpreter /usr/bin/python: allows you to specify a different scripting language, such as Python. Replace /usr/bin/python with the desired scripting language. [2]
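
As an illustration only (this %pre section is not part of the author's ks.cfg), a minimal %pre section might record the MAC address of the first NIC for later inspection; any network access at this point would have to use raw IP addresses:

%pre
# sketch only: runs right after ks.cfg is parsed, outside the chroot;
# name resolution is not available yet
ifconfig eth0 | grep HWaddr > /tmp/pre-mac.txt
%end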

%post is the post-installation script. It must be placed at the end of the kickstart file and must start with the %post command. It is used to perform tasks such as installing additional software and configuring a name server.
Note that if the network is configured with static IP information and a name server, IP addresses can be accessed and resolved in %post. If the network is configured with DHCP, the /etc/resolv.conf file is not yet ready when the installer reaches %post; in that case the network can be accessed, but names cannot be resolved, so IP addresses must be used directly. In addition, the post-install script runs in a chroot environment, so tasks such as copying scripts or RPMs from the installation media cannot be performed there. [3]
Some common options for %post:

--nochroot: allows you to specify commands that you want to run outside the chroot environment.
In the following example, the /etc/resolv.conf file from the installation environment is copied into the newly installed file system:

%post --nochroot cp /etc/resolv.conf /mnt/sysimage/etc/resolv.conf

--interpreter /usr/bin/python: allows you to specify a different scripting language, such as Python. Replace /usr/bin/python with the desired scripting language.
--log=/tmp/post-install.log: specifies the path where the %post script execution log is saved.
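
A hedged sketch of how these options can be combined on the %post line itself (the log path is the one mentioned above; the body is only illustrative):

%post --log=/tmp/post-install.log
# commands here run inside the chroot of the freshly installed system
echo "post-install started"
%end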

This article uses the %post script to install and automatically configure the Hadoop cluster.

Hadoop cluster resource allocation file structure

The resource allocation file allocates the network resources of the Hadoop cluster. The %post script searches the resource allocation file for the MAC address of the local machine, obtains the local host name, IP address, and other information, and then configures the machine automatically. The MAC addresses must be known in advance; for physical machines they are provided by the manufacturer, and for virtual machines they can be specified manually. Note: do not use VMware's auto-generate MAC address feature; a host given an address starting with 00:50:56 will end up with one starting with 00:0C:29, so manually specify MAC addresses that start with 00:0C:29.

An example Resource Allocation File

# Hostname  MAC address        IP address     Netmask        Gateway       Running processes     Cluster role  ID
Master      00:0C:29:11:00:00  192.168.60.20  255.255.255.0  192.168.60.2  NameNode,JobTracker   master        HADOOP
Slave1      00:0C:29:00:00:01  192.168.60.31  255.255.255.0  192.168.60.2  DataNode,TaskTracker  slave         HADOOP
Slave2      00:0C:29:00:00:02  192.168.60.32  255.255.255.0  192.168.60.2  DataNode,TaskTracker  slave         HADOOP
Automated Deployment Solution

First, build a PXE installation server that can install and configure the Hadoop cluster. The server provides the required software packages, the resource allocation file, and the Hadoop installation script [4]. In this experimental cluster, each machine is configured as an rsync server so that the Master's public key file can be synchronized.

Topology:

Deploy the Hadoop topology using Kickstart

Deployment Flowchart

Automatic deployment of Hadoop using PXE

Implementing the deployment: add options to the PXE server configuration script

Modify the PXE server configuration script, adding parameter options and functions that configure the automatic deployment of Hadoop.

# Check the parameters
[ $# -eq 0 ] && { PrintHelp; exit 1; }
[ $1 != "-d" ] && [ $1 != "-h" ] || [ $1 = "--help" ] && { PrintHelp; exit 1; }
if [ $1 = "-h" ]; then
    [ $# -eq 1 ] && { echo -e "Please specify the resourcefile. Examples can be found at conf/network.conf.\nExample: ./init_pxeserver.sh -h conf/network.conf"; exit 1; }
    [ ! -f $2 ] && { echo -e "resourcefile error! Please check resourcefile path\n"; exit 1; }
fi

The PXE installation server is set to different types based on the parameters: -d is the default, a plain CentOS automatic installation server, and -h is the Hadoop automatic deployment server. The main difference between the two is the ks file. If the option is -h, the ks file is configured for automatic Hadoop deployment.

[ $1 = "-h" ] && HadoopKS 2>&1 | tee -a /tmp/init_pxeserver.log
Automatically configure the Hadoop ks File

Add a %post script for Hadoop deployment to the default ks file; the script is appended with a cat >> ks.cfg heredoc.

Configure network information based on the resource allocation file

First, obtain the MAC address of the local machine, then use it as an index to look up the local IP address, netmask, gateway, host name, and other information, and configure them automatically. The script also configures the hosts file on each cluster machine, adding a record for every host in the cluster.

curl -o network.conf http://$IPADDR/$CONFDIR/network.conf &>/dev/null && echo -e "[SUCC]: resourcefile download ok!"
MAC=`ifconfig eth0 |grep HWaddr |awk '{print \$5}'`
IP=`grep "\$MAC" network.conf |grep HADOOP |awk '{print \$3}'`
HOST=`grep "\$MAC" network.conf |grep HADOOP |awk '{print \$1}'`
NETMASK=`grep "\$MAC" network.conf |grep HADOOP |awk '{print \$4}'`
GATEWAY=`grep "\$MAC" network.conf |grep HADOOP |awk '{print \$5}'`
#set static ip addr
IFCFG=/etc/sysconfig/network-scripts/ifcfg-eth0
sed -i "s/BOOTPROTO.*/BOOTPROTO=static/g" \$IFCFG
echo "IPADDR=\$IP" >>\$IFCFG
echo "NETMASK=\$NETMASK" >>\$IFCFG
echo "GATEWAY=\$GATEWAY" >>\$IFCFG
# add hostname to /etc/hosts
cat network.conf |grep HADOOP |awk '{print \$3" "\$1}' >>/etc/hosts
# reset hostname
sed -i "s/HOSTNAME.*/HOSTNAME=\$HOST/g" /etc/sysconfig/network
Install Hadoop

This step is relatively simple: download the software packages and the installation script from the installation server, then run the installation script.

mkdir soft && cd soft
curl -o $HADOOPFILE http://$IPADDR/$SOFTDIR/$HADOOPFILE &>/dev/null && echo -e "[SUCC]: hadoop download ok!"
curl -o $JDKFILE http://$IPADDR/$SOFTDIR/$JDKFILE &>/dev/null && echo -e "[SUCC]: jdk download ok!"
cd ../
curl -o hadoop_centos.sh http://$IPADDR/$CONFDIR/hadoop_centos.sh &>/dev/null && echo -e "[SUCC]: hadoop_centos.sh download ok!"
bash hadoop_centos.sh &>/dev/null && echo -e "[SUCC]: run hadoop_centos.sh ok!"
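
The hadoop_centos.sh script itself is not reproduced in this article. A minimal sketch of what such a script might do, assuming $HADOOPFILE and $JDKFILE are RPM packages (which would match the /usr/share/hadoop and /etc/hadoop layout seen later) and that root's home directory is used for the ssh key:

#!/bin/bash
# sketch only (assumptions): install the downloaded packages and prepare an ssh key
rpm -ivh soft/jdk-*.rpm    &>/dev/null && echo "[SUCC]: jdk install ok!"
rpm -ivh soft/hadoop-*.rpm &>/dev/null && echo "[SUCC]: hadoop install ok!"
# passwordless DSA key; its id_dsa.pub is exchanged between master and slaves later
mkdir -p /root/.ssh && chmod 700 /root/.ssh
[ -f /root/.ssh/id_dsa ] || ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa &>/dev/null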
Configure the rsync service

The slaves need to synchronize the master's key. An rsync server that allows anonymous access [5] is configured on the master, and the slaves then pull the master's public key from it.

curl -o xinetd-2.3.14-39.el6_4.x86_64.rpm http://$IPADDR/$SOFTDIR/package/xinetd-2.3.14-39.el6_4.x86_64.rpm &>/dev/null && echo -e "[SUCC]: xinetd down ok!"
curl -o rsync-3.0.6-9.el6_4.1.x86_64.rpm http://$IPADDR/$SOFTDIR/package/rsync-3.0.6-9.el6_4.1.x86_64.rpm &>/dev/null && echo -e "[SUCC]: rsync down ok!"
rpm -ivh xinetd-2.3.14-39.el6_4.x86_64.rpm &>/dev/null && echo -e "[SUCC]: xinetd install ok!"
rpm -ivh rsync-3.0.6-9.el6_4.1.x86_64.rpm &>/dev/null && echo -e "[SUCC]: rsync install ok!"
sed -i "s/disable.*/disable = no/g" /etc/xinetd.d/rsync
cat >/etc/rsyncd.conf <<EOF
...
EOF
cat >/etc/rsyncd.pwd <<EOF
...
EOF
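
The exact contents of /etc/rsyncd.conf and /etc/rsyncd.pwd are not shown above. A minimal sketch of an anonymous, read-only rsync module that would let the slaves pull the master's key might look like the following; the module name pubkey and the path /tmp match the key-synchronization script below, while the remaining settings are assumptions:

# sketch only: anonymous read-only module exporting the master's public key
cat >/etc/rsyncd.conf <<EOF
uid = nobody
gid = nobody
use chroot = no
read only = yes
[pubkey]
    path = /tmp
    comment = master public key
EOF
service xinetd restart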
Synchronize the master public key

The master node copies its public key file id_dsa.pub to the rsync server directory, and the slaves pull it from there. This involves retrying on failure: because a slave's installation may run ahead of the master's, the key may not be ready when a slave tries to pull it. The script therefore makes the slave wait 5 seconds and retry whenever a pull fails, up to 100 retries in total. For this reason, make sure the master is started before the slaves. Likewise, when adding a new slave to a cluster, keep the master running.

# slave pull id_dsa.pub from master
MASTER=`cat network.conf |grep master |awk '{print \$1}'`   # hostname of master
SLAVE=`cat network.conf |grep slave |awk '{print \$1}'`     # hostname of slave
if [ "\$HOST" = "\$MASTER" ]; then
    cp $HOMEDIR/.ssh/id_dsa.pub /tmp/id_dsa.pub
    cd /tmp
    chmod 777 id_dsa.pub
else
    i=0
    while [ \$i -lt 100 ]
    do
        # pull from master
        rsync Master::pubkey/id_dsa.pub . &>/dev/null && r=0 || r=1
        # check if rsync was successful, otherwise retry
        if [ \$r -eq 0 ]; then
            break;
        else
            sleep 5          # sleep 5 seconds then retry
            ((i+=1))         # retry 100 times
            echo "retry \$i ..."
        fi
    done
    cat $HOMEDIR/.ssh/authorized_keys | grep `cat id_dsa.pub` &>/dev/null && r=0 || r=1
    [ \$r -eq 1 ] && cat id_dsa.pub >> $HOMEDIR/.ssh/authorized_keys
fi
# add id_dsa.pub of Controller
cat $HOMEDIR/.ssh/authorized_keys | grep "$KEY" &>/dev/null && r=0 || r=1
[ \$r -eq 1 ] && echo "$KEY" >> $HOMEDIR/.ssh/authorized_keys
Hadoop cluster configuration

This involves configuring several files: core-site.xml, hdfs-site.xml, mapred-site.xml, masters, and slaves [6]. Note that when a new slave is added to a cluster, it is simply appended to the end of the resource allocation file, so the masters, slaves, and hosts files on the existing nodes are no longer in sync with the new node. Tools such as cfengine or Puppet could be used to address this later.

# Configure cluster
cd /etc/hadoop
# use the following core-site.xml
cat > core-site.xml <<core
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://\$MASTER:9000</value>
    </property>
</configuration>
core
# use the following hdfs-site.xml
cat > hdfs-site.xml <<hdfs
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>
hdfs
# use the following mapred-site.xml
cat > mapred-site.xml <<mapred
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>\$MASTER:9001</value>
    </property>
</configuration>
mapred
# Add master and slave host names
cat > masters <<masters
\$MASTER
masters
cat > slaves <<slaves
\$SLAVE
slaves
Cluster verification test

When Hadoop is started for the first time, HDFS must be formatted. Execute:

[root@Master ~]# hadoop namenode -format
14/05/12 01:07:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Master/192.168.60.20
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:27:42 PDT 2013
STARTUP_MSG:   java = 1.7.0_51
************************************************************/
Re-format filesystem in /tmp/hadoop-root/dfs/name ? (Y or N) y
Format aborted in /tmp/hadoop-root/dfs/name
14/05/12 01:07:59 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Master/192.168.60.20
************************************************************/

Start Hadoop by running:

[root@Master ~]# start-all.sh
starting namenode, logging to /var/log/hadoop/root/hadoop-root-namenode-Master.out
Slave1: starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-Slave1.out
Slave2: starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-Slave2.out
Master: starting secondarynamenode, logging to /var/log/hadoop/root/hadoop-root-secondarynamenode-Master.out
starting jobtracker, logging to /var/log/hadoop/root/hadoop-root-jobtracker-Master.out
Slave1: starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-Slave1.out
Slave2: starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-Slave2.out
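
As an additional check (not part of the original article), the Hadoop daemons running on each node can be listed with the JDK's jps tool:

# run on the master: expect NameNode, SecondaryNameNode and JobTracker
# run on each slave: expect DataNode and TaskTracker
jps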

On the host machine, edit the hosts file and add IP records for the Hadoop cluster hosts:

#Hadoop
192.168.60.10  ctrl
192.168.60.20  master
192.168.60.31  slave1
192.168.60.32  slave2

Then access http://master:50070 and http://master:50030 in a browser.

Jobtracker

NameNode

Test a MapReduce program: create several local text files and upload them to HDFS:

[root@Master ~]# mkdir hadoop
[root@Master ~]# cd hadoop/
[root@Master hadoop]# mkdir input
[root@Master hadoop]# echo "hello hadoop" >>input/hadoop.txt
[root@Master hadoop]# echo "hello world" >>input/hello.txt
[root@Master hadoop]# echo "hi my name is hadoop" >>input/hi.txt
[root@Master hadoop]# hadoop fs -mkdir input
[root@Master hadoop]# hadoop fs -put input/* input

View files on HDFS in a browser

Browse HDFS files

Execute Wordcount

[root@Master hadoop]# hadoop jar /usr/share/hadoop/hadoop-examples-1.2.1.jar wordcount input/ output
14/05/12 01:30:00 INFO input.FileInputFormat: Total input paths to process : 3
14/05/12 01:30:00 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/05/12 01:30:00 WARN snappy.LoadSnappy: Snappy native library not loaded
14/05/12 01:30:01 INFO mapred.JobClient: Running job: job_201405120108_0002
14/05/12 01:30:02 INFO mapred.JobClient:  map 0% reduce 0%
14/05/12 01:30:20 INFO mapred.JobClient:  map 33% reduce 0%
14/05/12 01:30:28 INFO mapred.JobClient:  map 100% reduce 0%
14/05/12 01:30:35 INFO mapred.JobClient:  map 100% reduce 100%
14/05/12 01:30:37 INFO mapred.JobClient: Job complete: job_201405120108_0002
14/05/12 01:30:37 INFO mapred.JobClient: Counters: 29
14/05/12 01:30:37 INFO mapred.JobClient:   Job Counters
14/05/12 01:30:37 INFO mapred.JobClient:     Launched reduce tasks=1
14/05/12 01:30:37 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=49355
14/05/12 01:30:37 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/05/12 01:30:37 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/05/12 01:30:37 INFO mapred.JobClient:     Launched map tasks=3
14/05/12 01:30:37 INFO mapred.JobClient:     Data-local map tasks=3
14/05/12 01:30:37 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=14236
14/05/12 01:30:37 INFO mapred.JobClient:   File Output Format Counters
14/05/12 01:30:37 INFO mapred.JobClient:     Bytes Written=47
14/05/12 01:30:37 INFO mapred.JobClient:   FileSystemCounters
14/05/12 01:30:37 INFO mapred.JobClient:     FILE_BYTES_READ=106
14/05/12 01:30:37 INFO mapred.JobClient:     HDFS_BYTES_READ=371
14/05/12 01:30:37 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=218711
14/05/12 01:30:37 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=47
14/05/12 01:30:37 INFO mapred.JobClient:   File Input Format Counters
14/05/12 01:30:37 INFO mapred.JobClient:     Bytes Read=46
14/05/12 01:30:37 INFO mapred.JobClient:   Map-Reduce Framework
14/05/12 01:30:37 INFO mapred.JobClient:     Map output materialized bytes=118
14/05/12 01:30:37 INFO mapred.JobClient:     Map input records=3
14/05/12 01:30:37 INFO mapred.JobClient:     Reduce shuffle bytes=118
14/05/12 01:30:37 INFO mapred.JobClient:     Spilled Records=18
14/05/12 01:30:37 INFO mapred.JobClient:     Map output bytes=82
14/05/12 01:30:37 INFO mapred.JobClient:     Total committed heap usage (bytes)=617562112
14/05/12 01:30:37 INFO mapred.JobClient:     CPU time spent (ms)=6190
14/05/12 01:30:37 INFO mapred.JobClient:     Combine input records=9
14/05/12 01:30:37 INFO mapred.JobClient:     SPLIT_RAW_BYTES=325
14/05/12 01:30:37 INFO mapred.JobClient:     Reduce input records=9
14/05/12 01:30:37 INFO mapred.JobClient:     Reduce input groups=7
14/05/12 01:30:37 INFO mapred.JobClient:     Combine output records=9
14/05/12 01:30:37 INFO mapred.JobClient:     Physical memory (bytes) snapshot=553525248
14/05/12 01:30:37 INFO mapred.JobClient:     Reduce output records=7
14/05/12 01:30:37 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2894249984
14/05/12 01:30:37 INFO mapred.JobClient:     Map output records=9

View in a browser

Running task

Completed task

View results

[root@Master hadoop]# hadoop fs -cat output/*
hadoop  2
hello   2
hi      1
is      1
my      1
name    1
world   1
cat: File does not exist: /user/root/output/_logs

View results in a browser

Wordcount execution result

Problems encountered

Removing the UTF-8 BOM (M-oM-;M-?)

Open the file in Notepad++ and save it encoded as UTF-8 without BOM.
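
Alternatively (not what the author did), the three BOM bytes (EF BB BF) can be stripped on the Linux side, assuming GNU sed:

# remove a UTF-8 byte-order mark from the start of a file, if present
sed -i '1s/^\xEF\xBB\xBF//' ks.cfg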

rsync reports "no route to host" when connecting to the master

The key pull kept failing because rsync could not connect to the master. After adding debugging code, the cause turned out to be that the network must be restarted after the network information is set in the %post script. Although the hosts file contains the IP address of every host in the cluster, before the network is restarted each host is still using the address it obtained automatically from the PXE server. A timing problem is also involved: since the slaves need to reach the master, the master must have configured and restarted its network before the slaves can access it, so each slave has to wait until the master's configuration is complete. The script is set to wait at most 100 x 5 s, that is, 500 s.
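
A hedged sketch of the fix, restarting the network inside the %post script right after the static address and hostname are written (its exact placement in the author's script is an assumption):

# apply the static IP configured above before any cross-host step
# (such as the rsync key pull) depends on the new address
service network restart &>/dev/null && echo -e "[SUCC]: network restart ok!"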

[SUCC]: Stop iptables
[SUCC]: Chkconfig iptables off
/tmp
[SUCC]: resourcefile download ok!
[SUCC]: hadoop download ok!
[SUCC]: jdk download ok!
[SUCC]: hadoop_centos.sh download ok!
[SUCC]: xinetd down ok!
[SUCC]: rsync down ok!
[SUCC]: xinetd install ok!
[SUCC]: rsync install ok!
Stopping xinetd:                                           [FAILED]
Starting xinetd:                                           [  OK  ]
rsync: failed to connect to Master: No route to host (113)
rsync error: error in socket IO (code 10) at clientserver.c(124) [receiver=3.0.6]

In addition, the firewall may also cause rsync failure. You can directly disable the firewall in the ks file. [7]

firewall (optional): this option corresponds to the "Firewall Configuration" screen in the installer:

firewall --enabled|--disabled [--trust=<device>] [--port=<port:protocol>]

--enabled or --enable: reject incoming connections that are not in response to outbound requests, such as DNS replies or DHCP requests. If access to services running on this machine is needed, you can allow specific services through the firewall.
--disabled or --disable: do not configure any iptables rules.
--trust=<device>: listing a device here, such as eth0, allows all traffic coming through that device to pass the firewall. To list more than one device, use --trust eth0 --trust eth1. Do not use a comma-separated format such as --trust eth0, eth1.
--ssh, --telnet, --smtp, --http, --ftp: allow the specified service through the firewall.
--port=: specify ports to be allowed through the firewall in port:protocol format. For example, to allow IMAP access, specify imap:tcp; to allow UDP packets on port 1234, enter 1234:udp. To specify multiple ports, separate them with commas.
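
For example (illustrative, not taken from the author's ks file), to keep the firewall enabled while allowing ssh and the rsync daemon port (873/tcp) through, the kickstart command could be:

firewall --enabled --ssh --port=873:tcp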
     
When logging in via ssh, the system asks whether to add the host key

When logging in to a machine via ssh for the first time (strictly speaking, when the ~/.ssh/known_hosts file does not yet contain that machine's host key), ssh prompts you to confirm adding the machine's host key with a yes/no answer. As long as the machine's entry is not deleted from ~/.ssh/known_hosts it will not ask again, but the prompt becomes a problem when writing automated scripts.
A look at man ssh_config reveals a solution: create the file ~/.ssh/config and add one line:
StrictHostKeyChecking no
That is all. From then on, ssh automatically adds host keys to ~/.ssh/known_hosts without asking. The default value of this option is ask. If it is set to yes, host keys must be added to ~/.ssh/known_hosts manually, which is the strictest setting. [8]
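
In an automated %post script, the same setting can simply be written out; a sketch (the author's script may handle this differently):

# disable the interactive host-key prompt for root's ssh client
mkdir -p /root/.ssh && chmod 700 /root/.ssh
cat >> /root/.ssh/config <<EOF
StrictHostKeyChecking no
EOF
chmod 600 /root/.ssh/config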

References

[1]. ChinaUnix blog. http://blog.chinaunix.net/uid-17240700-id-2813881.html
[2]. ChinaUnix blog. http://blog.chinaunix.net/uid-17240700-id-2813881.html
[3]. ChinaUnix blog. http://blog.chinaunix.net/uid-17240700-id-2813881.html
[4]. zhixing Lisi. http://www.annhe.net/article-2672.html
[5]. CSDN blog. http://blog.csdn.net/zombee/article/details/6793672
[6]. Lu jiaxiao. Hadoop Version 2
[7]. ChinaUnix blog. http://blog.chinaunix.net/uid-17240700-id-2813881.html
[8]. 163 blog. http://blog.163.com/kartwall@126/blog/static/8942370200831485241268/

Complete appendix code

See github: https://github.com/annProg/paper/blob/master/code/init_pxeserver.sh

Additional reading

[1]. MORE-Kickstart-Tips-and-Tricks. http://www.redhat.com/promo/summit/2010/presentations/summit/decoding-the-code/wed/cshabazi-530-more/MORE-Kickstart-Tips-and-Tricks.pdf

This article is released under a CC license; please cite the source in the form of a link when reposting.
Link: http://www.annhe.net/article-2798.html
