Hadoop download, installation, and configuration on the Linux platform


The Linux version I am using here is CentOS 6.4 (centos-6.4-i386-bin-dvd1.iso, downloadable from http://mirrors.aliyun.com/centos/6.8/isos/i386/). The virtual machine uses host-only networking, so the virtual network card on Windows must be put on the same network segment as the network card on Linux.

Note: be sure to set the IP of the Windows VMnet1 adapter to the same network segment as your virtual machine, but not to the same IP.

I. Preliminary work

1. Modify the Linux IP. You can change it by hand through the GUI, or edit the interface configuration directly:
      vim /etc/sysconfig/network-scripts/ifcfg-eth0

2. Change the hostname (note that the Ubuntu version does this differently):
      vim /etc/sysconfig/network
   Change the existing name to itcast01.

3. Set up the hostname-to-IP mapping:
      vim /etc/hosts
      192.168.8.88      itcast01

4. Turn off the firewall.
   View the firewall status:
      service iptables status
   Stop it:
      service iptables stop
   View its boot status:
      chkconfig iptables --list
   Disable it at boot:
      chkconfig iptables off

II. Installing the Java JDK

The JDK used here is jdk-7u60-linux-i586.tar.gz. I use a VMware shared folder (this requires installing the VMware Tools), so that files under Windows can be shared to the Linux platform; the share appears under /mnt/hdfs/.
      mkdir /usr/java
      tar -zxvf jdk-7u60-linux-i586.tar.gz -C /usr/java

Add Java to the environment variables:
      vim /etc/profile
Add the following at the end of the file:
      export JAVA_HOME=/usr/java/jdk1.7.0_60
      export PATH=$PATH:$JAVA_HOME/bin
Refresh the configuration:
      source /etc/profile

III. Installing Hadoop

Download Hadoop from https://archive.apache.org/dist/hadoop/core/hadoop-2.2.0/; the file downloaded here is hadoop-2.2.0.tar.gz.

1. Upload the Hadoop package. I use FileZilla to upload it to the root directory under Linux.

2. Extract the Hadoop package. First create an /itcast directory at the root directory:
      mkdir /itcast
      tar -zxvf hadoop-2.2.0.tar.gz -C /itcast
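Before moving on to configuration, it is worth sanity-checking both installs. A minimal check (the expected outputs below assume the exact paths and JDK version used above):

      # Confirm the shell picks up the freshly installed JDK
      java -version                # expect: java version "1.7.0_60"
      echo $JAVA_HOME              # expect: /usr/java/jdk1.7.0_60
      # Confirm the Hadoop tree unpacked where we expect it
      ls /itcast/hadoop-2.2.0      # expect bin/ etc/ sbin/ share/ among others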
3. Configure Hadoop for pseudo-distributed operation (five files under etc/hadoop/ need to be modified).

First: hadoop-env.sh
      vim hadoop-env.sh
      export JAVA_HOME=/usr/java/jdk1.7.0_60

Second: core-site.xml
      <configuration>
          <!-- the address of the boss (NameNode) used by HDFS -->
          <property>
              <name>fs.defaultFS</name>
              <value>hdfs://itcast01:9000</value>
          </property>
          <!-- the directory where Hadoop generates files at runtime -->
          <property>
              <name>hadoop.tmp.dir</name>
              <value>/itcast/hadoop-2.2.0/tmp</value>
          </property>
      </configuration>

Third: hdfs-site.xml
      <configuration>
          <!-- the number of copies of its data that HDFS keeps -->
          <property>
              <name>dfs.replication</name>
              <value>1</value>
          </property>
      </configuration>

Fourth: mapred-site.xml (this file needs to be created by copying the template: cp mapred-site.xml.template mapred-site.xml)
      <configuration>
          <!-- tell Hadoop that MapReduce runs on YARN -->
          <property>
              <name>mapreduce.framework.name</name>
              <value>yarn</value>
          </property>
      </configuration>

Fifth: yarn-site.xml
      <configuration>
          <!-- the way the NodeManager gets data is shuffle -->
          <property>
              <name>yarn.nodemanager.aux-services</name>
              <value>mapreduce_shuffle</value>
          </property>
          <!-- the address of YARN's boss (ResourceManager) -->
          <property>
              <name>yarn.resourcemanager.hostname</name>
              <value>itcast01</value>
          </property>
      </configuration>

4. Add Hadoop to the environment variables:
      vim /etc/profile
      export JAVA_HOME=/usr/java/jdk1.7.0_60
      export HADOOP_HOME=/itcast/hadoop-2.2.0
      export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
   Refresh the configuration:
      source /etc/profile

5. Initialize HDFS (format the file system; this step is similar to how a newly bought USB flash drive needs to be formatted):
      hadoop namenode -format      (outdated)
      hdfs namenode -format

6. Launch HDFS and YARN:
      ./start-all.sh      (obsolete: "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh")
      starting namenodes on [itcast01]
   One small issue: you will be asked for the password several times (see the passwordless-SSH sketch after this section).

   Next, use jps to view the running processes (jps lists the current Java processes on a Linux/Unix platform). If you see the following processes, the test passes:
      4334 NodeManager
      3720 NameNode
      4060 ResourceManager
      3806 DataNode
      4414 Jps

   In addition, under the Windows platform we can check in a browser whether the build succeeded:
      http://192.168.8.88:50070      (HDFS management interface)
      http://192.168.8.88:8088       (YARN management interface)

   To reach these by hostname, add the Linux hostname-to-IP mapping in this file:
      C:\Windows\System32\drivers\etc\hosts
   At the end, add:
      192.168.8.88     itcast01
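The repeated password prompts in step 6 happen because the start scripts use SSH to launch each daemon, even on a single machine. A minimal sketch of setting up passwordless SSH for this pseudo-distributed node (standard OpenSSH commands; the empty passphrase is an assumption that suits a throwaway test VM, not production):

      # Generate an RSA key pair with an empty passphrase
      ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
      # Authorize that key for logins back into this same machine
      cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
      chmod 600 ~/.ssh/authorized_keys
      # Both of these should now log in without prompting
      ssh localhost exit
      ssh itcast01 exit

After this, start-dfs.sh and start-yarn.sh should run straight through without asking for a password.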
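Once the two web interfaces come up, a quick way to confirm the cluster actually stores data is to round-trip a file through HDFS. A minimal smoke test (the file and directory names here are arbitrary examples; hdfs dfs is the standard client in Hadoop 2.x):

      # Write a small local file into HDFS, then read it back
      echo "hello hadoop" > /root/test.txt
      hdfs dfs -mkdir /test
      hdfs dfs -put /root/test.txt /test/
      hdfs dfs -ls /test               # the uploaded file should be listed
      hdfs dfs -cat /test/test.txt     # should print: hello hadoop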
