Cluster Hadoop Ubuntu Edition


Build the Ubuntu Hadoop cluster

Tools used: VMware, hadoop-2.7.2.tar.gz, jdk-8u65-linux-x64.tar.gz, ubuntu-16.04-desktop-amd64.iso

1. Install ubuntu-16.04-desktop-amd64.iso in VMware

Click "Create Virtual Machine" è select "typical (recommended installation)" È click "Next"

È click Finish

Modify /etc/hostname

$>sudo vim /etc/hostname

Save and exit.
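For example, on the master node the file would contain just the machine's name (s100 here, matching the hosts table below):

s100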

Modify /etc/hosts

127.0.0.1        localhost
192.168.1.100    s100
192.168.1.101    s101
192.168.1.102    s102
192.168.1.103    s103
192.168.1.104    s104
192.168.1.105    s105

Configuring the NAT Network

View the IP address and gateway on the Windows 10 host (e.g. with ipconfig)

Configure /etc/network/interfaces

# interfaces(5) file used by ifup(8) and ifdown(8)

# The loopback network interface
auto lo
iface lo inet loopback

#iface eth0 inet static
iface eth0 inet static
address 192.168.1.105
netmask 255.255.255.0
gateway 192.168.1.2
dns-nameservers 192.168.1.2
auto eth0

The network can also be configured through the graphical interface.

After configuring, ping www.baidu.com to check that the network is working.

Once the network is up, also confirm that the guest can ping the Windows host before proceeding with the configuration below.
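A quick check from the guest (192.168.1.1 as the Windows host's address on the VMware NAT network is an assumption; use whatever address ipconfig reported):

$>ping -c 4 www.baidu.com
$>ping -c 4 192.168.1.1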

Modify the Windows host's C:\Windows\System32\drivers\etc\hosts file

File contents

127.0.0.1        localhost
192.168.1.100    s100
192.168.1.101    s101
192.168.1.102    s102
192.168.1.103    s103
192.168.1.104    s104
192.168.1.105    s105

Installing the 163.com Ubuntu mirror sources (the lists below target 14.04 "trusty", as in the original)

$>cd /etc/apt/

$>gedit sources.list

Remember to back up the file before editing it.
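For example (the backup file name is arbitrary):

$>sudo cp sources.list sources.list.bak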

deb http://mirrors.163.com/ubuntu/ trusty main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ trusty-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ trusty-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ trusty-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ trusty-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ trusty-backports main restricted universe multiverse

Update

$>sudo apt-get update

Create a new soft folder under the root directory:

$>sudo mkdir /soft

Since the folder is created by root, change its ownership to the enmoedu user:

$>sudo chown enmoedu:enmoedu /soft

Install VMware Tools (for shared folders)

Copy the VMware Tools archive to the desktop, right-click it, and choose "Extract Here".

Switch to the extracted directory under the enmoedu user's desktop: cd ~/Desktop/vmware-tools-distrib

Execute the installer: $>sudo ./vmware-install.pl

Press Enter at each prompt to accept the defaults.

Installation Complete

Copy hadoop-2.7.2.tar.gz and jdk-8u65-linux-x64.tar.gz to the enmoedu user's ~/Downloads directory

$>sudo cp hadoop-2.7.2.tar.gz jdk-8u65-linux-x64.tar.gz ~/Downloads/

Extract hadoop-2.7.2.tar.gz and jdk-8u65-linux-x64.tar.gz in the current directory:

$>tar -zxvf hadoop-2.7.2.tar.gz

$>tar -zxvf jdk-8u65-linux-x64.tar.gz

$>cp -r hadoop-2.7.2 /soft

$>cp -r jdk1.8.0_65 /soft

Create symbolic links (run inside /soft)

$>ln -s hadoop-2.7.2 hadoop

$>ln -s jdk1.8.0_65 jdk

$>ls -ll

Configuring Environment variables

$>sudo vim /etc/environment

JAVA_HOME=/soft/jdk
HADOOP_HOME=/soft/hadoop
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/soft/jdk/bin:/soft/hadoop/bin:/soft/hadoop/sbin"

Make environment variables effective

$>source /etc/environment

Verify that the installation is successful

$>java -version

$>hadoop version

Edit the configuration files under /soft/hadoop/etc/hadoop/

[core-site.xml]

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://s100/</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/enmoedu/hadoop</value>
    </property>
</configuration>

[hdfs-site.xml]

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>s104:50090</value>
        <description>
            The secondary namenode HTTP server address and port.
        </description>
    </property>
</configuration>

[mapred-site.xml]

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
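Note: the stock Hadoop 2.7.2 distribution ships only mapred-site.xml.template, so this file usually has to be created first (a standard step, not shown in the original):

$>cp mapred-site.xml.template mapred-site.xml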

[yarn-site.xml]

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>s100</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Configure passwordless SSH login

Installing SSH

$>sudo apt-get install ssh

Generate a key pair

Run this in the enmoedu user's home directory:

$>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

Append the public key to the authorized_keys file

$>cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

After the localhost test succeeds, copy the master node's key into each slave's authorized_keys file (see the sketch after the tests below).

Do the same for the root user if desired.

$>ssh localhost

Test that the login succeeds from the master node.
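A minimal sketch of that key distribution, assuming each slave already runs SSH with the enmoedu account (ssh-copy-id is standard OpenSSH):

$>ssh-copy-id enmoedu@s101
$>ssh-copy-id enmoedu@s102
$>ssh-copy-id enmoedu@s103
$>ssh-copy-id enmoedu@s105

$>ssh s101    # should now log in without a password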

Modify the slaves file

[/soft/hadoop/etc/hadoop/slaves]

s101
s102
s103
s105

The remaining machines can be produced by cloning this VM and then modifying each clone's hostname and network configuration (see the sketch below).
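Per clone, only two files need to change; for example (names and addresses per the hosts table above):

$>sudo vim /etc/hostname              # change the name, e.g. to s101
$>sudo vim /etc/network/interfaces    # change the address line, e.g. to 192.168.1.101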

After the cluster setup is complete:

Format the HDFS file system

$>hadoop namenode -format

Start all processes

$>start-all.sh

Final Result:
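One way to confirm the result is jps (part of the JDK). Given the configuration above, you would expect:

$>jps

s100: NameNode, ResourceManager
s101/s102/s103/s105: DataNode, NodeManager
s104: SecondaryNameNode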

Custom script xsync (distributes files across the cluster)

[/usr/local/bin]

The script loops over all nodes and copies the given file to the same directory on each.

[/usr/local/bin/xsync]

#!/bin/bash
# xsync: copy a file (or directory) to the same absolute path on nodes s101..s105
pcount=$#
if ((pcount<1)); then
    echo no args;
    exit;
fi
p1=$1
fname=`basename $p1`
#echo fname=$fname
pdir=`cd -P $(dirname $p1); pwd`
#echo pdir=$pdir
cuser=`whoami`
for ((host=101; host<106; host=host+1)); do
    echo ------------ s$host ----------------
    rsync -rvl $pdir/$fname $cuser@s$host:$pdir
done
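Remember to make the script executable, assuming it was saved as root under /usr/local/bin:

$>sudo chmod a+x /usr/local/bin/xsync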

Test

$>xsync hello.txt

Custom script xcall (executes the same command on all hosts)

[/usr/local/bin/xcall]

#!/bin/bash
# xcall: run the given command locally and on nodes s101..s105
pcount=$#
if ((pcount<1)); then
    echo no args;
    exit;
fi
echo ----------- localhost ----------------
$@
for ((host=101; host<106; host=host+1)); do
    echo ------------ s$host -------------
    ssh s$host $@
done
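As with xsync, make the script executable:

$>sudo chmod a+x /usr/local/bin/xcall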

Test

$>xcall rm -rf hello.txt

After the cluster is built, test it by running the following commands

$>touch a.txt
$>gedit a.txt
$>hadoop fs -mkdir -p /user/enmoedu/data
$>hadoop fs -put a.txt /user/enmoedu/data
$>hadoop fs -lsr /

You can also view the files in a browser (see below).
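Hadoop 2.x's default web UI ports are 50070 for the NameNode and 8088 for the YARN ResourceManager (assuming the defaults were not changed):

http://s100:50070
http://s100:8088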
