Ubuntu 16.04 Hadoop fully distributed build


Virtual machine setup

    • 1. Set the virtual machine NIC to NAT mode
    • 2. It is best to prepare a few virtual machines; give each a static IP in /etc/network/interfaces and set its hostname in /etc/hostname. Here there are three Ubuntu hosts: s101, s102 and s103 (a static-IP sketch follows this list)
    • 3. Install SSH
      • On the first host, s101, create a public/private key pair
        • ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
        • cd ~/.ssh
        • cp id_rsa.pub authorized_keys (creates the key library)
        • Upload id_rsa.pub to the other hosts, into their ~/.ssh directory
          1. On the receiving host: nc -l 8888 > ~/.ssh/authorized_keys
          2. On the client: nc s102 8888 < id_rsa.pub
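A minimal static-IP sketch for /etc/network/interfaces on Ubuntu 16.04 (the interface name ens33 and the 192.168.56.x addresses are assumptions; substitute the values of your own NAT subnet):

# /etc/network/interfaces -- example for s101
auto ens33
iface ens33 inet static
    address 192.168.56.101      # use .102 for s102, .103 for s103
    netmask 255.255.255.0
    gateway 192.168.56.2
    dns-nameservers 192.168.56.2

Each host also needs to resolve the others by name, so map s101, s102 and s103 to their IPs in /etc/hosts on every machine.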

Start installing the JDK and Hadoop

    1. Install VMware Tools to make it easy to drag files from Windows 10 into Ubuntu
    2. Create the directory /soft
    3. Change its ownership (sudo chown ubuntu:ubuntu /soft) so that transferred files have the right permissions
    4. Put the archives into /soft (cp/mv src dst from the desktop)
      • tar -zxvf the JDK and Hadoop archives; extraction creates the directories automatically
      • Configure the installation environment (/etc/environment; a sketch follows this list)
        1. Add JAVA_HOME=/soft/jdk-... (the JDK directory)
        2. Add HADOOP_HOME=/soft/hadoop (the Hadoop directory)
        3. Add /soft/jdk-.../bin:/soft/hadoop/bin:/soft/hadoop/sbin to PATH
        4. java -version printing a version number means the JDK is set up
        5. hadoop version printing a version number means Hadoop is set up
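As a sketch of the result (the exact JDK directory name depends on the version you downloaded, so jdk1.8.0_131 below is an assumption), /etc/environment ends up looking something like this. Note that /etc/environment is not a shell script, so write each path out in full rather than referencing $JAVA_HOME:

JAVA_HOME="/soft/jdk1.8.0_131"
HADOOP_HOME="/soft/hadoop"
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/soft/jdk1.8.0_131/bin:/soft/hadoop/bin:/soft/hadoop/sbin"

Log out and back in for the new variables to take effect.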

Start configuring HDFS/YARN: four files under $HADOOP_HOME/etc/hadoop: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml

    1. core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://s101:9000</value>
  </property>
</configuration>

2. hdfs-site.xml

<configuration>
  <!-- configurations for NameNode: -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>s101:50090</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>s101:50070</value>
    <description>The address and the base port where the DFS NameNode web UI will listen on. If the port is 0 then the server will start on a free port.</description>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/data/hdfs/checkpoint</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>file:/data/hdfs/edits</value>
  </property>
</configuration>

3. mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

4. yarn-site.xml

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>s101</value>
  </property>
</configuration>

Half the battle is won at this point.......

Create the folders

mkdir -p /data/hdfs/tmp
mkdir -p /data/hdfs/var
mkdir -p /data/hdfs/logs
mkdir -p /data/hdfs/dfs
mkdir -p /data/hdfs/data
mkdir -p /data/hdfs/name
mkdir -p /data/hdfs/checkpoint
mkdir -p /data/hdfs/edits

Remember to fix the directory permissions

    • sudo chown -R ubuntu:ubuntu /data

Next, transfer the /soft folder to the other hosts.

Create an xsync executable file

  1. sudo touch xsync
  2. sudo chmod 777 xsync (turn it into an executable file)
  3. sudo nano xsync
  4. The script:

#!/bin/bash
pcount=$#
if ((pcount < 1)); then
    echo no args;
    exit;
fi
p1=$1
fname=`basename $p1`
pdir=`cd -P $(dirname $p1); pwd`
cuser=`whoami`
for ((host=102; host<=103; host=host+1)); do
    echo --------s$host--------
    rsync -rvl $pdir/$fname $cuser@s$host:$pdir
done

  5. xsync /soft --------> sends the folder to the other hosts
  6. xsync /data

Create xcall to send commands to the other hosts

#!/bin/bash
pcount=$#
if ((pcount < 1)); then
    echo no args;
    exit;
fi
echo --------localhost--------
$@
for ((host=102; host<=103; host=host+1)); do
    echo --------s$host--------
    ssh s$host $@
done
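For example, xcall jps runs jps on the local host and then, over ssh, on s102 and s103; that is how the processes are checked after startup below.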

Don't worry, it's almost over.

You also need to configure the workers file

    • List the hostnames that should run as DataNodes, one per line (see the example below)
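For example, assuming s102 and s103 act as the DataNodes (consistent with the replication factor of 2 configured above), the workers file would contain just:

s102
s103

(On Hadoop 2.x this file is named slaves rather than workers.)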

Points to note

    • First format the NameNode: hadoop namenode -format
    • Then start everything with start-all.sh
    • Check the processes with xcall jps (a sketch of the expected output follows this list)
    • Open the NameNode web UI at http://s101:50070
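If everything came up cleanly, xcall jps should print something like the sketch below (process IDs omitted; this assumes s102 and s103 are the workers, as above):

--------localhost--------
Jps
NameNode
SecondaryNameNode
ResourceManager
--------s102--------
Jps
DataNode
NodeManager
--------s103--------
Jps
DataNode
NodeManager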

    • Aren't you moved to tears? Success!!!

There are plenty of problems to hit along the way

1. rsync: insufficient permissions. Delete the target folder or change its ownership with chown.

2. Learn to read the logs (by default under $HADOOP_HOME/logs).

