A Lazy Person's Notes on Building a Hadoop 2.7.1 Cluster


  • Summary
    • Besides configuring hosts and passwordless SSH, first install everything on a single machine.
    • Then copy that VM and configure hosts and passwordless SSH on each copy.
    • The JDK installed at first was 32-bit; Hadoop's native libraries could not be loaded, and a lot of time was wasted trying to compile them from source. The JDK must be 64-bit.
    • Configure passwordless SSH.
    • Nothing else, except that the owner and group of the files are not necessarily "hadoop"; set them according to your own situation:
      • sudo chown -R hadoop /opt
      • sudo chgrp -R hadoop /opt
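To confirm that the native libraries actually load once a 64-bit JDK is in place, Hadoop ships a small diagnostic. This is a sketch; the /opt/hadoop install path is an assumption from this guide's layout:

```shell
# Reports whether libhadoop and the compression codecs were found.
# With a mismatched (32-bit) JDK, the entries typically show "false".
/opt/hadoop/bin/hadoop checknative -a
```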
  • Prepare files
  • Virtual machine installation and configuration

    We need three VMs. Install one VM first, download Hadoop, configure the JDK, and set the environment variables; then copy that VM.
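The environment variables can be set in ~/.bashrc on the first VM before copying it, so all three machines inherit them. The install paths below (/opt/jdk, /opt/hadoop) are assumptions; adjust them to wherever you unpacked the archives:

```shell
# Assumed install locations -- change to match your own layout.
export JAVA_HOME=/opt/jdk
export HADOOP_HOME=/opt/hadoop
# Put the JDK and the Hadoop bin/sbin scripts on the PATH.
export PATH="$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
```

After editing, run `source ~/.bashrc` and check with `echo $HADOOP_HOME`.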

 

  • By now the first virtual machine is almost fully configured. Copy it twice (do a full copy, and regenerate the MAC address), giving three virtual machines: master, slave1, and slave2.
    • Change the hostnames of slave1 and slave2 to slave1-hadoop and slave2-hadoop.
    • Add the following entries to the hosts file on all three machines:
      • 192.168.56.101 master
        192.168.56.102 slave1
        192.168.56.103 slave2
    • These exact IP addresses are not required; check the actual IP of each VM.
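The three entries can be appended to /etc/hosts in one step; this is a sketch using the example IPs above, to be run on each of the three machines:

```shell
# Append the cluster hostnames to /etc/hosts (requires sudo).
sudo tee -a /etc/hosts >/dev/null <<'EOF'
192.168.56.101 master
192.168.56.102 slave1
192.168.56.103 slave2
EOF
```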
  • Configure the master so it can log on to itself and the other two machines without a password.
    • Operate on the master:
    • ssh-keygen -t rsa -P '' (accept the defaults at every prompt)
    • ssh-copy-id hadoop@master
    • ssh-copy-id hadoop@slave1
    • ssh-copy-id hadoop@slave2
    • Test with ssh slave1; it should connect to slave1 directly, without asking for a password.
      hadoop@master-hadoop ~ $ ssh-keygen -t rsa -P ''
      Generating public/private rsa key pair.
      Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
      Created directory '/home/hadoop/.ssh'.
      Your identification has been saved in /home/hadoop/.ssh/id_rsa.
      Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
      The key fingerprint is:
      5c:c9:4c:0c:b6:28:eb:21:b9:6f:db:6e:3f:ee:0d:9a hadoop@master-hadoop
      The key's randomart image is:
      +--[ RSA 2048]----+
      |        oo.      |
      |       o =..     |
      |    . . . =      |
      |   . o . .       |
      |  o o   S        |
      |   + .           |
      |  . .   .        |
      |   ....o.o       |
      |   .o+E++..      |
      +-----------------+
      hadoop@master-hadoop ~ $ ssh-copy-id hadoop@slave1
      The authenticity of host 'slave1 (192.168.56.102)' can't be established.
      ECDSA key fingerprint is d8:fc:32:ed:a7:2c:e1:c7:d7:15:89:b9:f6:97:fb:c3.
      Are you sure you want to continue connecting (yes/no)? yes
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
      hadoop@slave1's password:
      Number of key(s) added: 1
      Now try logging into the machine, with:   "ssh 'hadoop@slave1'"
      and check to make sure that only the key(s) you wanted were added.

       

       
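A quick way to verify all three password-free logins at once (assuming the hadoop user and hostnames above): with BatchMode, ssh fails instead of prompting, so any missing key shows up as an error rather than a password prompt.

```shell
# Each line should print the remote hostname with no password prompt.
for host in master slave1 slave2; do
    ssh -o BatchMode=yes "hadoop@$host" hostname
done
```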

  • Format the namenode
    • ./bin/hdfs namenode -format
  • Start Hadoop to verify
    • ./sbin/start-all.sh
    • A normal log looks like this:
      hadoop@master-hadoop /opt/hadoop/sbin $ ./start-all.sh
      This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
      Starting namenodes on [master]
      master: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-master-hadoop.out
      slave1: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-slave1-hadoop.out
      slave2: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-slave2-hadoop.out
      Starting secondary namenodes [master]
      master: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadoop-secondarynamenode-master-hadoop.out
      starting yarn daemons
      starting resourcemanager, logging to /opt/hadoop/logs/yarn-hadoop-resourcemanager-master-hadoop.out
      slave1: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-slave1-hadoop.out
      slave2: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-slave2-hadoop.out

       

    • Check jps on all three nodes.
      hadoop@master-hadoop /opt/hadoop/sbin $ jps
      5858 ResourceManager
      5706 SecondaryNameNode
      5514 NameNode
      6108 Jps
      hadoop@slave2-hadoop ~ $ jps
      3796 Jps
      3621 NodeManager
      3510 DataNode
      hadoop@slave1-hadoop ~ $ jps
      3786 Jps
      3646 NodeManager
      3535 DataNode
  • Everything works; the installation is complete.
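The jps check above can be scripted. This is a minimal sketch; the check_daemons helper is ours, not part of Hadoop, and the sample output is copied from the master log above (PIDs will differ on your machines):

```shell
# Succeeds only if every required daemon name appears in the jps output.
check_daemons() {
    out="$1"; shift
    for d in "$@"; do
        echo "$out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
    done
    echo "all daemons present"
}

# Sample jps output from the master node above.
master_jps='5858 ResourceManager
5706 SecondaryNameNode
5514 NameNode
6108 Jps'

check_daemons "$master_jps" NameNode SecondaryNameNode ResourceManager
```

On a slave, require NodeManager and DataNode instead, e.g. `check_daemons "$(jps)" NodeManager DataNode`.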
