Hadoop Cluster Installation: Installing Hadoop 2.5.2

1. The virtual machines in the cluster

192.168.137.2 node1
192.168.137.3 node2
192.168.137.4 node3
192.168.137.5 node4


2. Configure SSH password-free login

Run the following two lines on node1, node2, node3, and node4:

  ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
  cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Append node1's id_dsa.pub to the authorized_keys file on each of the other nodes:

scp ~/.ssh/id_dsa.pub root@node2:~
Then, on node2:
cat id_dsa.pub >> ~/.ssh/authorized_keys
Repeat the same two steps for node3 and node4.
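To confirm passwordless login works before moving on, an optional quick check from node1; each command should print the hostname without prompting for a password:

  ssh node2 hostname
  ssh node3 hostname
  ssh node4 hostname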



3. Download Hadoop. Download hadoop-aboutyun-linux64-2.5.2-.tar.gz to the /root directory. Note that the official site does not provide a 64-bit binary release, so you need to compile it yourself. I used a 64-bit build found online.
1) Unzip: tar -zxvf hadoop-aboutyun-linux64-2.5.2-.tar.gz
2) Create a symlink: ln -sf /root/hadoop-2.5.2 /home/hadoop-2.5.2
3) Modify hadoop-env.sh. Enter /home/hadoop-2.5.2/etc/hadoop/ and set in hadoop-env.sh:
export JAVA_HOME=/opt/java/jdk1.8.0_111

4) Modify the hdfs-site.xml file

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>node1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>node2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>node2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node2:8485;node3:8485;node4:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/jn/data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
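Once the configuration files have been distributed to all nodes (step 7 below), an optional sanity check of the HA settings can be run from /home/hadoop-2.5.2/bin/:

  ./hdfs getconf -confKey dfs.nameservices    # expect: mycluster
  ./hdfs getconf -namenodes                   # expect: node1 node2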

5) Configure core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop2</value>
  </property>
</configuration>

6) Modify slaves. Add the following to the slaves file:
node2
node3
node4


7) Copy Hadoop to the other nodes

scp hadoop-aboutyun-linux64-2.5.2-.tar.gz root@node2:~/
scp hadoop-aboutyun-linux64-2.5.2-.tar.gz root@node3:~/
scp hadoop-aboutyun-linux64-2.5.2-.tar.gz root@node4:~/
Unzip on each node and create the symlink as in steps 1) and 2) above (a scripted version is sketched after the scp commands below).

Enter /home/hadoop-2.5.2/etc/hadoop/
Copy all the configuration files in this directory to the other nodes:
scp ./* root@node2:/home/hadoop-2.5.2/etc/hadoop/
scp ./* root@node3:/home/hadoop-2.5.2/etc/hadoop/
scp ./* root@node4:/home/hadoop-2.5.2/etc/hadoop/
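The per-node copy, unpack, symlink, and config-distribution steps above can also be scripted; a minimal sketch run from node1, assuming the same /root layout on every node:

  for n in node2 node3 node4; do
    scp /root/hadoop-aboutyun-linux64-2.5.2-.tar.gz root@$n:~/
    ssh root@$n 'cd /root && tar -zxf hadoop-aboutyun-linux64-2.5.2-.tar.gz && ln -sf /root/hadoop-2.5.2 /home/hadoop-2.5.2'
    scp /home/hadoop-2.5.2/etc/hadoop/* root@$n:/home/hadoop-2.5.2/etc/hadoop/
  done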


4. Download ZooKeeper
1) Download ZooKeeper and extract it to /root/: tar -zxvf zookeeper-3.4.6.tar.gz

2) Create a symlink: ln -sf /root/zookeeper-3.4.6 /home/zookeeper
3) Configure zoo.cfg. In the ZooKeeper conf directory, copy the sample file to create zoo.cfg:
cp zoo_sample.cfg zoo.cfg

Modify zoo.cfg: set dataDir=/opt/zookeeper, and append at the end:
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

4) myid
Create the directory:
mkdir /opt/zookeeper
In this directory, create a file named myid:
vi myid
On node1, write 1 in it.

Copy this directory to node2 and node3:
scp -r /opt/zookeeper root@node2:/opt/
scp -r /opt/zookeeper root@node3:/opt/

In myid, write 2 on node2 and 3 on node3.
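Equivalently, the myid files can be written without an editor; run the matching line on each node:

  echo 1 > /opt/zookeeper/myid   # on node1
  echo 2 > /opt/zookeeper/myid   # on node2
  echo 3 > /opt/zookeeper/myid   # on node3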

5) Configure the ZooKeeper environment variable
In /etc/profile, add:
export PATH=$PATH:/home/zookeeper/bin


Save, then run source /etc/profile.


Copy the configuration file to the other two nodes:
scp /etc/profile root@node2:/etc/
scp /etc/profile root@node3:/etc/

Run source /etc/profile on each of them as well.

6) Start
Shut down the firewall first:
service iptables stop
Then start ZooKeeper on node1, node2, and node3:
zkServer.sh start
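To verify the ensemble, an optional check: zkServer.sh status on each node should report one leader and two followers.

  zkServer.sh status
  # expected on one node:    Mode: leader
  # expected on the others:  Mode: follower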


5. Deployment
1) Start the JournalNodes on node2, node3, and node4.
Enter /home/hadoop-2.5.2/sbin/ and run:
./hadoop-daemon.sh start journalnode
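To verify, an optional check with the JDK's jps tool on node2, node3, and node4; each should list a JournalNode process:

  jps
  # expected output includes a line such as:
  # 2345 JournalNode    (the PID will differ)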

2) Format. Enter /home/hadoop-2.5.2/bin/ and execute:
./hdfs namenode -format
If this fails with an error like:
No Route to Host from node1/192.168.137.2 to node2:8485 failed on socket timeout exception:
java.net.NoRouteToHostException: No route to host; For more detail
the reason is that the firewall is still running; stop it with systemctl stop firewalld.service. You can inspect the logs under /home/hadoop-2.5.2/logs/ with:
tail -n50 hadoop-root-journalnode-node1.log
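Note that systemctl stop only lasts until the next reboot; on systemd-based systems (e.g. CentOS 7) the firewall can also be kept off permanently:

  systemctl stop firewalld.service      # stop it now
  systemctl disable firewalld.service   # keep it off after reboots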


3) NameNode
Start the NameNode on node1.
Enter the directory /home/hadoop-2.5.2/sbin/ and execute:
./hadoop-daemon.sh start namenode


Copy the metadata to node2.
On node2, enter the directory /home/hadoop-2.5.2/bin/ and execute:
./hdfs namenode -bootstrapStandby


4) Stop all components.
On node1, enter the directory /home/hadoop-2.5.2/sbin/ and execute:
./stop-dfs.sh
5) Format ZK
In /home/hadoop-2.5.2/bin/, execute:
./hdfs zkfc -formatZK

6) Start everything
In /home/hadoop-2.5.2/sbin/, execute ./start-dfs.sh to start all components.
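Once start-dfs.sh finishes, the HA state of the two NameNodes can be checked from /home/hadoop-2.5.2/bin/; one should report active and the other standby:

  ./hdfs haadmin -getServiceState nn1
  ./hdfs haadmin -getServiceState nn2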


7) On the machine you browse from, edit its hosts file and add:


192.168.137.2 node1
192.168.137.3 node2
192.168.137.4 node3
192.168.137.5 node4


Then access http://node1:50070/ or http://node2:50070/ in a browser.
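As an optional final check of automatic failover, stop the currently active NameNode (assumed here to be nn1 on node1) and confirm that the standby takes over:

  ./hadoop-daemon.sh stop namenode       # run in /home/hadoop-2.5.2/sbin/ on node1
  ./hdfs haadmin -getServiceState nn2    # run in /home/hadoop-2.5.2/bin/; should now report: active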


