Hadoop 2.2 and HBase 0.94.18 cluster installation



Here we install a three-node cluster. Cluster IP addresses: 192.168.157.132-134

Master: 192.168.157.132

Slaves: 192.168.157.133-134

HBase depends on ZooKeeper, which here is installed separately rather than managed by HBase (HBASE_MANAGES_ZK is set to false below).


1. Configure ssh password-free Login

1> Install the ssh client on all three machines (132, 133, 134):

 yum install openssh-clients

2> Configure password-free login on all three machines (132, 133, 134):

 [root@localhost ~]# ssh-keygen -t rsa        (press Enter at every prompt)
 [root@localhost ~]# cd /root/.ssh/
 [root@localhost .ssh]# cat id_rsa.pub >> authorized_keys
 [root@localhost .ssh]# scp authorized_keys root@192.168.157.132:/root/.ssh
 [root@localhost .ssh]# scp authorized_keys root@192.168.157.133:/root/.ssh
 [root@localhost .ssh]# scp authorized_keys root@192.168.157.134:/root/.ssh

(For mutual password-free login, the public keys of all three machines must end up in every authorized_keys file, so append rather than overwrite when collecting them.)


Set permissions on all three machines:

      [root@localhost ~]# chmod 700 ~/.ssh/
      [root@localhost ~]# chmod 600 ~/.ssh/authorized_keys
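
To confirm password-free login works, a quick check from each machine (no password prompt should appear):

      ssh root@192.168.157.133 hostname
      ssh root@192.168.157.134 hostname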



2. Modify the machine name

1> Modify by command (takes effect immediately but does not survive a reboot):

     [root@localhost .ssh]# hostname dev-157-132
     [root@localhost .ssh]# hostname
     dev-157-132
2> Modify the configuration file (takes effect permanently, after a reboot):
  [root@localhost .ssh]# vim /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    HOSTNAME=dev-157-132
    GATEWAY=192.168.248.254

3> Apply the two steps above on all three machines, using the hostnames dev-157-132, dev-157-133, and dev-157-134.
4> Modify /etc/hosts on all three machines:
 [root@localhost .ssh]# vim /etc/hosts
       127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
       ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
       192.168.157.132 dev-157-132
       192.168.157.133 dev-157-133
       192.168.157.134 dev-157-134
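
A quick way to confirm the new names resolve (a simple check, not part of the original steps):

      ping -c 1 dev-157-133
      ping -c 1 dev-157-134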


3. Disable the firewall (all three machines)

   service iptables stop
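
Note that service iptables stop only disables the firewall until the next reboot. On the CentOS/RHEL-style system assumed here (yum and /etc/sysconfig are used above), it can also be kept off permanently:

   chkconfig iptables off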


4. Install JDK...

5. Install hadoop 2.2.0

 [root@dev-157-132 servers]# tar -xf hadoop-2.2.0.tar.gz
 [root@dev-157-132 servers]# cd hadoop-2.2.0/etc/hadoop
1> modify hadoop-env.sh
 [root@dev-157-132 hadoop]# vim hadoop-env.sh
 export JAVA_HOME=/export/servers/jdk1.6.0_25    (use your own JAVA_HOME; leave the other settings at their defaults)

2> modify core-site.xml

 [root@dev-157-132 hadoop]# vim core-site.xml
 <configuration>
   <property>
     <name>fs.defaultFS</name>
     <value>hdfs://dev-157-132:9100</value>
   </property>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/export/servers/hadoop-2.2.0/data/hadoop_tmp</value>
   </property>
   <property>
     <name>io.native.lib.available</name>
     <value>true</value>
   </property>
 </configuration>
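
fs.defaultFS is the NameNode RPC address that HDFS clients (and later hbase.rootdir) will point at. Once the environment variables from step 8> are set, the effective value can be double-checked with the standard getconf utility:

      hdfs getconf -confKey fs.defaultFS
      (expected output: hdfs://dev-157-132:9100)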

3> modify mapred-site.xml

 <configuration>
   <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
   </property>
 </configuration>

4> modify yarn-site.xml

 [root@dev-157-132 hadoop]# vim yarn-site.xml
 <property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>dev-157-132:8031</value>
 </property>
 <property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>dev-157-132:8030</value>
 </property>
 <property>
   <name>yarn.resourcemanager.scheduler.class</name>
   <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
 </property>
 <property>
   <name>yarn.resourcemanager.address</name>
   <value>dev-157-132:8032</value>
   <description>the host is the hostname of the ResourceManager and the port is the port on
     which the clients can talk to the Resource Manager.</description>
 </property>
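
Note: for MapReduce jobs to actually run on YARN, Hadoop 2.2 also needs the shuffle auxiliary service on the NodeManagers. The listing above omits it; the commonly required addition to yarn-site.xml is:

 <property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
 </property>
 <property>
   <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
 </property>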
5> modify hdfs-site.xml

 [root@dev-157-132 hadoop]# vim hdfs-site.xml
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/export/servers/hadoop-2.2.0/data/nn</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/export/servers/hadoop-2.2.0/data/dfs</value>
 </property>
 <property>
   <name>dfs.permissions</name>
   <value>false</value>
 </property>
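
With only two datanodes, HDFS's default replication factor of 3 can never be satisfied and blocks stay under-replicated. An optional entry for hdfs-site.xml (not part of the original configuration) that matches the slave count:

 <property>
   <name>dfs.replication</name>
   <value>2</value>
 </property>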


6> modify slaves

 

 [root@dev-157-132 hadoop]# vim slaves
               dev-157-133
               dev-157-134

7> scp to the slave nodes

 scp -r hadoop-2.2.0 root@192.168.157.133:/export/servers
 scp -r hadoop-2.2.0 root@192.168.157.134:/export/servers
 

8> Set environment variables on all three machines

 [root@dev-157-132 hadoop]# vim /etc/profile
 export HADOOP_HOME=/export/servers/hadoop-2.2.0
 export HADOOP_CONF_DIR=/export/servers/hadoop-2.2.0/etc/hadoop
 export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
 [root@dev-157-132 hadoop]# source /etc/profile
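
To confirm the variables took effect, hadoop should now resolve from the PATH on all three machines:

      hadoop version
      (should report Hadoop 2.2.0)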

9> Format HDFS on the master

hadoop namenode -format
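
The hadoop namenode form still works in Hadoop 2 but is deprecated; the equivalent current command is:

      hdfs namenode -format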

6. Start hadoop on the master

 start-all.sh
 This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
 14/12/23 11:18:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 Starting namenodes on [dev-157-132]
 dev-157-132: starting namenode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-namenode-dev-157-132.out
 dev-157-134: starting datanode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-datanode-dev-157-134.out
 dev-157-133: starting datanode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-datanode-dev-157-133.out
 Starting secondary namenodes [0.0.0.0]
 0.0.0.0: starting secondarynamenode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-dev-157-132.out
 14/12/23 11:18:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 starting yarn daemons
 starting resourcemanager, logging to /export/servers/hadoop-2.2.0/logs/yarn-root-resourcemanager-dev-157-132.out
 dev-157-134: starting nodemanager, logging to /export/servers/hadoop-2.2.0/logs/yarn-root-nodemanager-dev-157-134.out
 dev-157-133: starting nodemanager, logging to /export/servers/hadoop-2.2.0/logs/yarn-root-nodemanager-dev-157-133.out

 [root@dev-157-132 hbase-0.94.18-security]# jps
 8100 NameNode
 8973 Jps
 8269 SecondaryNameNode
 8416 ResourceManager
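
jps on the master shows NameNode, SecondaryNameNode, and ResourceManager; on each slave it should show DataNode and NodeManager. Two further checks, using standard Hadoop 2 tools and default web UI ports:

      hdfs dfsadmin -report        (both datanodes should be listed as live)
      NameNode web UI:        http://dev-157-132:50070
      ResourceManager web UI: http://dev-157-132:8088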

7. Install hbase

 [root@dev-157-132 servers]# tar -zxvf hbase-0.94.18-security.tar.gz
 [root@dev-157-132 servers]# cd hbase-0.94.18-security/conf/

1> modify configuration

 [root@dev-157-132 conf]# vim hbase-env.sh
 export JAVA_HOME=/export/servers/jdk1.6.0_25
 # configure memory size
 export HBASE_MASTER_OPTS="-Xms512m -Xmx512m $HBASE_MASTER_OPTS"
 export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102 -Xmn256m -Xms256m -Xmx256m -XX:SurvivorRatio=4"
 export HBASE_MANAGES_ZK=false

 [root@dev-157-132 conf]# vim hbase-site.xml
 <configuration>
   <property>
     <name>hbase.rootdir</name>
     <value>hdfs://dev-157-132:9100/hbase</value>
   </property>
   <property>
     <name>hbase.cluster.distributed</name>
     <value>true</value>
   </property>
   <property>
     <name>hbase.tmp.dir</name>
     <value>/export/servers/hbase-0.94.18-security/data/tmp</value>
   </property>
   <property>
     <name>hbase.zookeeper.quorum</name>
     <value>ip,ip</value>  <!-- fill in your ZooKeeper quorum IPs -->
   </property>
   <property>
     <name>hbase.zookeeper.property.clientPort</name>
     <value>2181</value>
   </property>
   <property>
     <name>hbase.regionserver.handler.count</name>
     <value>30</value>
   </property>
 </configuration>

 [root@dev-157-132 conf]# vim regionservers
 dev-157-133
 dev-157-134
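
Because HBASE_MANAGES_ZK=false, an external ZooKeeper ensemble must already be running on the hosts listed in hbase.zookeeper.quorum. A quick liveness check using ZooKeeper's four-letter ruok command (assumes nc is installed; replace zk-host with one of your quorum IPs):

      echo ruok | nc zk-host 2181
      (a healthy server answers: imok)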

2> Copy to the other machines

Before running scp, replace the Hadoop-related jars under HBase's lib/ directory with the jars from the Hadoop version installed above, as sketched below.
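
A sketch of the jar swap, assuming an HBase 0.94.18 build compiled against Hadoop 2 and a stock tarball that ships Hadoop 1.x jars; the exact jar names depend on your distribution, so treat these paths as illustrative:

      cd /export/servers/hbase-0.94.18-security/lib
      rm hadoop-core-*.jar
      cp /export/servers/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar .
      cp /export/servers/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar .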

 [root@dev-157-132 servers]# scp -r hbase-0.94.18-security root@192.168.157.133:/export/servers/
 [root@dev-157-132 servers]# scp -r hbase-0.94.18-security root@192.168.157.134:/export/servers/

3> Set HBASE_HOME (all three machines)

 

 [root@dev-157-132 hadoop]# vim /etc/profile
 export HBASE_HOME=/export/servers/hbase-0.94.18-security

8. Start hbase

 [root@dev-157-132 servers]# ./hbase-0.94.18-security/bin/start-hbase.sh
 starting master, logging to /export/servers/hbase-0.94.18-security/logs/hbase-root-master-dev-157-132.out
 dev-157-134: starting regionserver, logging to /export/servers/hbase-0.94.18-security/bin/../logs/hbase-root-regionserver-dev-157-134.out
 dev-157-133: starting regionserver, logging to /export/servers/hbase-0.94.18-security/bin/../logs/hbase-root-regionserver-dev-157-133.out
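
To verify HBase came up, jps on the master should now also show HMaster (and HRegionServer on the slaves), and the shell's status command reports the live region servers:

      [root@dev-157-132 servers]# ./hbase-0.94.18-security/bin/hbase shell
      hbase(main):001:0> status
      (expect 2 servers, 0 dead)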
