Installing a Hadoop 2.2 / HBase 0.94.18 Cluster


        Here we set up a three-node cluster. Cluster IPs: 192.168.157.132-134

            master: 192.168.157.132

            slaves: 192.168.157.133-134

        HBase depends on ZooKeeper; install ZooKeeper separately on your own.


1. Configure passwordless SSH login

     1> Install the SSH client on all three machines (132, 133, 134):

      yum install openssh-clients

    2> Configure passwordless login on all three machines (132, 133, 134):

      [root@localhost ~]# ssh-keygen -t rsa        # press Enter through every prompt
      [root@localhost ~]# cd /root/.ssh/
      [root@localhost .ssh]# cat id_rsa.pub >> authorized_keys
      [root@localhost .ssh]# scp authorized_keys root@192.168.157.132:/root/.ssh
      [root@localhost .ssh]# scp authorized_keys root@192.168.157.133:/root/.ssh
      [root@localhost .ssh]# scp authorized_keys root@192.168.157.134:/root/.ssh

     Note that scp overwrites the target's authorized_keys, so collect all three public keys into one authorized_keys file before distributing it to every node.


     Fix the permissions on all three machines:

      [root@localhost ~]# chmod 700 .ssh/
      [root@localhost ~]# chmod 600 ~/.ssh/authorized_keys
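     With the keys in place, an ssh from the master to each slave should no longer prompt for a password; a quick check (not part of the original steps):

      ssh root@192.168.157.133 hostname
      ssh root@192.168.157.134 hostname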



2. Change the hostnames

   1> Via the hostname command (lost on reboot):

     [root@localhost .ssh]# hostname dev-157-132
     [root@localhost .ssh]# hostname
     dev-157-132
   2> Via the configuration file (survives reboots):

     [root@localhost .ssh]# vim /etc/sysconfig/network
     NETWORKING=yes
     NETWORKING_IPV6=no
     HOSTNAME=dev-157-132
     GATEWAY=192.168.248.254

    3> Apply both steps on each of the three machines in turn: dev-157-132, dev-157-133, dev-157-134.
    4> Edit /etc/hosts on all three machines:

      [root@localhost .ssh]# vim /etc/hosts
      127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
      ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
      192.168.157.132 dev-157-132
      192.168.157.133 dev-157-133
      192.168.157.134 dev-157-134
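    Name resolution can be sanity-checked from any node before moving on (a small check, not in the original):

      ping -c 1 dev-157-133
      ping -c 1 dev-157-134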


3. Disable the firewall (all three machines)

   service iptables stop
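   service iptables stop only lasts until the next reboot; on a CentOS 6-style system (an assumption, since the distribution is not named) the service can also be disabled permanently:

   chkconfig iptables off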


4. Install the JDK (omitted...)

5. Install Hadoop 2.2.0
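The JDK step above is omitted in the original; since later steps reference /export/servers/jdk1.6.0_25, here is a minimal sketch of the JDK environment setup (assuming the archive is already unpacked to that path) before moving on to Hadoop:

    [root@dev-157-132 ~]# vim /etc/profile
    export JAVA_HOME=/export/servers/jdk1.6.0_25
    export PATH=$JAVA_HOME/bin:$PATH
    [root@dev-157-132 ~]# source /etc/profile
    [root@dev-157-132 ~]# java -version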

    

 [root@dev-157-132 servers]# tar -xf hadoop-2.2.0.tar.gz
 [root@dev-157-132 servers]# cd hadoop-2.2.0/etc/hadoop
  1> Edit hadoop-env.sh:

     [root@dev-157-132 hadoop]# vim hadoop-env.sh
     export JAVA_HOME=/export/servers/jdk1.6.0_25    (point this at your JAVA_HOME)

     Leave the other settings at their defaults.

   2> Edit core-site.xml:

 [root@dev-157-132 hadoop]# vim core-site.xml
 <configuration>
     <property>
         <name>fs.defaultFS</name>
         <value>hdfs://dev-157-132:9100</value>
     </property>
     <property>
         <name>hadoop.tmp.dir</name>
         <value>/export/servers/hadoop-2.2.0/data/hadoop_tmp</value>
     </property>
     <property>
         <name>io.native.lib.available</name>
         <value>true</value>
     </property>
 </configuration>

   3> Edit mapred-site.xml:

 <configuration>
     <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
     </property>
 </configuration>

   4> Edit yarn-site.xml:

 [root@dev-157-132 hadoop]# vim yarn-site.xml
 <configuration>
     <property>
         <name>yarn.resourcemanager.resource-tracker.address</name>
         <value>dev-157-132:8031</value>
     </property>
     <property>
         <name>yarn.resourcemanager.scheduler.address</name>
         <value>dev-157-132:8030</value>
     </property>
     <property>
         <name>yarn.resourcemanager.scheduler.class</name>
         <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
     </property>
     <property>
         <name>yarn.resourcemanager.address</name>
         <value>dev-157-132:8032</value>
         <description>the host is the hostname of the ResourceManager and the port is the port on which the clients can talk to the Resource Manager.</description>
     </property>
 </configuration>
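   The listing above does not register the NodeManager shuffle service, which MapReduce jobs on YARN normally require; on Hadoop 2.2.0 the usual two properties are as follows (an addition not in the original, so verify against your cluster):

     <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
     </property>
     <property>
         <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
         <value>org.apache.hadoop.mapred.ShuffleHandler</value>
     </property>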
  5> Edit hdfs-site.xml:

 [root@dev-157-132 hadoop]# vim hdfs-site.xml
 <configuration>
     <property>
         <name>dfs.namenode.name.dir</name>
         <value>file:/export/servers/hadoop-2.2.0/data/nn</value>
     </property>
     <property>
         <name>dfs.datanode.data.dir</name>
         <value>file:/export/servers/hadoop-2.2.0/data/dfs</value>
     </property>
     <property>
         <name>dfs.permissions</name>
         <value>false</value>
     </property>
 </configuration>
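 The directories above do not exist yet; the format step and the DataNodes will generally create them, but creating them up front avoids permission surprises (a hedged extra step, not in the original):

     mkdir -p /export/servers/hadoop-2.2.0/data/nn     # on the master
     mkdir -p /export/servers/hadoop-2.2.0/data/dfs    # on each slave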


6> Edit slaves:

 [root@dev-157-132 hadoop]# vim slaves
     dev-157-133
     dev-157-134

7> scp the tree to the slave machines:

 scp -r hadoop-2.2.0 root@192.168.157.133:/export/servers
 scp -r hadoop-2.2.0 root@192.168.157.134:/export/servers
 

 8> Set the environment variables on all three machines:

 [root@dev-157-132 hadoop]# vim /etc/profile
 export HADOOP_HOME=/export/servers/hadoop-2.2.0
 export HADOOP_CONF_DIR=/export/servers/hadoop-2.2.0/etc/hadoop
 export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
 [root@dev-157-132 hadoop]# source /etc/profile
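 A quick way to confirm the PATH took effect (not in the original):

 hadoop version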

 9> Format HDFS on the master:

    hadoop namenode -format
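On Hadoop 2.x the same action is also exposed as hdfs namenode -format. Either way, success can be verified by checking that the metadata directory from dfs.namenode.name.dir above was written:

    ls /export/servers/hadoop-2.2.0/data/nn/current/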

6. Start Hadoop on the master

 start-all.sh
 This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
 14/12/23 11:18:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 Starting namenodes on [dev-157-132]
 dev-157-132: starting namenode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-namenode-dev-157-132.out
 dev-157-134: starting datanode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-datanode-dev-157-134.out
 dev-157-133: starting datanode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-datanode-dev-157-133.out
 Starting secondary namenodes [0.0.0.0]
 0.0.0.0: starting secondarynamenode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-dev-157-132.out
 14/12/23 11:18:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 starting yarn daemons
 starting resourcemanager, logging to /export/servers/hadoop-2.2.0/logs/yarn-root-resourcemanager-dev-157-132.out
 dev-157-134: starting nodemanager, logging to /export/servers/hadoop-2.2.0/logs/yarn-root-nodemanager-dev-157-134.out
 dev-157-133: starting nodemanager, logging to /export/servers/hadoop-2.2.0/logs/yarn-root-nodemanager-dev-157-133.out

 [root@dev-157-132 hbase-0.94.18-security]# jps
 8100 NameNode
 8973 Jps
 8269 SecondaryNameNode
 8416 ResourceManager
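 jps on each slave should likewise show a DataNode and a NodeManager. Cluster health can also be checked from the command line or the standard Hadoop 2.2 web UIs (default ports):

 hdfs dfsadmin -report                  # lists the live datanodes
 # NameNode UI:        http://dev-157-132:50070
 # ResourceManager UI: http://dev-157-132:8088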

7. Install HBase

 [root@dev-157-132 servers]# tar -zxvf hbase-0.94.18-security.tar.gz
 [root@dev-157-132 servers]# cd hbase-0.94.18-security/conf/

1> Edit the configuration:

 [root@dev-157-132 conf]# vim hbase-env.sh
    export JAVA_HOME=/export/servers/jdk1.6.0_25
    # configure the heap sizes
    export HBASE_MASTER_OPTS="-Xms512m -Xmx512m $HBASE_MASTER_OPTS"
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102 -Xmn256m -Xms256m -Xmx256m -XX:SurvivorRatio=4"
    export HBASE_MANAGES_ZK=false

 [root@dev-157-132 conf]# vim hbase-site.xml
    <configuration>
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://dev-157-132:9100/hbase</value>
        </property>
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>
        <property>
            <name>hbase.tmp.dir</name>
            <value>/export/servers/hbase-0.94.18-security/data/tmp</value>
        </property>
        <property>
            <name>hbase.zookeeper.quorum</name>
            <value>ip,ip</value>    <!-- your ZooKeeper node IPs -->
        </property>
        <property>
            <name>hbase.zookeeper.property.clientPort</name>
            <value>2181</value>
        </property>
        <property>
            <name>hbase.regionserver.handler.count</name>
            <value>30</value>
        </property>
    </configuration>

 [root@dev-157-132 conf]# vim regionservers
    dev-157-133
    dev-157-134

 2> Copy to the other machines.

 Before running scp, replace the Hadoop jars under HBase's lib/ directory with the jars that match the installed Hadoop version, as sketched below.
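 HBase 0.94 ships against Hadoop 1.x client jars, so running it on Hadoop 2.2.0 means swapping them out. A hedged sketch of that swap (jar names and layout assumed from a stock hadoop-2.2.0 tree; adjust to what is actually present in lib/):

 cd /export/servers/hbase-0.94.18-security/lib
 rm -f hadoop-core-*.jar
 cp /export/servers/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar .
 cp /export/servers/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar .
 cp /export/servers/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar .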

 [root@dev-157-132 servers]# scp -r hbase-0.94.18-security root@192.168.157.133:/export/servers/
 [root@dev-157-132 servers]# scp -r hbase-0.94.18-security root@192.168.157.134:/export/servers/

 3> Set HBASE_HOME (all three machines):

 [root@dev-157-132 hadoop]# vim /etc/profile
 export HBASE_HOME=/export/servers/hbase-0.94.18-security

8. Start HBase

 [root@dev-157-132 servers]# ./hbase-0.94.18-security/bin/start-hbase.sh
 starting master, logging to /export/servers/hbase-0.94.18-security/logs/hbase-root-master-dev-157-132.out
 dev-157-134: starting regionserver, logging to /export/servers/hbase-0.94.18-security/bin/../logs/hbase-root-regionserver-dev-157-134.out
 dev-157-133: starting regionserver, logging to /export/servers/hbase-0.94.18-security/bin/../logs/hbase-root-regionserver-dev-157-133.out
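Once started, jps on the master should show an HMaster process and each slave an HRegionServer. The cluster can also be exercised from the HBase shell (standard shell commands, not from the original log):

 [root@dev-157-132 servers]# ./hbase-0.94.18-security/bin/hbase shell
 hbase(main):001:0> status
 hbase(main):002:0> create 't1', 'f1'
 hbase(main):003:0> list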
