Hadoop and HBase Configuration Files


This Hadoop and HBase cluster has 1 NameNode and 7 DataNodes.

1. /etc/hostname

 NameNode:

       node1

DataNode 1:

       node2

DataNode 2:

       node3

.......

DataNode 7:

       node8

2. /etc/hosts

NameNode:

         

127.0.0.1       localhost
#127.0.1.1      node1
#-------edit by HY(2014-05-04)--------
#127.0.1.1      node1
125.216.241.113 node1
125.216.241.112 node2
125.216.241.96  node3
125.216.241.111 node4
125.216.241.114 node5
125.216.241.115 node6
125.216.241.116 node7
125.216.241.117 node8
#-------end edit--------

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

DataNode 1:

127.0.0.1       localhost
#127.0.0.1      node2
#127.0.1.1      node2
#--------edit by HY(2014-05-04)--------
125.216.241.113 node1
125.216.241.112 node2
125.216.241.96  node3
125.216.241.111 node4
125.216.241.114 node5
125.216.241.115 node6
125.216.241.116 node7
125.216.241.117 node8
#-------end edit---------

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The other DataNodes are configured the same way. Just make sure the name in /etc/hostname matches the hostname used in /etc/hosts; if they differ, jobs on the cluster will fail in strange ways (I no longer remember the exact symptoms). A quick check is sketched below.
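A simple sanity check to run on each node is to compare the output of hostname with that host's entry in /etc/hosts:

    hostname                        # e.g. node2
    grep "$(hostname)" /etc/hosts   # the active (uncommented) entry should be the real IP, e.g. 125.216.241.112 node2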

3. In hadoop-env.sh, comment out the default line

# export JAVA_HOME=/usr/lib/j2sdk1.5-sun

and add

JAVA_HOME=/usr/lib/jvm/java-6-sun
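A quick way to confirm the JVM really lives at that path (assuming the Sun JDK 6 package is installed there) is:

    ls /usr/lib/jvm/java-6-sun/bin/java         # the binary should exist
    /usr/lib/jvm/java-6-sun/bin/java -version   # should report a 1.6.x JVM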

4. core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node1:49000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/newdata/hadoop-1.2.1/tmp</value>
  </property>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
  </property>
  <property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>3000000</value>
  </property>
  <property>
    <name>dfs.socket.timeout</name>
    <value>3000000</value>
  </property>
</configuration>
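Note that the com.hadoop.compression.lzo.* codecs are not shipped with Hadoop itself; they come from the separate hadoop-lzo project, so its jar and native libraries need to be present on every node for this configuration to work. It also does no harm to create the hadoop.tmp.dir location up front (path taken from the value above):

    mkdir -p /home/hadoop/newdata/hadoop-1.2.1/tmp   # hadoop.tmp.dir, on every node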

5. hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/newdata/hadoop-1.2.1/name1,/home/hadoop/newdata/hadoop-1.2.1/name2</value>
    <description>Where the filesystem metadata is stored</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/newdata/hadoop-1.2.1/data1,/home/hadoop/newdata/hadoop-1.2.1/data2</value>
    <description>Where the data blocks are stored</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <!-- keep two replicas of each block -->
    <value>2</value>
  </property>
</configuration>
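It can help to create the metadata and block directories by hand before formatting, so that ownership and permissions are under your control (a minimal sketch using the paths from this file):

    # On the NameNode (node1)
    mkdir -p /home/hadoop/newdata/hadoop-1.2.1/name1 /home/hadoop/newdata/hadoop-1.2.1/name2
    # On every DataNode (node2..node8)
    mkdir -p /home/hadoop/newdata/hadoop-1.2.1/data1 /home/hadoop/newdata/hadoop-1.2.1/data2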

6. mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>node1:49001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/newdata/hadoop-1.2.1/tmp</value>
  </property>
  <property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
    <!-- compress intermediate map output -->
  </property>
  <property>
    <name>mapred.map.output.compression.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
    <!-- use the LZO library as the compression codec -->
  </property>
</configuration>


7. masters

node1

8. slaves

node2
node3
node4
node5
node6
node7
node8


9. In hbase-env.sh

add

JAVA_HOME=/usr/lib/jvm/java-6-sun

and enable export HBASE_MANAGES_ZK=true (true means HBase manages its bundled ZooKeeper; if you want a standalone ZooKeeper instead, set it to false and install ZooKeeper separately).
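After both edits, the relevant lines in hbase-env.sh look roughly like this (a sketch; the export form matches the stock template):

    export JAVA_HOME=/usr/lib/jvm/java-6-sun
    export HBASE_MANAGES_ZK=true   # HBase starts/stops the bundled ZooKeeper itself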

10. hbase-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- (standard Apache License, Version 2.0 header) -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node1:49000/hbase</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed ZooKeeper
      true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)
    </description>
  </property>
  <property>
    <name>hbase.master</name>
    <value>node1:60000</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hadoop/newdata/hbase/tmp</value>
    <description>
      Temporary directory on the local filesystem.
      Change this setting to point to a location more permanent than '/tmp',
      the usual resolve for java.io.tmpdir, as the '/tmp' directory is
      cleared on machine restart.
      Default: ${java.io.tmpdir}/hbase-${user.name}
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node2,node3,node4,node5,node6,node7,node8</value>
    <description>
      Use an odd number of servers. Comma separated list of servers in the
      ZooKeeper ensemble (this config should have been named
      hbase.zookeeper.ensemble). For example,
      "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
      By default this is set to localhost for local and pseudo-distributed
      modes of operation. For a fully-distributed setup, this should be set
      to a full list of ZooKeeper ensemble servers. If HBASE_MANAGES_ZK is
      set in hbase-env.sh this is the list of servers which hbase will
      start/stop ZooKeeper on as part of cluster start/stop. Client-side,
      we will take this list of ensemble members and put it together with
      the hbase.zookeeper.clientPort config and pass it into the ZooKeeper
      constructor as the connectString parameter.
      Default: localhost
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/newdata/zookeeper</value>
    <description>
      Property from ZooKeeper's config zoo.cfg.
      The directory where the snapshot is stored.
      Default: ${hbase.tmp.dir}/zookeeper
    </description>
  </property>
</configuration>


11. regionservers

node2
node3
node4
node5
node6
node7
node8


The configuration must be identical on every machine; one way to distribute it is sketched below.
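A minimal sketch for pushing the files out from node1. It assumes Hadoop is installed under /home/hadoop/newdata/hadoop-1.2.1 and HBase under /home/hadoop/newdata/hbase on every node (inferred from the tmp directories above), and that passwordless SSH from node1 to the DataNodes already works:

    # Run on node1: copy the conf directories to every other node
    for host in node2 node3 node4 node5 node6 node7 node8; do
        scp -r /home/hadoop/newdata/hadoop-1.2.1/conf "$host":/home/hadoop/newdata/hadoop-1.2.1/
        scp -r /home/hadoop/newdata/hbase/conf        "$host":/home/hadoop/newdata/hbase/
    done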
