For Hadoop 0.20.203.0 configuration, see: http://www.cnblogs.com/flyoung2008/archive/2011/11/29/2268302.html
Guides to a fully distributed setup of hadoop 0.20.203.0 + hbase 0.90.4 are rare online. It took me several days to get the configuration working, so here is a record.
I. Installation preparation
1. Download hbase0.90.4
2. Hadoop is assumed to be installed already. The cluster layout:
Namenode 192.168.1.101 Host Name: centos1
Datanode 192.168.1.103 Host Name: centos2
Datanode 192.168.1.104 Host Name: centos3
II. Operation steps (performed on the namenode unless noted otherwise)
1. Decompress hbase 0.90.4 in /home/grid
tar -zxvf hbase-0.90.4.tar.gz
2. Modify the /home/grid/hbase-0.90.4/conf/hbase-env.sh file:
export HBASE_OPTS="-ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
export JAVA_HOME=/usr/java/jdk1.6.0_29
export HBASE_MANAGES_ZK=true
export HBASE_HOME=/home/grid/hbase-0.90.4
export HADOOP_HOME=/home/grid/hadoop-0.20.203.0
3. Modify the /home/grid/hbase-0.90.4/conf/hbase-site.xml file with the following content:
Note:
1. First, hdfs://centos1:9000/hbase must exactly match the fs.default.name configured in your Hadoop cluster's core-site.xml. If your Hadoop HDFS uses a different port, change it here too. Also, HBase does not recognize the host's IP address here; only the hostname works. If centos1's IP address (192.168.1.101) is used instead, a Java error is thrown.
2. The number of hosts in hbase.zookeeper.quorum must be odd.
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://centos1:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master</name>
<value>192.168.1.101:60000</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>192.168.1.101,192.168.1.103,192.168.1.104</value>
</property>
</configuration>
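As a quick sanity check for the second note above, the quorum size can be counted from the configured value. A minimal sketch, using the quorum string from this article's hbase-site.xml:

```shell
# Count the hosts in hbase.zookeeper.quorum and confirm the count is odd
# (value copied from the configuration above)
quorum="192.168.1.101,192.168.1.103,192.168.1.104"
count=$(echo "$quorum" | tr ',' '\n' | wc -l)
if [ $((count % 2)) -eq 1 ]; then
  echo "quorum size $count is odd: OK"
else
  echo "quorum size $count is even: add or remove a host"
fi
```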
4. Modify /home/grid/hbase-0.90.4/conf/regionservers (same content as Hadoop's slaves file):
192.168.1.103
192.168.1.104
5. Distribute hbase-0.90.4 to other machines
scp -r hbase-0.90.4 centos2:/home/grid
scp -r hbase-0.90.4 centos3:/home/grid
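The two scp commands above can also be driven from the regionservers file, so newly added nodes are picked up automatically. A sketch; it uses a temporary stand-in for conf/regionservers and only echoes the commands (drop the echo and point at the real file to actually copy):

```shell
# Distribute HBase to every host listed in a regionservers-style file.
# A temp file stands in for conf/regionservers so the loop is safe to try.
regionservers=$(mktemp)
printf '192.168.1.103\n192.168.1.104\n' > "$regionservers"
while read -r host; do
  # echo shows the command that would run; remove it to really copy
  echo scp -r /home/grid/hbase-0.90.4 "$host:/home/grid"
done < "$regionservers"
```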
6. Start hbase (prerequisite: hadoop has been started)
Note:
1. Because HBase is built on Hadoop, it bundles a hadoop-core jar in its lib directory. HBase 0.90.4 ships hadoop-core-0.20-append-r1056497.jar, built from the branch-0.20-append patch that HBase requires, while our cluster runs hadoop-0.20.203.0.
You must replace the jar in HBase's lib directory with the one from your Hadoop installation; otherwise the version conflict causes serious errors and HBase cannot talk to Hadoop.
Remove hadoop-core-0.20-append-r1056497.jar from hbase_home/lib (all jars there are loaded at startup), then copy hadoop-core-0.20.203.0.jar from hadoop_home into lib. Without the replacement you will see:
16:57:06,174 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Call to centos1/192.168.1.101:9000 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
    at org.apache.hadoop.ipc.Client.call(Client.java:743)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy5.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:113)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:215)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:346)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:282)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
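The jar swap described above can be sketched in a throwaway sandbox, so the rm/cp sequence is visible without touching a real installation. Directory names mirror this article's layout; the sandbox is created with mktemp:

```shell
# Demonstrate the jar replacement in a temp sandbox that mimics the layout
sandbox=$(mktemp -d)
mkdir -p "$sandbox/hbase-0.90.4/lib" "$sandbox/hadoop-0.20.203.0"
touch "$sandbox/hbase-0.90.4/lib/hadoop-core-0.20-append-r1056497.jar"
touch "$sandbox/hadoop-0.20.203.0/hadoop-core-0.20.203.0.jar"
# remove the bundled append-branch jar, then copy in the cluster's jar
rm "$sandbox/hbase-0.90.4/lib/hadoop-core-0.20-append-r1056497.jar"
cp "$sandbox/hadoop-0.20.203.0/hadoop-core-0.20.203.0.jar" "$sandbox/hbase-0.90.4/lib/"
ls "$sandbox/hbase-0.90.4/lib"
```

On a real cluster, substitute /home/grid/hbase-0.90.4 and /home/grid/hadoop-0.20.203.0 for the sandbox paths, and repeat on every node after distributing HBase.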
2. 16:57:06,174 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
The NoClassDefFoundError means org/apache/commons/configuration/Configuration is missing from the classpath.
Copy commons-configuration-1.6.jar from hadoop_home/lib to hbase_home/lib.
1. Start HBase through the shell script. Go to /home/grid/hbase-0.90.4:
bin/start-hbase.sh
Run the jps command; if the following processes are present, startup succeeded. Otherwise, check the logs to troubleshoot.
17481 JobTracker
17388 SecondaryNameNode
21698 HMaster
17221 NameNode
21639 HQuorumPeer
21846 Jps
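Checking for those processes can be scripted. A sketch that scans jps-style output for the master-side daemons; the sample input is the output listed above, and in practice you would substitute "$(jps)":

```shell
# Verify that the expected master-side daemons appear in jps output.
# "sample" stands in for a live $(jps) call.
sample='17481 JobTracker
17388 SecondaryNameNode
21698 HMaster
17221 NameNode
21639 HQuorumPeer'
missing=0
for proc in HMaster HQuorumPeer NameNode JobTracker SecondaryNameNode; do
  echo "$sample" | grep -q " $proc$" || { echo "$proc not running"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all master-side daemons up"
```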
2. Go to the /home/grid/hbase-0.90.4/bin directory and execute the hbase shell command to enter the HBase console, which displays as follows:
[grid@centos1 conf]# hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Version: 0.20.5, r956266, Sat Jun 19 12:25:12 PDT 2010
hbase(main):001:0>
3. Enter the list command at the HBase console. If the command executes normally, HBase started successfully:
hbase(main):001:0> list
0 row(s) in 0.0610 seconds
hbase(main):002:0>
4. View HBase on the Web
View master: http://192.168.1.101:60010/master.jsp
View region server: http://192.168.1.103:60030/regionserver.jsp
View ZK tree: http://192.168.1.101:60010/zk.jsp
III. Problems
1. Regionserver startup failure caused by unsynchronized server time in HBase
Cause:
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hbase.ClockOutOfSyncException: Server
s3,60020,1301097875246 has been rejected; Reported time is too far out
of sync with master. Time difference of 41450ms > max allowed
30000ms
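The master's rejection rule in that message boils down to a single comparison. A sketch in shell arithmetic, plugging in the 41450 ms difference from the error above:

```shell
# The master rejects a regionserver whose reported time differs from its own
# by more than the allowed skew (30000 ms by default).
max_skew=30000
diff=41450   # the time difference from the error message
if [ "$diff" -gt "$max_skew" ]; then
  verdict="rejected: time difference of ${diff}ms > max allowed of ${max_skew}ms"
else
  verdict="accepted"
fi
echo "$verdict"
```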
1. Solution 1
Add this configuration in hbase-site.xml:
<property>
<name>hbase.master.maxclockskew</name>
<value>180000</value>
<description>Time difference of regionserver from master</description>
</property>
2. Solution 2
The error means the time difference between the node and the master exceeds 30000 ms, so the regionserver refuses to start.
Adjust the time on each node so the difference is within 30 s.
NTP is needed to keep server time synchronized; it is best to sync against an Internet time server. Setting up an intranet NTP server is troublesome, so you can also change the time manually and start again.
This is more practical:
yum install ntp
Then run
ntpdate cn.pool.ntp.org
to synchronize with Internet time.
To synchronize automatically at boot, edit /etc/rc.d/rc.local with vi and add:
ntpdate cn.pool.ntp.org
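The boot-time step can be sketched safely with a temporary file standing in for /etc/rc.d/rc.local:

```shell
# Append the one-shot sync command to a boot script; a mktemp file stands in
# for /etc/rc.d/rc.local so the append is safe to demonstrate.
rc_local=$(mktemp)
echo 'ntpdate cn.pool.ntp.org' >> "$rc_local"
tail -n 1 "$rc_local"
```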
2. When its region count is nonzero, a regionserver's Web page is reachable; when regions = 0, does it stop working?! (Unsolved)
Region Servers:
| Address | Start code | Load |
| centos2:60030 | 1322830078520 | requests=0, regions=2, usedHeap=29, maxHeap=993 |
| centos3:60030 | 1322830078818 | requests=0, regions=0, usedHeap=26, maxHeap=993 |
References:
HBase official documentation: http://www.yankay.com/wp-content/hbase/book.html
http://www.blogjava.net/ivanwan/archive/2011/01/21/343345.html
http://www.cnblogs.com/ventlam/archive/2011/01/22/HBaseCluster.html
http://liuskysun.blog.163.com/blog/static/9981297820117235326161/
http://taoo.iteye.com/blog/1207460
http://javoft.net/2011/09/hbase-hmaster-%E6%97%A0%E6%B3%95%E5%90%AF%E5%8A%A8-call-to-failed-on-local-exception/