Replace the Hadoop jars under $HBASE_HOME/lib with the newer jars from $HADOOP_HOME/share/hadoop so that the versions stay consistent
cd $HBASE_HOME/lib
ls commons*
ls *hadoop*
Upgrade to:
/appl/hadoop-2.7.0/share/hadoop/common/lib/hadoop-annotations-2.7.0.jar
/appl/hadoop-2.7.0/share/hadoop/tools/lib/hadoop-auth-2.7.0.jar
/appl/hadoop-2.7.0/share/hadoop/common/hadoop-common-2.7.0.jar
/appl/hadoop-2.7.0/share/hadoop/hdfs/hadoop-hdfs-2.7.0.jar
/appl/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-*
/appl/hadoop-2.7.0/share/hadoop/yarn/hadoop-yarn-*
e.g.
cp /appl/hadoop-2.7.0/share/hadoop/common/lib/hadoop-annotations-2.7.0.jar $HBASE_HOME/lib
cp /appl/hadoop-2.7.0/share/hadoop/tools/lib/hadoop-auth-2.7.0.jar $HBASE_HOME/lib
cp /appl/hadoop-2.7.0/share/hadoop/common/hadoop-common-2.7.0.jar $HBASE_HOME/lib
cp /appl/hadoop-2.7.0/share/hadoop/hdfs/hadoop-hdfs-2.7.0.jar $HBASE_HOME/lib
cp /appl/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-* $HBASE_HOME/lib
cp /appl/hadoop-2.7.0/share/hadoop/yarn/hadoop-yarn-* $HBASE_HOME/lib
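The same replacement can be scripted in one pass; a minimal sketch (removing the old 2.x Hadoop jars first is an assumption implied by "replace"):

# assumption: the stale hadoop-*.jar files shipped with HBase should be removed first
HADOOP_SHARE=/appl/hadoop-2.7.0/share/hadoop
rm -f $HBASE_HOME/lib/hadoop-*.jar
cp $HADOOP_SHARE/common/lib/hadoop-annotations-2.7.0.jar \
   $HADOOP_SHARE/tools/lib/hadoop-auth-2.7.0.jar \
   $HADOOP_SHARE/common/hadoop-common-2.7.0.jar \
   $HADOOP_SHARE/hdfs/hadoop-hdfs-2.7.0.jar \
   $HADOOP_SHARE/mapreduce/hadoop-mapreduce-* \
   $HADOOP_SHARE/yarn/hadoop-yarn-* \
   $HBASE_HOME/lib/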
<property>
<name>hbase.cluster.distributed</name>
<value>false</value>
<description>
false: standalone and pseudo-distributed setups with managed ZooKeeper
true: fully-distributed with an unmanaged ZooKeeper quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
<description>
Comma-separated list of servers in the ZooKeeper quorum; this is the list of servers on which we will start/stop ZooKeeper.
</description>
</property>
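The description above refers to hbase-env.sh; a hedged sketch of the related settings (the JAVA_HOME path is an assumption, adjust it to your installation):

# in $HBASE_HOME/conf/hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_79    # assumed path, point this at your JDK
export HBASE_MANAGES_ZK=true              # let HBase manage ZooKeeper, matching hbase.cluster.distributed=false above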
Keep these settings consistent with the Hadoop configuration in the files below (one way to do so is sketched after the list):
/appl/hadoop-2.7.0/etc/hadoop/core-site.xml
/appl/hadoop-2.7.0/etc/hadoop/hdfs-site.xml
/appl/hadoop-2.7.0/etc/hadoop/slaves
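One way to keep HBase's view of HDFS consistent with these files (an assumption, not stated in the text) is to link Hadoop's client configuration into the HBase conf directory:

# assumption: symlinking is acceptable; copying the files works equally well
ln -sf /appl/hadoop-2.7.0/etc/hadoop/core-site.xml $HBASE_HOME/conf/core-site.xml
ln -sf /appl/hadoop-2.7.0/etc/hadoop/hdfs-site.xml $HBASE_HOME/conf/hdfs-site.xml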
vi regionservers
localhost
Write one host per line (just like the slaves file in Hadoop). The servers listed here are started and stopped together with the cluster.
Start Hadoop
sh start-dfs.sh
sh start-yarn.sh
Logs: /appl/hadoop-2.7.0/logs
Verification: http://192.168.56.250:8088/cluster
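Besides the web UI, a quick command-line check (a sketch using standard Hadoop tools):

jps                     # expect processes such as NameNode, DataNode, ResourceManager, NodeManager
hdfs dfsadmin -report   # confirm that the DataNodes have registered with the NameNode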
Run: after Hadoop is up, start HBase with start-hbase.sh
Run jps on the master; it should show an HMaster process.
Run jps on the slaves as well.
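To check every node listed in regionservers in one pass, a sketch (assumes passwordless ssh to those hosts):

for h in $(cat $HBASE_HOME/conf/regionservers); do
  echo "== $h =="
  ssh "$h" jps    # list the Java processes running on each region server host
done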
/sbin/iptables -I INPUT -p tcp --dport 60010 -j ACCEPT
/etc/init.d/iptables save
service iptables restart
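To confirm the firewall rule is in place, a quick check:

/sbin/iptables -L INPUT -n | grep 60010   # the ACCEPT rule for the master UI port should appear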
View the master web UI in a browser:
http://node1:60010/master.jsp
View the HBase directory under HDFS:
http://centos1:50070/explorer.html#/hbase
Connect: ./bin/hbase shell
Create table: create 'test', 'cf'
Insert: put 'test', 'row1', 'cf:a', 'value1'
View: list 'test'
View: scan 'test'
View: get 'test', 'row1'
Delete table: disable 'test'; drop 'test'
Disconnect: exit
Stop HBase: ./bin/stop-hbase.sh
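The same session can also be driven non-interactively; a minimal sketch that pipes the commands above into the shell (assumes $HBASE_HOME points at the installation, table and column family names as in the text):

# feed the shell a scripted session on stdin
$HBASE_HOME/bin/hbase shell <<'EOF'
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
list 'test'
scan 'test'
get 'test', 'row1'
disable 'test'
drop 'test'
EOF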