Hadoop and HBase installation tutorial
There are a few points worth explaining about the Hadoop and HBase configuration files.
In Hadoop, under /etc/hadoop/core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
And in HBase, under conf/hbase-site.xml:
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
The address and port number in these two values must be identical; if they are inconsistent, serious errors will occur.
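A quick way to verify the two files agree is to pull out the two `<value>` entries and compare them. This is only a sketch: the sample files are created in /tmp for illustration, and you would point the variables at your real core-site.xml and hbase-site.xml instead.

```shell
#!/bin/sh
# Extract the HDFS address from core-site.xml and the HBase root from
# hbase-site.xml, then verify hbase.rootdir lives under the same address.
# Sample files are created here for illustration only.
core=/tmp/core-site.xml
hbase=/tmp/hbase-site.xml

cat > "$core" <<'EOF'
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
EOF
cat > "$hbase" <<'EOF'
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
EOF

# Pull the hdfs:// URL out of each file's <value> element.
fs=$(sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' "$core")
root=$(sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' "$hbase")

case "$root" in
  "$fs"/*) echo "OK: $root is under $fs" ;;
  *)       echo "MISMATCH: $fs vs $root" ;;
esac
```

If the last line prints MISMATCH, fix one of the two files before starting anything.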
If this configuration lets Hadoop start but HBase still fails to start, proceed as follows. When an HBase startup failure occurs, first open the /etc/hosts file and add a line:
192.168.1.105 Datanode1
(the machine's IP address followed by its hostname)
Then open /etc/sysconfig/network and add:
HOSTNAME=datanode1
(HOSTNAME= followed by the hostname)
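The steps above can be sanity-checked with a small script. This is a sketch that greps a hosts file for the expected entry; it works on a sample copy here, and you would point HOSTS at the real /etc/hosts.

```shell
#!/bin/sh
# Check that a hosts file maps the expected IP to the expected hostname.
# A sample file is used for illustration; use HOSTS=/etc/hosts in practice.
HOSTS=/tmp/hosts.sample
cat > "$HOSTS" <<'EOF'
127.0.0.1 localhost
192.168.1.105 datanode1
EOF

if grep -qE '^192\.168\.1\.105[[:space:]]+datanode1' "$HOSTS"; then
  echo "hosts entry present"
else
  echo "missing: add '192.168.1.105 datanode1' to /etc/hosts"
fi
```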
Then change Hadoop's /etc/hadoop/core-site.xml to:
<property>
<name>fs.default.name</name>
<value>hdfs://datanode1:9000</value>
</property>
and change HBase's conf/hbase-site.xml to:
<property>
<name>hbase.rootdir</name>
<value>hdfs://datanode1:9000/hbase</value>
</property>
If you want to run Hadoop together with HBase, start Hadoop first: run start-all.sh, then run the jps command. If both the DataNode and NameNode processes appear, Hadoop HDFS has started successfully; if NodeManager and ResourceManager appear, Hadoop YARN has started successfully. Only when both HDFS and YARN have started successfully has Hadoop as a whole started successfully.
If either the DataNode or the NameNode fails to come up, Hadoop has not started, and HBase will not work afterwards.
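The jps check above can be scripted. A minimal sketch, with the jps output hard-coded as a sample (process IDs are made up); in practice you would capture it with `out=$(jps)`.

```shell
#!/bin/sh
# Verify that all four Hadoop daemons appear in `jps` output.
# The output is a hard-coded sample here; use: out=$(jps) in practice.
out='12001 NameNode
12102 DataNode
12350 ResourceManager
12460 NodeManager
12999 Jps'

missing=""
for d in NameNode DataNode ResourceManager NodeManager; do
  echo "$out" | grep -q "$d" || missing="$missing $d"
done

if [ -z "$missing" ]; then
  echo "Hadoop HDFS and YARN are up"
else
  echo "not started:$missing"
fi
```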
If, after a normal configuration, jps shows no DataNode once Hadoop is started, the DataNode did not start. Go to localhost:50070, follow the logs link, and open hadoop-root-datanode-datanode1.log; there you will find a warning and a failure:
2016-07-27 08:18:18,964 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /home/hadoop/hadoopinfra/hdfs/datanode: namenode clusterID = CID-1c9ca58d-5e17-4349-91a8-850fb63c8349; datanode clusterID = CID-ebc2f079-e103-46cc-96d1-ca3ec64ad4b7
2016-07-27 08:18:18,965 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to datanode1/192.168.1.105:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1358)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1323)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:722)
2016-07-27 08:18:18,967 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to datanode1/192.168.1.105:9000
Searching online later, I found that the DataNode fails to start when its clusterID is inconsistent with the NameNode's clusterID. This happens when the NameNode is reformatted without deleting the DataNode's data directory, leaving the DataNode's version inconsistent with the NameNode's.
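You can confirm the mismatch directly by comparing the clusterID recorded in the two VERSION files. A sketch: the real files live under .../hdfs/namenode/current/VERSION and .../hdfs/datanode/current/VERSION; here sample files are created with the mismatched IDs from the log above.

```shell
#!/bin/sh
# Compare the clusterID recorded by the NameNode and the DataNode.
# Sample VERSION files are created here; point nn/dn at the real
# current/VERSION files under your namenode and datanode directories.
nn=/tmp/nn-VERSION; dn=/tmp/dn-VERSION
cat > "$nn" <<'EOF'
clusterID=CID-1c9ca58d-5e17-4349-91a8-850fb63c8349
EOF
cat > "$dn" <<'EOF'
clusterID=CID-ebc2f079-e103-46cc-96d1-ca3ec64ad4b7
EOF

nnid=$(grep '^clusterID=' "$nn" | cut -d= -f2)
dnid=$(grep '^clusterID=' "$dn" | cut -d= -f2)

if [ "$nnid" = "$dnid" ]; then
  echo "clusterIDs match"
else
  echo "mismatch: NameNode=$nnid DataNode=$dnid"
fi
```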
Then check the hdfs-site.xml configuration file, which sets the DataNode and NameNode directories; its content is as follows:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
</property>
</configuration>
Then delete the DataNode data directory:
Command: rm -rf /home/hadoop/hadoopinfra/hdfs/datanode
Then reformat the NameNode. Command: hadoop namenode -format
Then start Hadoop with start-all.sh; this time jps shows the DataNode as well.
Starting HBase
Before starting HBase, start ZooKeeper by running zkServer.sh start; then start HBase.
Running ./hbase shell produced the error: ERROR: Can't get master address from ZooKeeper; znode data == null
Later I saw a solution online:
Delete the temporary files on the DataNode
Reformat the NameNode
Restart Hadoop
Restart HBase
That's what I did:
rm -rf /home/hadoop/hadoopinfra/hdfs/datanode
hadoop namenode -format
start-all.sh
The first time I started Hadoop there was no DataNode; after these steps it appeared again.
cd /usr/local/hbase/bin
./start-hbase.sh
HBase starts successfully
I also ran into an HBase error today; the solution is at this link: http://www.dataguru.cn/thread-453123-1-1.html
The main reason HBase fails to start is that Hadoop did not start properly, or that ZooKeeper did not start. If HBase is started before ZooKeeper, many errors will appear; Hadoop and ZooKeeper must both be running before HBase starts, or things will go wrong.
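The prerequisite check described above can be sketched as a script: before running start-hbase.sh, confirm via jps that both the NameNode (Hadoop) and QuorumPeerMain (the standalone ZooKeeper started by zkServer.sh) are up. The jps output below is a hard-coded sample with made-up process IDs; in practice use `out=$(jps)`.

```shell
#!/bin/sh
# Before launching HBase, make sure Hadoop and ZooKeeper are already up.
# `jps` output is a hard-coded sample here; use: out=$(jps) in practice.
out='12001 NameNode
12102 DataNode
13000 QuorumPeerMain
12999 Jps'

ok=yes
echo "$out" | grep -q NameNode       || { echo "start Hadoop first";    ok=no; }
echo "$out" | grep -q QuorumPeerMain || { echo "start ZooKeeper first"; ok=no; }

if [ "$ok" = yes ]; then
  echo "prerequisites up, safe to run start-hbase.sh"
fi
```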