[Linux] [Hadoop] Running Hadoop

Source: Internet
Author: User
Tags: uuid

This supplements the earlier installation walkthrough: with the Hadoop installation complete, we now execute the relevant commands to get Hadoop running.

Start all services with the following command:

/usr/local/gz/hadoop-2.4.1$ ./sbin/start-all.sh

There are quite a few startup scripts under the hadoop-2.4.1/sbin directory:
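For reference, an ls of that directory on a 2.4.x install shows roughly the following (exact contents can vary slightly between releases, and each .sh has a Windows .cmd counterpart for some of the scripts):

/usr/local/gz/hadoop-2.4.1$ ls sbin/
distribute-exclude.sh    slaves.sh            stop-all.sh
hadoop-daemon.sh         start-all.sh         stop-balancer.sh
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.sh           start-dfs.sh         stop-secure-dns.sh
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemon.sh
refresh-namenodes.sh     yarn-daemons.sh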

Each script starts a particular service; start-all.sh starts all of them together. Here are the contents of start-all.sh:

#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start all hadoop daemons.  Run this on master node.

echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
# The message above tells us this script has been deprecated; we should
# start the cluster with start-dfs.sh and start-yarn.sh instead.

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh

# What really executes are the two blocks below, i.e. start-dfs.sh and
# start-yarn.sh; the same two scripts can also be run separately.

# start hdfs daemons if hdfs is present
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi

# start yarn daemons if yarn is present
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi
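Since the script just delegates, the deprecated start-all.sh call is equivalent to running the two scripts yourself from the install directory:

./sbin/start-dfs.sh    # HDFS daemons: NameNode, DataNode, SecondaryNameNode
./sbin/start-yarn.sh   # YARN daemons: ResourceManager, NodeManager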

After the scripts finish, run jps to check whether all the services have started:

Note: there must be six services. When I first started Hadoop, only five came up. Both web pages opened successfully (http://192.168.1.107:50070/ and http://192.168.1.107:8088/), but when I ran the wordcount example it failed, and the reason was that the DataNode service had not started.
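For comparison, a healthy single-node (pseudo-distributed) setup like this one shows all six entries in jps, something along these lines (the PIDs here are made up and will differ on your machine):

2387 NameNode
2506 DataNode
2698 SecondaryNameNode
2842 ResourceManager
2964 NodeManager
3120 Jps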

Here is the view of the running applications:

The following is the health status of DFS:

Open http://192.168.1.107:50070/ to view Hadoop's startup and runtime logs; the details are as follows:

It was through these logs that I found the cause of the problem. Opening the logs page lists the log file that each service writes:
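The same files also live on disk under the install's logs/ directory, named following Hadoop's usual <prefix>-<user>-<daemon>-<host>.log convention (the <user> placeholder below depends on which account started the daemons):

ls /usr/local/gz/hadoop-2.4.1/logs/
hadoop-<user>-datanode-ubuntu.log
hadoop-<user>-namenode-ubuntu.log
hadoop-<user>-secondarynamenode-ubuntu.log
yarn-<user>-nodemanager-ubuntu.log
yarn-<user>-resourcemanager-ubuntu.log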

Opening the datanode-ubuntu.log file shows the relevant exception being thrown:

2014-07-21 22:05:21,064 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/gz/hadoop-2.4.1/dfs/data/in_use.lock acquired by nodename [email protected]
2014-07-21 22:05:21,075 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: Incompatible clusterIDs in /usr/local/gz/hadoop-2.4.1/dfs/data: namenode clusterID = CID-2cfdb22e-07b2-4ab8-965d-fdb27645bd62; datanode clusterID = ID-2cfdb22e-07b2-4ab8-965d-fdb27645bd62
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:477)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:226)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:974)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:945)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
        at java.lang.Thread.run(Thread.java:722)
2014-07-21 22:05:21,084 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2014-07-21 22:05:21,102 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2014-07-21 22:05:23,103 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-07-21 22:05:23,106 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-07-21 22:05:23,112 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ubuntu/127.0.1.1
************************************************************/
Searching on this error log revealed the cause: formatting the NameNode again after Hadoop has already been started regenerates the NameNode's clusterID, leaving it inconsistent with the DataNode's clusterID:
Find the directories configured for the NameNode and the DataNode in the hadoop/etc/hadoop/hdfs-site.xml configuration file, open the ./current/VERSION file under each, and compare the clusterID values in the two files. If they are inconsistent, change the clusterID in the DataNode's VERSION file to the one under the NameNode, then restart, as sketched below.
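A minimal sketch of that fix as shell commands, assuming the name and data directories sit under dfs/ in the install tree (dfs/data appears in the log above; dfs/name is an assumption, so substitute whatever your hdfs-site.xml actually configures):

# clusterID lives in the VERSION file of each storage directory
grep clusterID /usr/local/gz/hadoop-2.4.1/dfs/name/current/VERSION   # NameNode side (assumed path)
grep clusterID /usr/local/gz/hadoop-2.4.1/dfs/data/current/VERSION   # DataNode side (path from the log)

# copy the NameNode's clusterID line over the DataNode's, then restart
NN_CID=$(grep '^clusterID=' /usr/local/gz/hadoop-2.4.1/dfs/name/current/VERSION)
sed -i "s/^clusterID=.*/${NN_CID}/" /usr/local/gz/hadoop-2.4.1/dfs/data/current/VERSION
./sbin/stop-all.sh
./sbin/start-all.sh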
Reference link:
http://www.cnblogs.com/kinglau/p/3796274.html
