Running Hadoop on Ubuntu Linux (Multi-Node Cluster)


What we want to do

In this tutorial, I'll describe the required steps for setting up a multi-node Hadoop cluster using the Hadoop Distributed File System (HDFS) on Ubuntu Linux.

Are you looking for the single-node cluster tutorial? Just head over there.

Hadoop is a framework written in Java for running applications on large clusters of commodity hardware and incorporates features similar to those of the Google File System and of MapReduce. HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, designed to be deployed on low-cost hardware. It provides high throughput access to application data and is suitable for applications that have large data sets.

Figure 1: Cluster of machines running Hadoop at Yahoo! (Source: Yahoo!)

In a previous tutorial, I described how to set up a Hadoop single-node cluster on an Ubuntu box. The main goal of this tutorial is to get a more sophisticated Hadoop installation up and running, namely building a multi-node cluster using two Ubuntu boxes.

This tutorial has been tested with the following software:

Ubuntu Linux 8.04, 7.10, 7.04
Hadoop 0.18.0, released August 2008 (also works with 0.13.x through 0.17.x)

You can find the time of the last document update at the top of this page.

Tutorial approach and structure

From two single-node clusters to a multi-node cluster: we will build a multi-node cluster using two Ubuntu boxes in this tutorial. In my humble opinion, the best way to do this for starters is to install, configure and test a "local" Hadoop setup for each of the two Ubuntu boxes, and in a second step to "merge" these two single-node clusters into one multi-node cluster, in which one Ubuntu box will become the designated master (but also act as a slave with regard to data storage and processing), and the other box will become only a slave. It's much easier to track down any problems you might encounter due to the reduced complexity of doing a single-node cluster setup first on each machine.

Figure 2: Tutorial approach and structure.

Prerequisites: Configuring single-node clusters

The tutorial approach outlined above means that you should now read my previous tutorial on how to set up a Hadoop single-node cluster and follow the steps described there to build a single-node Hadoop cluster on each of the two Ubuntu boxes. It is recommended that you use the same settings (e.g., installation locations and paths) on both machines, or you might otherwise run into problems later when we migrate the two machines to the final multi-node cluster setup.

Just keep in mind when setting up the single-node clusters that we will later connect and "merge" the two machines, so pick reasonable network settings etc. now for a smooth transition later.

Done? Let's continue then!

Now that you have two single-node clusters up and running, we'll modify the Hadoop configuration to make one Ubuntu box the master (which will also act as a slave) and the other Ubuntu box a slave.

We'll call the designated master machine just the "master" and the slave-only machine the "slave".

Shut down each single-node cluster with <HADOOP_INSTALL>/bin/stop-all.sh before continuing if you haven't done so already.

Networking

This should come as no surprise, but for the sake of completeness I have to point out that both machines must be able to reach each other over the network. The easiest way is to put both machines in the same network with regard to hardware and software configuration, for example connect both machines via a single hub or switch and configure the network interfaces to use a common network such as 192.168.0.x/24.

To make it simple, we'll assign the IP address 192.168.0.1 to the master machine and 192.168.0.2 to the slave machine. Update /etc/hosts on both machines with the following lines:

# /etc/hosts (for master AND slave)
192.168.0.1    master
192.168.0.2    slave
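A quick sanity check at this point (a minimal sketch; the hostnames come from the /etc/hosts entries above) is to ping each box by name from the other:

hadoop@master:~$ ping -c 3 slave
hadoop@slave:~$ ping -c 3 master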

SSH Access

The hadoop user on the master (aka hadoop@master) must be able to connect a) to its own user account on the master (i.e., ssh master in this context, and not necessarily ssh localhost) and b) to the hadoop user account on the slave (aka hadoop@slave) via a password-less SSH login. If you followed my single-node cluster tutorial, you just have to add the hadoop@master's public SSH key (which should be in $HOME/.ssh/id_rsa.pub) to the authorized_keys file of hadoop@slave (in this user's $HOME/.ssh/authorized_keys).
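If the ssh-copy-id helper is available on master, this is one way to distribute the key (a minimal sketch; appending the key to the slave's authorized_keys by hand works just as well):

hadoop@master:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@slave

# manual alternative: append the public key over SSH
hadoop@master:~$ cat $HOME/.ssh/id_rsa.pub | ssh hadoop@slave 'cat >> $HOME/.ssh/authorized_keys'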

The final step is to test the SSH setup by connecting with user hadoop from the master to the user account hadoop on the slave. This step is also needed to save the slave's host key fingerprint to hadoop@master's known_hosts file.

So, connecting from master to master ...

hadoop@master:~$ ssh master
The authenticity of host 'master (192.168.0.1)' can't be established.
RSA key fingerprint is 3b:21:b3:c0:21:5c:7c:54:2f:1e:2d:96:79:eb:7f:95.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master' (RSA) to the list of known hosts.
Linux master 2.6.20-16-386 #2 Thu Jun 7 20:16:13 UTC 2007 i686
...
hadoop@master:~$

... and from master to slave.

hadoop@master:~$ ssh slave
The authenticity of host 'slave (192.168.0.2)' can't be established.
RSA key fingerprint is 74:d7:61:86:db:86:8f:31:90:9c:68:b0:13:88:52:72.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave' (RSA) to the list of known hosts.
Ubuntu 8.04
...
hadoop@slave:~$

Hadoop Cluster Overview (aka the goal)

The next sections will describe how to configure one Ubuntu box as a master node and the other Ubuntu box as a slave node. The master node will also act as a slave because we only have two machines available in our cluster but still want to spread data storage and processing to both machines.

Figure 3: How the final multi-node cluster will look like.

The master node will run the "master" daemons for each layer: NameNode for the HDFS storage layer, and JobTracker for the MapReduce processing layer. Both machines will run the "slave" daemons: DataNode for the HDFS layer, and TaskTracker for the MapReduce processing layer. Basically, the "master" daemons are responsible for coordination and management of the "slave" daemons while the latter will do the actual data storage and data processing work.

Configuration

conf/masters (master only)

The conf/masters file defines the master nodes of our multi-node cluster. In our case, this is just the master machine.

On master, update <HADOOP_INSTALL>/conf/masters so that it looks like this:

master

conf/slaves (master only)

The conf/slaves file lists the hosts, one per line, where the Hadoop slave daemons (DataNodes and TaskTrackers) will run. We want both the master box and the slave box to act as Hadoop slaves because we want both of them to store and process data.

On master, update <HADOOP_INSTALL>/conf/slaves so that it looks like this:

master
slave

If you have additional slave nodes, just add them to the conf/slaves file, one per line (do this on all machines in the cluster).

master
slave
anotherslave01
anotherslave02
anotherslave03

Note: The conf/slaves file on master is used only by scripts like bin/start-dfs.sh or bin/stop-dfs.sh. For example, if you want to add DataNodes on the fly (which is not described in this tutorial yet), you can "manually" start the DataNode daemon on a new slave machine via bin/hadoop-daemon.sh --config <config_path> start datanode. Using the conf/slaves file on the master simply makes "full" cluster restarts easier.

conf/hadoop-site.xml (all machines)

Assuming you configured conf/hadoop-site.xml on each machine as described in the single-node cluster tutorial, you will only have to change a few variables.

Important: You have to change conf/hadoop-site.xml on ALL machines as follows.

First, we have to change the fs.default.name variable, which specifies the NameNode (the HDFS master) host and port. In our case, this is the master machine.

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
  <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The URI's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The URI's authority is used to determine the host, port, etc. for a filesystem.</description>
</property>

Second, we have to change the mapred.job.tracker variable, which specifies the JobTracker (MapReduce master) host and port. Again, this is the master in our case.

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
</property>

Third, we change the dfs.replication variable, which specifies the default block replication. It defines how many machines a single file should be replicated to before it becomes available. If you set this to a value higher than the number of slave nodes (more precisely, the number of DataNodes) that you have available, you will start seeing a lot of "(Zero targets found, forbidden1.size=1)"-type errors in the log files.

The default value of dfs.replication is 3. However, we have only two nodes available, so we set dfs.replication to 2.

<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.</description>
</property>

Additional settings

There are some other configuration options worth studying. The following information is taken from the Hadoop API Overview.

conf/hadoop-site.xml

mapred.local.dir
Determines where temporary MapReduce data is written. It also may be a list of directories.

conf/mapred-default.xml (note: there is no conf/mapred-site.xml file!)

mapred.map.tasks
As a rule of thumb, use 10x the number of slaves (i.e., number of TaskTrackers).

mapred.reduce.tasks
As a rule of thumb, use 2x the number of slave processors (i.e., number of TaskTrackers).
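For illustration only: with the two TaskTrackers of this tutorial (master and slave) and, say, one CPU per machine, those rules of thumb would translate into something like the following entries in conf/mapred-default.xml (the values below are a sketch derived from the rules above, not settings taken from the original setup):

<property>
  <name>mapred.map.tasks</name>
  <value>20</value>
  <!-- rule of thumb: 10x the number of TaskTrackers (2 in this cluster) -->
</property>

<property>
  <name>mapred.reduce.tasks</name>
  <value>4</value>
  <!-- rule of thumb: 2x the number of slave processors (assuming one CPU per machine) -->
</property>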

Formatting the NameNode

Before we start our new multi-node cluster, we have to format Hadoop's distributed filesystem (HDFS) for the NameNode. You need to do this the first time you set up a Hadoop cluster. Do not format a running Hadoop NameNode, as this will erase all your data in the HDFS filesystem.

To format the filesystem (which simply initializes the directory specified by the dfs.name.dir variable), run the command

hadoop@master:/usr/local/hadoop$ bin/hadoop namenode -format

.. INFO dfs.Storage: Storage directory /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name has been successfully formatted.
hadoop@master:/usr/local/hadoop$

Background: The HDFS name table is stored on the NameNode's (here: master) local filesystem in the directory specified by dfs.name.dir. The name table is used by the NameNode to store tracking and coordination information.

Starting the Multi-node cluster

Starting the cluster is done in two steps. First, the HDFS daemons are started: the NameNode daemon is started on master, and DataNode daemons are started on all slaves (here: master and slave). Second, the MapReduce daemons are started: the JobTracker is started on master, and TaskTracker daemons are started on all slaves (here: master and slave).

HDFS daemons

Run the command <HADOOP_INSTALL>/bin/start-dfs.sh on the machine you want the NameNode to run on. This will bring up HDFS with the NameNode running on the machine you ran the command on, and DataNodes on the machines listed in the conf/slaves file.

In our case, we'll run bin/start-dfs.sh on master:

hadoop@master:/usr/local/hadoop$ bin/start-dfs.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-namenode-master.out
slave: Ubuntu 8.04
slave: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-datanode-slave.out
master: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-datanode-master.out
master: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-master.out
hadoop@master:/usr/local/hadoop$

On slave, you can examine the success or failure of this command by inspecting the log file <HADOOP_INSTALL>/logs/hadoop-hadoop-datanode-slave.log. Exemplary output:

.. INFO org.apache.hadoop.dfs.Storage: Storage directory /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data is not formatted.
.. INFO org.apache.hadoop.dfs.Storage: Formatting ...
.. INFO org.apache.hadoop.dfs.DataNode: Opened server at 50010
.. INFO org.mortbay.util.Credential: Checking Resource aliases
.. INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
.. INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@17a8a02
.. INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
.. INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
.. INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
.. INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50075
.. INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@56a499
.. INFO org.apache.hadoop.dfs.DataNode: Starting DataNode in: FSDataset{dirpath='/usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current'}
.. INFO org.apache.hadoop.dfs.DataNode: using BLOCKREPORT_INTERVAL of 3538203msec

As you can see in the slave's output above, it will automatically format its storage directory (specified by dfs.data.dir) if it is not formatted already. It will also create the directory if it does not exist yet.

At this point, the following Java processes should run on master ...

hadoop@master:/usr/local/hadoop$ jps

14799 NameNode
15314 Jps
14880 DataNode
14977 SecondaryNameNode
hadoop@master:/usr/local/hadoop$

(the process IDs don't matter of course)

... and the following on slave.

hadoop@slave:/usr/local/hadoop$ jps

15183 DataNode
15616 Jps
hadoop@slave:/usr/local/hadoop$

MapReduce daemons

Run the command <HADOOP_INSTALL>/bin/start-mapred.sh on the machine you want the JobTracker to run on. This will bring up the MapReduce cluster with the JobTracker running on the machine you ran the command on, and TaskTrackers on the machines listed in the conf/slaves file.

In our case, we'll run bin/start-mapred.sh on master:

hadoop@master:/usr/local/hadoop$ bin/start-mapred.sh
starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-jobtracker-master.out
slave: Ubuntu 8.04
slave: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-tasktracker-slave.out
master: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-tasktracker-master.out
hadoop@master:/usr/local/hadoop$

On slave, you can examine the success or failure of this command by inspecting the log file <HADOOP_INSTALL>/logs/hadoop-hadoop-tasktracker-slave.log. Exemplary output:

.. INFO org.mortbay.util.Credential: Checking Resource aliases
.. INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
.. INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@d19bc8
.. INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
.. INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
.. INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
.. INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50060
.. INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@1e63e3d
.. INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50050: starting
.. INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50050: starting
.. INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: 50050
.. INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_slave:50050
.. INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50050: starting
.. INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_slave:50050

At this point, the following Java processes should run on master ...

hadoop@master:/usr/local/hadoop$ jps
16017 Jps
14799 NameNode
15686 TaskTracker
14880 DataNode
15596 JobTracker
14977 SecondaryNameNode
hadoop@master:/usr/local/hadoop$

(the process IDs don't matter of course)

... and the following on slave.

hadoop@slave:/usr/local/hadoop$ jps
15183 DataNode
15897 TaskTracker
16284 Jps
hadoop@slave:/usr/local/hadoop$

Stopping the Multi-node cluster

Like starting the cluster, stopping it is done in two steps. The workflow is the opposite of starting, however. First, we stop the MapReduce daemons: the JobTracker is stopped on master, and TaskTracker daemons are stopped on all slaves (here: master and slave). Second, the HDFS daemons are stopped: the NameNode daemon is stopped on master, and DataNode daemons are stopped on all slaves (here: master and slave).

MapReduce daemons

Run the command <HADOOP_INSTALL>/bin/stop-mapred.sh on the JobTracker machine. This will shut down the MapReduce cluster by stopping the JobTracker daemon running on the machine you ran the command on, and TaskTrackers on the machines listed in the conf/slaves file.

In our case, we'll run bin/stop-mapred.sh on master:

hadoop@master:/usr/local/hadoop$ bin/stop-mapred.sh
stopping jobtracker
slave: Ubuntu 8.04
master: stopping tasktracker
slave: stopping tasktracker
hadoop@master:/usr/local/hadoop$

(Note: The output above might suggest that the JobTracker was running and stopped on slave, but you can be assured that the JobTracker ran on master.)

On slave (strangely, and for a reason unknown to me), you will not see any information in the log file <HADOOP_INSTALL>/logs/hadoop-hadoop-tasktracker-slave.log that the TaskTracker daemon has been shut down. However, you can list the running Java processes with jps as a workaround, as you can see below.

At this point, the following Java processes should run on master ...

hadoop@master:/usr/local/hadoop$ jps
14799 NameNode
18386 Jps
14880 DataNode
14977 SecondaryNameNode
hadoop@master:/usr/local/hadoop$

... and the following on slave.

hadoop@slave:/usr/local/hadoop$ jps
15183 DataNode
18636 Jps

HDFS daemons

Run the command <HADOOP_INSTALL>/bin/stop-dfs.sh on the NameNode machine. This will shut down HDFS by stopping the NameNode daemon running on the machine you ran the command on, and DataNodes on the machines listed in the conf/slaves file.

In our case, we'll run bin/stop-dfs.sh on master:

hadoop@master:/usr/local/hadoop$ bin/stop-dfs.sh
stopping namenode
slave: Ubuntu 8.04
slave: stopping datanode
master: stopping datanode
master: stopping secondarynamenode
hadoop@master:/usr/local/hadoop$

(Again, the output above might suggest that the NameNode was running and stopped on slave, but you can be assured that the NameNode ran on master.)

On slave (again strangely, and again for a reason unknown to me), you will not see any information in the log file <HADOOP_INSTALL>/logs/hadoop-hadoop-datanode-slave.log that the DataNode daemon has been shut down. However, you can list the running Java processes with jps as a workaround, as you can see below.

At this point, only the following Java processes should run on master ...

hadoop@master:/usr/local/hadoop$ jps
18670 Jps
hadoop@master:/usr/local/hadoop$

.. and the following on slave.
hadoop@slave:/usr/local/hadoop$ jps
18894 Jps
hadoop@slave:/usr/local/hadoop$

Running a MapReduce job

Just follow the steps described in the section "Running a MapReduce job" of the single-node cluster tutorial.

I recommend however that you use a larger set of input data so that Hadoop will start several Map and Reduce tasks, and in particular, on both master and slave. After all this installation and configuration work, we want to see the job processed by all machines in the cluster, don't we?

Here's the example input data I have used for the multi-node cluster setup described in this tutorial. I added some more Project Gutenberg etexts to the initial three documents mentioned in the single-node cluster tutorial. All etexts should be in plain text ASCII encoding.

The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson
The Notebooks of Leonardo Da Vinci
Ulysses by James Joyce
The Art of War by 6th cent. B.C. Sunzi
The Adventures of Sherlock Holmes by Sir Arthur Conan Doyle
The Devil's Dictionary by Ambrose Bierce
Encyclopaedia Britannica, 11th Edition, Volume 4, Part 3

Download these etexts, copy them to HDFS, run the WordCount example MapReduce job on master, and retrieve the job result from HDFS to your local filesystem.
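A minimal sketch of these steps on master, assuming the etexts were downloaded to /tmp/gutenberg (the local path is just an example):

# copy the local example data into HDFS
hadoop@master:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg gutenberg

# run the WordCount example job across the cluster
hadoop@master:/usr/local/hadoop$ bin/hadoop jar hadoop-0.18.0-examples.jar wordcount gutenberg gutenberg-output

# retrieve the merged job result from HDFS to the local filesystem
hadoop@master:/usr/local/hadoop$ bin/hadoop dfs -getmerge gutenberg-output /tmp/gutenberg-output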

Here's the exemplary output on master ...

hadoop@master:/usr/local/hadoop$ bin/hadoop jar hadoop-0.18.0-examples.jar wordcount gutenberg gutenberg-output
... INFO mapred.FileInputFormat: Total input paths to process : 7
... INFO mapred.JobClient: Running job: job_0001
... INFO mapred.JobClient:  map 0% reduce 0%
... INFO mapred.JobClient:  map 28% reduce 0%
... INFO mapred.JobClient:  map 57% reduce 0%
... INFO mapred.JobClient:  map 71% reduce 0%
... INFO mapred.JobClient:  map 100% reduce 9%
... INFO mapred.JobClient:  map 100% reduce 68%
... INFO mapred.JobClient:  map 100% reduce 100%
... INFO mapred.JobClient: Job complete: job_0001
... INFO mapred.JobClient: Counters: 11
... INFO mapred.JobClient:   org.apache.hadoop.examples.WordCount$Counter
... INFO mapred.JobClient:     WORDS=1173099
... INFO mapred.JobClient:     VALUES=1368295
... INFO mapred.JobClient:   Map-Reduce Framework
... INFO mapred.JobClient:     Map input records=136582
... INFO mapred.JobClient:     Map output records=1173099
... INFO mapred.JobClient:     Map input bytes=6925391
... INFO mapred.JobClient:     Map output bytes=11403568
... INFO mapred.JobClient:     Combine input records=1173099
... INFO mapred.JobClient:     Combine output records=195196
... INFO mapred.JobClient:     Reduce input groups=131275
... INFO mapred.JobClient:     Reduce input records=195196
... INFO mapred.JobClient:     Reduce output records=131275
hadoop@master:/usr/local/hadoop$

... and on slave for its DataNode ...

# from <HADOOP_INSTALL>/logs/hadoop-hadoop-datanode-slave.log on slave
... INFO org.apache.hadoop.dfs.DataNode: Received block blk_5693969390309798974 from /192.168.0.1
... INFO org.apache.hadoop.dfs.DataNode: Received block blk_7671491277162757352 from /192.168.0.1
<<<SNIPP>>>
... INFO org.apache.hadoop.dfs.DataNode: Served block blk_-7112133651100166921 to /192.168.0.2
... INFO org.apache.hadoop.dfs.DataNode: Served block blk_-7545080504225510279 to /192.168.0.2
... INFO org.apache.hadoop.dfs.DataNode: Served block blk_-4114464184254609514 to /192.168.0.2
... INFO org.apache.hadoop.dfs.DataNode: Served block blk_-4561652742730019659 to /192.168.0.2
<<<SNIPP>>>
... INFO org.apache.hadoop.dfs.DataNode: Received block blk_-2075170214887808716 from /192.168.0.2 and mirrored to /192.168.0.1:50010
... INFO org.apache.hadoop.dfs.DataNode: Received block blk_1422409522782401364 from /192.168.0.2 and mirrored to /192.168.0.1:50010
... INFO org.apache.hadoop.dfs.DataNode: Deleting block blk_-2942401177672711226 file /home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/data/current/blk_-2942401177672711226
... INFO org.apache.hadoop.dfs.DataNode: Deleting block blk_-3019298164878756077 file /home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/data/current/blk_-3019298164878756077

... and on slave for its TaskTracker.

# from <HADOOP_INSTALL>/logs/hadoop-hadoop-tasktracker-slave.log on slave
... INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction: task_0001_m_000000_0
... INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction: task_0001_m_000001_0
... task_0001_m_000001_0 0.08362164% hdfs://master:54310/user/hadoop/gutenberg/ulyss12.txt:0+1561677
... task_0001_m_000000_0 0.07951202% hdfs://master:54310/user/hadoop/gutenberg/19699.txt:0+1945731
<<<SNIPP>>>
... task_0001_m_000001_0 0.35611463% hdfs://master:54310/user/hadoop/gutenberg/ulyss12.txt:0+1561677
... Task task_0001_m_000001_0 is done.
... task_0001_m_000000_0 1.0% hdfs://master:54310/user/hadoop/gutenberg/19699.txt:0+1945731
... LaunchTaskAction: task_0001_m_000006_0
... LaunchTaskAction: task_0001_r_000000_0
... task_0001_m_000000_0 1.0% hdfs://master:54310/user/hadoop/gutenberg/19699.txt:0+1945731
... Task task_0001_m_000000_0 is done.
... task_0001_m_000006_0 0.6844295% hdfs://master:54310/user/hadoop/gutenberg/132.txt:0+343695
... task_0001_r_000000_0 0.095238104% reduce > copy (2 of 7 at 1.68 MB/s) >
... task_0001_m_000006_0 1.0% hdfs://master:54310/user/hadoop/gutenberg/132.txt:0+343695
... Task task_0001_m_000006_0 is done.
... task_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at 1.02 MB/s) >
<<<SNIPP>>>
... task_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at 1.02 MB/s) >
... task_0001_r_000000_0 0.23809525% reduce > copy (5 of 7 at 0.32 MB/s) >
... task_0001_r_000000_0 0.6859089% reduce > reduce
... task_0001_r_000000_0 0.7897389% reduce > reduce
... task_0001_r_000000_0 0.86783284% reduce > reduce
... Task task_0001_r_000000_0 is done.
... Received 'KillJobAction' for job: job_0001
... task_0001_r_000000_0 done; removing files.
... task_0001_m_000000_0 done; removing files.
... task_0001_m_000006_0 done; removing files.
... task_0001_m_000001_0 done; removing files.

If you want to inspect the job's output data, just retrieve the job result from HDFS to your local filesystem.

Caveats

java.io.IOException: Incompatible namespaceIDs

If you see the error java.io.IOException: Incompatible namespaceIDs in the logs of a DataNode (<HADOOP_INSTALL>/logs/hadoop-hadoop-datanode-<HOSTNAME>.log), chances are you are affected by bug HADOOP-1212 (well, I have been affected by it at least).

The full error looked like this on my machines:

... ERROR org.apache.hadoop.dfs.DataNode: java.io.IOException: Incompatible namespaceIDs in /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data: namenode namespaceID = 308967713; datanode namespaceID = 113030094

        at org.apache.hadoop.dfs.DataStorage.doTransition(DataStorage.java:281)
        at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:121)
        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:230)
        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:199)
        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:1202)
        at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1146)
        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:1167)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1326)

For more information regarding this issue, read the bug description.

At the moment, there seem to be two workarounds, as described below.

Workaround 1: Start from scratch

I can testify that the following steps solve the error, but the side effects won't make you happy (me neither). The crude workaround I have found is to:

Stop the cluster.

Delete the data directory on the problematic DataNode: the directory is specified by dfs.data.dir in conf/hadoop-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data.

Reformat the NameNode (NOTE: all HDFS data is lost during this process!).

Restart the cluster.
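A minimal sketch of these steps, assuming the paths used in this tutorial (run the delete only on the problematic DataNode):

# on master: stop all cluster daemons
hadoop@master:/usr/local/hadoop$ bin/stop-all.sh

# on the problematic DataNode: delete its data directory (dfs.data.dir)
hadoop@slave:/usr/local/hadoop$ rm -rf /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data

# on master: reformat the NameNode (this erases all HDFS data!) and restart the cluster
hadoop@master:/usr/local/hadoop$ bin/hadoop namenode -format
hadoop@master:/usr/local/hadoop$ bin/start-all.sh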

If deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be OK during the initial setup/testing), you might give the second approach a try.

Workaround 2: Updating the namespaceID of problematic DataNodes

Thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic DataNodes:

Stop the DataNode.

Edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current NameNode.

Restart the DataNode.

If you followed the instructions in my tutorials, the full path of the relevant file is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION (background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir to /usr/local/hadoop-datastore/hadoop-hadoop).

If you wonder how the contents of VERSION look like, here is one of mine:

# contents of <dfs.data.dir>/current/VERSION
namespaceID=393514426
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
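Putting workaround 2 together, a minimal sketch on the problematic DataNode might look like this (the namespaceID value below is only the NameNode's ID from the example error message above; use whatever your own NameNode reports, or read it from the NameNode's <dfs.name.dir>/current/VERSION):

# stop the DataNode daemon on the affected machine
hadoop@slave:/usr/local/hadoop$ bin/hadoop-daemon.sh stop datanode

# set namespaceID to the NameNode's value (308967713 is just the example from above)
hadoop@slave:/usr/local/hadoop$ sed -i 's/^namespaceID=.*/namespaceID=308967713/' \
    /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION

# start the DataNode daemon again
hadoop@slave:/usr/local/hadoop$ bin/hadoop-daemon.sh start datanode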

What's next?

If you feel comfortable, you can continue your Hadoop experience with my tutorial on how to code a simple MapReduce job in the Python programming language, which can serve as the basis for writing your own MapReduce programs.
