Solving a series of problems such as Exception: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z


One. Brief introduction

To debug Hadoop 2 code in Eclipse on Windows, we configured the hadoop-eclipse-plugin-2.6.0.jar plugin under Windows and ran Hadoop code, hitting a series of problems; it took several days before the code would finally run. Let's look at each problem and how to solve it, as a reference for anyone who encounters the same issues.

Hadoop 2's WordCount.java word-count code is as follows:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emit (word, 1) for every token in the input line.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer (also used as combiner): sum the counts for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}


Question one. An internal error occurred during: "Map/Reduce location status updater". java.lang.NullPointerException

We put hadoop-eclipse-plugin-2.6.0.jar into Eclipse's plugins directory (ours is F:\tool\eclipse-jee-juno-SR2\eclipse-jee-juno-SR2\plugins) and restarted Eclipse. Then, opening Window --> Preferences, we could see the Hadoop Map/Reduce option, but clicking it popped up 'An internal error occurred during: "Map/Reduce location status updater". java.lang.NullPointerException', as shown in the figure:

Solve:

We found that Hadoop 2 had not yet created the input and output directories, so we first created the folders on HDFS:

#bin/hdfs dfs -mkdir -p /user/root/input

#bin/hdfs dfs -mkdir -p /user/root/output
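
For reference, the same directories can also be created from Java with the HDFS FileSystem API. This is only a sketch, not a step from the original workflow, and the hdfs://localhost:9000 URI is an assumed NameNode address that you would replace with your own:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MakeDirs {
    public static void main(String[] args) throws Exception {
        // Assumed NameNode address; use the fs.defaultFS of your own cluster.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        try (FileSystem fs = FileSystem.get(conf)) {
            // Equivalent to the two hdfs dfs -mkdir -p commands above.
            fs.mkdirs(new Path("/user/root/input"));
            fs.mkdirs(new Path("/user/root/output"));
        }
    }
}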

We can then see our two directories under Eclipse's DFS Locations, as shown in the figure:

Question two. Exception in thread "main" java.lang.NullPointerException at java.lang.ProcessBuilder.start(Unknown Source)

This error occurred while running Hadoop 2's WordCount.java code:

log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NullPointerException
       at java.lang.ProcessBuilder.start(Unknown Source)
       at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
       at org.apache.hadoop.util.Shell.run(Shell.java:455)
       at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
       at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
       at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
       at ...

Analysis:

The Hadoop 2 (and later) distribution we downloaded does not include winutils.exe in its bin directory.

Solve:

1. Download hadoop-common-2.2.0-bin-master.zip from https://codeload.github.com/srccodes/hadoop-common-2.2.0-bin/zip/master, unpack it, and copy everything under hadoop-common-2.2.0-bin-master\bin into the bin directory of the Hadoop 2 distribution we downloaded. As shown in the figure:

2. In Eclipse, under Window --> Preferences --> Hadoop Map/Reduce, set the Hadoop installation directory to the Hadoop directory we downloaded onto our disk, as shown in the figure:

3. Configure the HADOOP_HOME and PATH environment variables for Hadoop 2, as shown in the figure:
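
If changing the system environment variables is inconvenient (Eclipse has to be restarted to pick them up), a commonly used alternative is to point the Hadoop client at the installation from the driver code via the hadoop.home.dir system property. This is just a sketch, not part of the original steps, and D:/hadoop-2.6.0 is a placeholder for wherever your unpacked Hadoop (with winutils.exe in its bin directory) actually lives:

// Set hadoop.home.dir before any Hadoop classes are used, so that
// org.apache.hadoop.util.Shell can locate bin\winutils.exe under it.
public static void main(String[] args) throws Exception {
    System.setProperty("hadoop.home.dir", "D:/hadoop-2.6.0"); // placeholder path
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    // ... the rest of the WordCount main() shown above is unchanged ...
}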

Question three. Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

After solving the previous problem, this error appears when the WordCount.java code is run:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
       at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
       at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
       at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
       at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
       at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
       at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
       at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
       at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
       at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)

Analysis:

hadoop.dll is missing from C:\Windows\System32; this file needs to be copied into C:\Windows\System32.

Solve:

Put hadoop.dll from the bin directory of hadoop-common-2.2.0-bin-master into C:\Windows\System32 and then restart the computer. But it may not be that simple; the same problem can still appear.

So we continue the analysis:

Looking at the failing frame at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557), we open line 557 of the NativeIO class, as shown in the figure:

This Windows-only method checks whether the current process has the requested access rights to the given path. To grant that access for local debugging, we modify the source so that it simply returns true. Download the corresponding Hadoop source (hadoop-2.6.0-src.tar.gz), unpack it, copy hadoop-2.6.0-src\hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\io\nativeio\NativeIO.java into the corresponding Eclipse project, and then change line 557 to return true, as shown in the figure:
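
For reference, after the edit the method ends up looking roughly like the sketch below (based on the NativeIO.java of Hadoop 2.6.0; the surrounding code may differ slightly in other versions, and this is a local-debugging workaround only, not a proper fix):

// In the copied org/apache/hadoop/io/nativeio/NativeIO.java, inside the
// Windows inner class, around line 557: bypass the native access0 check.
public static boolean access(String path, AccessRight desiredAccess)
    throws IOException {
  return true;
  // original: return access0(path, desiredAccess.accessRight());
}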

Question four. org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x

This problem occurs when we execute the WordCount.java code:

2014-12-18 16:03:24,092 WARN (org.apache.hadoop.mapred.LocalJobRunner:560) - job_local374172562_0001
org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6494)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6446)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4248)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4218)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)

Analysis:

The user we run as (zhengcy) does not have write access to the output directory /user/root/output on HDFS.

Solve:

The HDFS directories are configured in the hdfs-site.xml file, which I introduced in the article on pseudo-distributed Hadoop deployment; let's review it here, as shown in the figure:

We add the following in hdfs-site.xml under etc/hadoop:

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

This setting disables permission checking, but we must not set it this way on a production server.
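
An alternative that leaves permission checking enabled (not covered in the original text, just a common option on development clusters that use simple authentication) is to have the client act as the HDFS user that owns the directory, for example root, before the job is submitted:

// Run the job as the HDFS user "root" instead of the local Windows user.
// Works only with simple authentication; do not rely on this in production.
public static void main(String[] args) throws Exception {
    System.setProperty("HADOOP_USER_NAME", "root");
    Configuration conf = new Configuration();
    // ... build and submit the WordCount job as before ...
}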

Question five. File /usr/root/input/file01._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation

As shown in the figure:

Analysis:

The first time, we executed #hadoop namenode -format and then #sbin/start-all.sh.

Running #jps at that point showed the DataNode; but after executing #hadoop namenode -format again and restarting, #jps no longer shows the DataNode, as shown in the figure:

Then, when we try to put the text files into the input directory by executing bin/hdfs dfs -put /usr/local/hadoop/hadoop-2.6.0/test/* /user/root/input (uploading the /test/* files into /user/root/input on HDFS), this problem appears.

Solve:

The cause is that we ran hadoop namenode -format too many times, so the DataNode's stored identity no longer matches the newly formatted NameNode. Delete the NameNode and DataNode storage directories that our hdfs-site.xml configuration points to, then format the NameNode once more and restart HDFS.
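
For reference, the directories to clear are the ones named by properties like the following in hdfs-site.xml (the paths below are only example values; clear whatever directories your own configuration actually points to):

<!-- Example values only; use the paths from your own hdfs-site.xml. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop/hadoop-2.6.0/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop/hadoop-2.6.0/dfs/data</value>
</property>

After deleting those directories, run hadoop namenode -format once and start HDFS again with sbin/start-all.sh; #jps should then show the DataNode.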
