Solve Exception: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z and other issues

Source: Internet
Author: User
Tags: hdfs, dfs


I. Introduction

To debug Hadoop2 code from Eclipse on Windows, we configured the hadoop-eclipse-plugin-2.6.0.jar plug-in in Eclipse. A series of problems appeared when running the Hadoop code, and after several days the code finally ran. Below is a look at each problem and how it was solved, as a reference for anyone who hits the same issues.

The Hadoop2 WordCount.java word-count code is as follows:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}


Problem 1: An internal error occurred during: "Map/Reduce location status updater". java.lang.NullPointerException

We put hadoop-eclipse-plugin-2.6.0.jar under the Eclipse plugins directory (our Eclipse directory is F:\tool\eclipse-jee-juno-SR2\eclipse-jee-juno-SR2\plugins) and restarted Eclipse. After opening Window --> Preferences, the Hadoop Map/Reduce option appears, but then the error shows up: An internal error occurred during: "Map/Reduce location status updater". java.lang.NullPointerException:

Solution:

The newly configured and deployed Hadoop2 does not yet have the input and output directories, so first create them on HDFS:

# bin/hdfs dfs -mkdir -p /user/root/input

# bin/hdfs dfs -mkdir -p /user/root/output
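
To confirm the directories exist, they can also be listed from the command line, for example:

# bin/hdfs dfs -ls /user/root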

After refreshing, these two directories also appear under DFS Locations in Eclipse.

Problem 2: Exception in thread "main" java.lang.NullPointerException at java.lang.ProcessBuilder.start(Unknown Source)

This error occurs when running the Hadoop2 WordCount.java code:

log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NullPointerException
       at java.lang.ProcessBuilder.start(Unknown Source)
       at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
       at org.apache.hadoop.util.Shell.run(Shell.java:455)
       at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
       at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
       at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
       at

Analysis:

When the job is submitted on Windows, winutils.exe cannot be found in the bin directory of the local Hadoop2 installation.

Solution:

1. Download winutils.exe (it ships in the hadoop-common-2.2.0-bin-master package used below) and put it into the bin directory of the local Hadoop2 installation.

2. In Eclipse, under Window --> Preferences --> Hadoop Map/Reduce, set the Hadoop installation directory to the directory on disk where it was downloaded:

 

3. Configure the HADOOP_HOME environment variable and add its bin directory to Path, as follows:
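
A minimal sketch of the two settings, assuming Hadoop 2.6.0 is unpacked at D:\hadoop-2.6.0 (a hypothetical path; substitute your own installation directory):

HADOOP_HOME=D:\hadoop-2.6.0
Path=%Path%;%HADOOP_HOME%\bin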

 

Problem 3: Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

After solving problem 2 and running the WordCount.java code again, this problem occurs:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
       at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
       at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
       at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
       at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
       at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
       at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
       at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
       at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
       at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)

Analysis:

hadoop.dll is missing from C:\Windows\System32, so it needs to be copied there.

Solution:

Copy hadoop.dll from the bin directory of hadoop-common-2.2.0-bin-master into C:\Windows\System32, and then restart the computer.
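
For reference, a sketch of the copy step from an administrator command prompt, assuming the package was unpacked to D:\hadoop-common-2.2.0-bin-master (a hypothetical path; use your own):

copy D:\hadoop-common-2.2.0-bin-master\bin\hadoop.dll C:\Windows\System32\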

 

Unfortunately it may not be that simple; the same error can still appear, so we continue the analysis:

The stack trace points at org.apache.hadoop.io.nativeio.NativeIO$Windows.access (NativeIO.java:557), so let's look at line 557 of the NativeIO class:

 

This Windows method checks whether the current process has the requested access rights to the given path. To unblock local debugging, we can simply grant access by modifying the source so it returns true. Download the Hadoop source (hadoop-2.6.0-src.tar.gz), unpack it, copy NativeIO.java from hadoop-2.6.0-src\hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\io\nativeio into the corresponding package of the Eclipse project, and then change line 557 to return true:
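A sketch of the modified method in the copied NativeIO.java (the signature matches the Hadoop 2.6.0 source; the exact line number may differ in other versions):

// Inside the Windows inner class of org.apache.hadoop.io.nativeio.NativeIO,
// copied into the local Eclipse project.
public static boolean access(String path, AccessRight desiredAccess)
    throws IOException {
  // Original line 557 delegated to the native check:
  //   return access0(path, desiredAccess.accessRight());
  // For local Windows debugging only, grant access unconditionally:
  return true;
}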


Problem 4: org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x

This problem occurs when running the WordCount.java code:

2014-12-18 16:03:24,092 WARN (org.apache.hadoop.mapred.LocalJobRunner:560) - job_local374172562_0001
org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6494)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6446)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4248)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4218)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)

Analysis:

The Windows user running the job (zhengcy) does not have write permission on the /user/root/output directory on HDFS, which is owned by root.

Solution:

HDFS is configured through hdfs-site.xml, the same file that sets where HDFS stores its data. I introduced it in the Hadoop pseudo-distributed deployment article, and we review it here.

Add the following to hdfs-site.xml under etc/hadoop:

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

This allows the write even though the user has no permission, but it must not be set this way on a production server.
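
On a real cluster, a safer alternative (a sketch, not from the original article) is to grant the submitting user access to the target directory instead of disabling permission checking, for example using the user name from the error message above:

# bin/hdfs dfs -chown -R zhengcy /user/root/output

or, more coarsely:

# bin/hdfs dfs -chmod -R 777 /user/root/output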

Problem 5: File /usr/root/input/file01._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation


Analysis:

The first time, we executed # hadoop namenode -format and then # sbin/start-all.sh; running # jps at that point shows the DataNode. But after executing # hadoop namenode -format again, # jps no longer shows the DataNode.

Then, when we try to put the text files into the input directory by executing bin/hdfs dfs -put /usr/local/hadoop/hadoop-2.6.0/test/* /user/root/input to upload the test/* files to /user/root/input on HDFS, this problem occurs.

Solution:

We ran hadoop namenode -format too many times, creating multiple namespace IDs, so the DataNode's stored ID no longer matches the NameNode's. Delete the DataNode and NameNode storage directories configured in hdfs-site.xml, then format and start HDFS again.
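
A sketch of the recovery steps, assuming the NameNode and DataNode directories from hdfs-site.xml (dfs.namenode.name.dir and dfs.datanode.data.dir) live under /usr/local/hadoop/hadoop-2.6.0/dfs, a hypothetical location; use the paths from your own configuration:

# sbin/stop-all.sh
# rm -rf /usr/local/hadoop/hadoop-2.6.0/dfs/name/* /usr/local/hadoop/hadoop-2.6.0/dfs/data/*
# bin/hadoop namenode -format
# sbin/start-all.sh
# jps

After this, jps should show the DataNode again.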






