Original address: https://www.cnblogs.com/justinzhang/p/4983673.html
This article looks at a problem that comes up when you package a project's Maven dependencies together into a single jar. When a jar built with maven-assembly-plugin is referenced from another project, the following error is reported:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:156)
    at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:153)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:153)
    at com.cetc.di.HdfsFileSystem.<init>(HdfsFileSystem.java:41)
    at callhdfs.Main.main(Main.java:11)
Yet the same code runs fine when the project references the individual dependency jars instead of the fat jar. After a good deal of observation and analysis, the cause turned out to involve the two jars behind Hadoop's FileSystem:
hadoop-hdfs-2.7.1.jar and hadoop-common-2.7.1.jar.
Both jars contain a services directory under META-INF, and in both of them that directory contains a file with the same name: org.apache.hadoop.fs.FileSystem.
When maven-assembly-plugin builds the fat jar, it unpacks every dependency and then repacks everything into one jar, so files with identical paths overwrite one another. Inspecting the resulting jar shows what survived:
maven-assembly-plugin (the Fat Jar tool behaves the same way) kept the services content from hadoop-common.jar in the final jar and overwrote the services content from hadoop-hdfs.jar. And our function calls are written this way:
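(The original post showed the calling code as a screenshot, which is lost in this copy.) A minimal sketch of that kind of call, with the hdfs://ip:port address as a placeholder and illustrative class names, assuming the Hadoop 2.7 client API:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsFileSystem {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // FileSystem.get resolves the hdfs:// scheme through the entries in
        // META-INF/services/org.apache.hadoop.fs.FileSystem. If the fat jar
        // lost the hadoop-hdfs entries, this line throws
        // java.io.IOException: No FileSystem for scheme: hdfs
        FileSystem fs = FileSystem.get(URI.create("hdfs://ip:port/"), conf);
        System.out.println(fs.getUri());
    }
}
```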
The function uses the hdfs://ip:port scheme, and no implementation for that scheme can be found in the resulting final jar, so it throws:
java.io.IOException: No FileSystem for scheme: hdfs
The solution is to explicitly specify the implementation class, org.apache.hadoop.hdfs.DistributedFileSystem, when setting up the Hadoop Configuration:
configuration.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
After repackaging, everything works fine.
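Pulled together, the working initialization looks roughly like this (the variable names and the hdfs://ip:port address are placeholders, assuming the Hadoop 2.7 client API):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsClient {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Map the hdfs:// scheme to its implementation explicitly, so the
        // lookup no longer depends on the META-INF/services file that the
        // fat-jar build overwrote.
        configuration.set("fs.hdfs.impl",
                "org.apache.hadoop.hdfs.DistributedFileSystem");
        FileSystem fs = FileSystem.get(URI.create("hdfs://ip:port/"), configuration);
        System.out.println(fs.getUri());
    }
}
```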
The files under META-INF/services in a jar belong to Java's service-provider mechanism (java.util.ServiceLoader).
For a detailed introduction, see the official Java documentation: http://docs.oracle.com/javase/7/docs/api/java/util/ServiceLoader.html
For a concrete walkthrough of how ServiceLoader works, see http://www.concretepage.com/java/serviceloader-java-example
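The overwrite that maven-assembly-plugin performs, versus the merge that a service-aware fat-jar build would need, can be simulated with plain file operations (the directory layout is illustrative, and each service file is abbreviated to one entry; the real files list several implementations):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;

public class ServiceFileMergeDemo {
    // The service file name that both Hadoop jars carry.
    static final String SVC = "META-INF/services/org.apache.hadoop.fs.FileSystem";

    public static void main(String[] args) throws IOException {
        Path work = Files.createTempDirectory("fatjar-demo");

        // Stand-ins for the service files shipped in the two jars.
        Path hdfs = work.resolve("hadoop-hdfs").resolve(SVC);
        Files.createDirectories(hdfs.getParent());
        Files.write(hdfs, Arrays.asList("org.apache.hadoop.hdfs.DistributedFileSystem"));

        Path common = work.resolve("hadoop-common").resolve(SVC);
        Files.createDirectories(common.getParent());
        Files.write(common, Arrays.asList("org.apache.hadoop.fs.LocalFileSystem"));

        // What the unpack-and-repack step effectively does: the last file
        // copied to the same path wins, so the hdfs entry disappears.
        Path fat = work.resolve("fat-jar").resolve(SVC);
        Files.createDirectories(fat.getParent());
        Files.copy(hdfs, fat);
        Files.copy(common, fat, StandardCopyOption.REPLACE_EXISTING);
        System.out.println("after overwrite: " + Files.readAllLines(fat));

        // What a service-aware merge should do instead: concatenate the
        // entries, so every scheme stays resolvable.
        Files.write(fat, Files.readAllLines(common));
        Files.write(fat, Files.readAllLines(hdfs), StandardOpenOption.APPEND);
        System.out.println("after merge: " + Files.readAllLines(fat));
    }
}
```

After the overwrite only the LocalFileSystem entry is left, which is exactly why the hdfs scheme became unresolvable in the fat jar; the concatenated file keeps both.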