The stupidest way to solve the error "java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path" when using snappy compression


I previously wrote an article about this: http://blog.csdn.net/stark_summer/article/details/47361603. At that time, snappy compression did not work for Spark in our Linux environment, and today a colleague hit the same error running Hive on Hadoop with snappy compression, so I decided this problem had to be solved once and for all.

As far as I can tell, the only fix is the most stupid one: put the libsnappyjava.so file under $JAVA_HOME/jre/lib/amd64/, a directory the JVM already searches for native libraries.
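Before copying anything, you can confirm which directories the JVM actually searches. A minimal check (a sketch, assuming a JDK 7+ java binary; the settings dump is printed to stderr, hence the redirect):

java -XshowSettings:properties -version 2>&1 | grep java.library.path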

The procedure is as follows:

First copy $HADOOP_HOME/share/hadoop/common/lib/snappy-java-1.1.1.7.jar to a temporary directory, then unzip snappy-java-1.1.1.7.jar.
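A minimal sketch of this step, assuming the jar sits at the usual $HADOOP_HOME location (adjust the path and version number to whatever your cluster actually ships):

mkdir -p /tmp/snappy-java
cp $HADOOP_HOME/share/hadoop/common/lib/snappy-java-1.1.1.7.jar /tmp/snappy-java/
cd /tmp/snappy-java
unzip snappy-java-1.1.1.7.jar    # a jar is just a zip archive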

Decompression produces the following directories:

META-INF/
org/

Enter the directory where libsnappyjava.so is located:

$ cd org/xerial/snappy/native/Linux/x86_64/

There you will see the following file:

libsnappyjava.so

Copy it to $JAVA_HOME/jre/lib/amd64/.
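For example (a sketch, assuming a 64-bit JRE; a 32-bit JVM would use a different directory such as jre/lib/i386):

cp libsnappyjava.so $JAVA_HOME/jre/lib/amd64/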

The test program is as follows:

import org.xerial.snappy.Snappy;

/**
 * Created by Stark_summer on 15/8/8.
 */
public class TestSnappy {
    public static void main(String[] args) throws Exception {
        String input = "Hello snappy-java! Snappy-java is a JNI-based wrapper of "
                + "Snappy, a fast compresser/decompresser.";
        byte[] compressed = Snappy.compress(input.getBytes("UTF-8"));
        byte[] uncompressed = Snappy.uncompress(compressed);
        String result = new String(uncompressed, "UTF-8");
        System.out.println(result);
    }
}


Compilation: javac -classpath ./snappy-java-1.1.1.7.jar TestSnappy.java

Execution: java -classpath .:./snappy-java-1.1.1.7.jar TestSnappy

Execution Result:
Hello snappy-java! Snappy-java is a JNI-based wrapper of Snappy, a fast compresser/decompresser.

Copy the "libsnappyjava.so" file to $JAVA_HOME/jre/lib/amd64/ on every node of the Hadoop cluster, and Hive on Hadoop no longer reports the error when using snappy compression. Done.
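A minimal sketch for pushing the file to every node, assuming passwordless SSH and the same $JAVA_HOME on all machines (node1, node2, node3 are hypothetical hostnames; substitute your cluster's own):

for host in node1 node2 node3; do
    scp libsnappyjava.so "$host":$JAVA_HOME/jre/lib/amd64/
done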

In $SPARK_HOME/conf/spark-defaults.conf, comment out or remove the spark.io.compression.codec lzf line; Spark will then fall back to its default snappy compression and no longer report the error.
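For reference, the relevant fragment of spark-defaults.conf would look like this after the change (a sketch; the leading # disables the setting):

# spark.io.compression.codec    lzf
# with the property unset, Spark falls back to its default codec (snappy)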


PS:

This is the dumbest way to fix it for now; the root cause still needs further troubleshooting.




Copyright notice: This is the blogger's original article and may not be reproduced without the blogger's permission.

