Hadoop 2.2.0 and HBase 0.98: Installing Snappy


1. Install the required dependencies and software

The dependency packages that need to be installed are:

gcc, g++, autoconf, automake, libtool

The supporting software that needs to be installed is:

Java 6 and Maven

For the dependency packages above: on Ubuntu, install them with sudo apt-get install; on CentOS, install them with sudo yum install.
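For example, the full dependency set can be installed in one line (a sketch; exact package names can vary by distribution and release):

$ sudo apt-get install gcc g++ autoconf automake libtool   # Ubuntu
$ sudo yum install gcc gcc-c++ autoconf automake libtool   # CentOS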

For installing the supporting Java and Maven, refer to the post "Linux Java, maven, Tomcat installation".

2. Download snappy-1.1.2

Download addresses:

Address one: https://code.google.com/p/snappy/wiki/Downloads?tm=2

Address two: http://download.csdn.net/detail/iam333/7725883

3. Compile and install Snappy

After downloading, unpack the archive to a folder; here we assume it is unpacked into the home directory. Then execute the following commands:

$ cd ~/snappy-1.1.2
$ ./configure
$ make
$ sudo make install
Then execute the following command to see if the installation was successful.

$ cd /usr/local/lib
$ ll libsnappy.*
-rw-r--r-- 1 root root 233506 7 11:56 libsnappy.a
-rwxr-xr-x 1 root root    953 7 11:56 libsnappy.la
lrwxrwxrwx 1 root root     18 7 11:56 libsnappy.so -> libsnappy.so.1.2.1
lrwxrwxrwx 1 root root     18 7 11:56 libsnappy.so.1 -> libsnappy.so.1.2.1
-rwxr-xr-x 1 root root 147758 7 11:56 libsnappy.so.1.2.1
If no errors occurred during installation and /usr/local/lib contains the files above, the installation succeeded.
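If a later build step fails to find libsnappy at link or load time, refreshing the dynamic linker cache sometimes helps (an optional extra step, not always required):

$ sudo ldconfig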

4. Compile the hadoop-snappy source code

1) Download the source code (two ways)

a. Install SVN: on Ubuntu, use sudo apt-get install subversion; on CentOS, use sudo yum install subversion.

b. Check the source code out of Google's SVN repository with the following command:

$ svn checkout http://hadoop-snappy.googlecode.com/svn/trunk/ hadoop-snappy
This copies the hadoop-snappy source code into a hadoop-snappy directory under the directory where the command is executed.

However, since Google's services are often unreachable from mainland China, you can also download the source directly: http://download.csdn.net/detail/iam333/7726023

2) Compile the hadoop-snappy source code

Switch to the hadoop-snappy source directory and execute the appropriate command:

a. If Snappy was installed to the default path above, the command is:

$ mvn package
b. If Snappy was installed to a custom path, the command is:

$ mvn package [-Dsnappy.prefix=SNAPPY_INSTALLATION_DIR]
where SNAPPY_INSTALLATION_DIR is the Snappy installation path.
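For example, with a hypothetical install prefix of /home/user/snappy:

$ mvn package -Dsnappy.prefix=/home/user/snappy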

Issues that may occur during the compilation process:

a) /root/modules/hadoop-snappy/maven/build-compilenative.xml:62: Execute failed: java.io.IOException: Cannot run program "autoreconf" (in directory "/root/modules/hadoop-snappy/target/native-src"): java.io.IOException: error=2, No such file or directory

Solution: the error says a file is missing, yet that file lives under target and is generated automatically during compilation, so it shouldn't exist beforehand. Why the complaint, then? The root cause is not a missing file but a missing prerequisite: hadoop-snappy needs autoreconf, which comes with the autoconf/automake/libtool packages. Install the dependency packages listed in section 1.

b) The following error message appears:

Solution: the official hadoop-snappy documentation only says GCC is required, without specifying a version. In fact, hadoop-snappy needs GCC 4.4; if the default GCC is newer than 4.4, this error appears.

Assuming the system is CentOS, use the following commands (note: on Ubuntu, change sudo yum install to sudo apt-get install):

$ sudo yum install gcc-4.4
$ sudo rm /usr/bin/gcc
$ sudo ln -s /usr/bin/gcc-4.4 /usr/bin/gcc
Use the following command to see if the substitution succeeded:

$ gcc --version
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3)
Copyright (C) Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
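On Ubuntu, a safer variant than deleting /usr/bin/gcc is to switch compilers with update-alternatives (a sketch, assuming the gcc-4.4 package is available):

$ sudo apt-get install gcc-4.4
$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.4 100
$ sudo update-alternatives --config gcc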
c) The following error message appears:

[exec] /bin/bash ./libtool --tag=CC --mode=link gcc -g -Wall -fPIC -O2 -m64 -g -O2 -version-info 0:1:0 -L/usr/local//lib -o libhadoopsnappy.la -rpath /usr/local/lib src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.lo src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.lo -ljvm -ldl
[exec] /usr/bin/ld: cannot find -ljvm
[exec] collect2: ld returned 1 exit status
[exec] make: *** [libhadoopsnappy.la] Error 1
[exec] libtool: link: gcc -shared -fPIC -DPIC src/org/apache/hadoop/io/compress/snappy/.libs/SnappyCompressor.o src/org/apache/hadoop/io/compress/snappy/.libs/SnappyDecompressor.o -L/usr/local//lib -ljvm -ldl -O2 -m64 -O2 -Wl,-soname -Wl,libhadoopsnappy.so.0 -o .libs/libhadoopsnappy.so.0.0.1
This happens because libjvm.so from the installed JVM is not symlinked into /usr/local/lib. On a 64-bit system, you can look under the JDK's jre/lib/amd64/server/ directory (for example /root/bin/jdk1.6.0_37/jre/lib/amd64/server/) to see where libjvm.so lives, then link it in with a command such as:

$ sudo ln -s /usr/local/jdk1.6.0_45/jre/lib/amd64/server/libjvm.so /usr/local/lib/
This resolves the problem.
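To confirm the link (assuming the JDK path used above):

$ ls -l /usr/local/lib/libjvm.so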


5. Configure Snappy for Hadoop 2.2.0

After hadoop-snappy compiles successfully, files are generated in the target directory under the hadoop-snappy directory, including one named hadoop-snappy-0.0.1-SNAPSHOT.tar.gz.

1) Unpack hadoop-snappy-0.0.1-SNAPSHOT.tar.gz under target and copy the native lib files.
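A sketch of the unpack step, assuming the hadoop-snappy checkout from section 4 sits in the home directory:

$ tar -xzf ~/hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT.tar.gz -C ~/hadoop-snappy/target/

Then copy the lib files: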

$ sudo cp -r ~/hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT/lib/native/Linux-amd64-64/* $HADOOP_HOME/lib/native/Linux-amd64-64/
2) Copy hadoop-snappy-0.0.1-SNAPSHOT.jar under target to $HADOOP_HOME/lib.

3) Configure $HADOOP_HOME/etc/hadoop/hadoop-env.sh, adding:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
4) Configure $HADOOP_HOME/etc/hadoop/mapred-site.xml. The compression-related configuration options in this file are:
<property>
  <name>mapred.output.compress</name>
  <value>false</value>
  <description>Should the job outputs be compressed?</description>
</property>
<property>
  <name>mapred.output.compression.type</name>
  <value>RECORD</value>
  <description>If the job outputs are to be compressed as SequenceFiles, how should
  they be compressed? Should be one of NONE, RECORD or BLOCK.</description>
</property>
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the job outputs are compressed, how should they be compressed?</description>
</property>
<property>
  <name>mapred.compress.map.output</name>
  <value>false</value>
  <description>Should the outputs of the maps be compressed before being sent
  across the network. Uses SequenceFile compression.</description>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>If the map outputs are compressed, how should they be compressed?</description>
</property>
Configure these according to your needs. The available codec types are registered as follows:

<property>
  <name>io.compression.codecs</name>
  <value>
    org.apache.hadoop.io.compress.GzipCodec,
    org.apache.hadoop.io.compress.DefaultCodec,
    org.apache.hadoop.io.compress.BZip2Codec,
    org.apache.hadoop.io.compress.SnappyCodec
  </value>
</property>
SnappyCodec is the Snappy compression codec.
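For example, to compress only the intermediate map output with Snappy, override the two map-output properties listed above (a minimal sketch):

<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>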

5) After the configuration is ready, restart the Hadoop cluster.
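A restart sketch, assuming the cluster is managed by the standard Hadoop 2.2.0 control scripts under $HADOOP_HOME:

$ sbin/stop-yarn.sh && sbin/stop-dfs.sh
$ sbin/start-dfs.sh && sbin/start-yarn.sh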

6. Configure Snappy for HBase 0.98

1) Set up the lib files in HBase's lib/native/Linux-amd64-64/. For simplicity, all we need to do is copy the lib files under $HADOOP_HOME/lib/native/Linux-amd64-64/ to the corresponding HBase directory:

$ sudo cp -r $HADOOP_HOME/lib/native/Linux-amd64-64/* $HBASE_HOME/lib/native/Linux-amd64-64/
2) Configure the HBase environment variables in hbase-env.sh:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
export CLASSPATH=$CLASSPATH:$HBASE_LIBRARY_PATH
Note: don't forget to set HADOOP_HOME and HBASE_HOME at the beginning of hbase-env.sh.

3) Once configured, restart HBase.
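For example, with the standard control scripts under $HBASE_HOME:

$ bin/stop-hbase.sh
$ bin/start-hbase.sh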

4) Verify that the installation is successful

From the HBase installation directory, start the HBase shell:

$ bin/hbase shell
2014-08-07 15:11:35,874 INFO ...
Then create a table with Snappy compression enabled on a column family.
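A command consistent with the describe output shown next:

hbase(main):001:0> create 'test_snappy', {NAME => 'cf', COMPRESSION => 'SNAPPY'}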

To view the created test_snappy table:

hbase(main):002:0> describe 'test_snappy'
DESCRIPTION                                                                 ENABLED
 'test_snappy', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER   true
 => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION =>
 'SNAPPY', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS =>
 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row(s) in 0.0420 seconds
As you can see, COMPRESSION => 'SNAPPY'.

Next, try inserting the data:

hbase(main):003:0> put 'test_snappy', 'key1', 'cf:q1', 'value1'
0 row(s) in 0.0790 seconds
Then scan the test_snappy table:

hbase(main):004:0> scan 'test_snappy'
ROW                          COLUMN+CELL
 key1                        column=cf:q1, timestamp=1407395814255, value=value1
If all of the above commands execute correctly, the configuration works.

Error Resolution:

a) After configuration, the following exception occurs when starting HBase:

WARN  [main] util.CompressionTest: Can't instantiate codec: snappy
java.io.IOException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
        at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:96)
        at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:62)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.checkCodecs(HRegionServer.java:660)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:538)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
This means the native Snappy libraries are still not set up correctly; re-check the configuration in hbase-env.sh.
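You can also exercise the codec directly with HBase's CompressionTest utility (the same class that appears in the stack trace above), using an arbitrary scratch path:

$ bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy

If this completes without an UnsatisfiedLinkError, the native Snappy libraries are loadable.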

When reprinting, please cite the source: http://blog.csdn.net/iAm333
