I. Install Protobuf
On an Ubuntu system:
1. Create a file named libprotobuf.conf in the /etc/ld.so.conf.d/ directory containing the line /usr/local/lib, then run ldconfig. Otherwise you will get an error such as: error while loading shared libraries: libprotoc.so.8: cannot open shared object file
2. Run ./configure && make && make install
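For reference, the whole sequence looks like this (a sketch assuming the protobuf 2.5.0 source tarball has already been downloaded and that you are running as root):
$ tar -xzf protobuf-2.5.0.tar.gz
$ cd protobuf-2.5.0
$ ./configure
$ make && make install
$ echo "/usr/local/lib" > /etc/ld.so.conf.d/libprotobuf.conf
$ ldconfig    # refresh the shared-library cache so libprotoc.so.8 can be found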
3. Verify that the installation succeeded:
protoc --version
libprotoc 2.5.0
II. Install the Snappy native library
http://www.filewatcher.com/m/snappy-1.1.1.tar.gz.1777992-0.html
Download snappy-1.1.1.tar.gz
Extract it, then run ./configure followed by make && make install.
Check that /usr/local/lib now contains:
libsnappy.a
libsnappy.la
libsnappy.so
libsnappy.so.1
libsnappy.so.1.2.0
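A quick way to confirm:
$ ls -l /usr/local/lib/libsnappy*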
III. Compile the CDH Hadoop source code (to add Snappy support)
Download link: http://archive.cloudera.com/cdh5/cdh/5/
hadoop-2.6.0-cdh5.11.0-src.tar.gz
Extract it and compile with Maven.
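A typical invocation for a native build with Snappy support (the -Pdist,native, -Drequire.snappy, -Dsnappy.lib, and -Dbundle.snappy options are documented in Hadoop's BUILDING.txt; the snappy path assumes the installation from section II):
$ mvn clean package -Pdist,native -DskipTests -Dtar -Drequire.snappy -Dsnappy.lib=/usr/local/lib -Dbundle.snappy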
4. Check the build output under
Hadoop-2.6.0-cdh5.11.0/hadoop-dist/target/hadoop-2.6.0-cdh5.11.0/lib/native
Verify that this directory contains both the Hadoop native library and the Snappy native library.
5. Copy the files in this directory to the lib/native directory under Hadoop on every node of the cluster, and to the lib/native/Linux-amd64-64 directory under HBase (create the directories if they do not exist). Every node needs the copy. For example:
cp ~/apk/hadoop-2.6.0-cdh5.11.0/hadoop-dist/target/hadoop-2.6.0-cdh5.11.0/lib/native/* ~/app/hadoop/lib/native/
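The corresponding copy for HBase would look like this (assuming HBase lives at ~/app/hbase; create the target directory first if it does not exist):
$ mkdir -p ~/app/hbase/lib/native/Linux-amd64-64
$ cp ~/apk/hadoop-2.6.0-cdh5.11.0/hadoop-dist/target/hadoop-2.6.0-cdh5.11.0/lib/native/* ~/app/hbase/lib/native/Linux-amd64-64/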
6. Synchronize the native libraries to the other nodes.
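One way to do this (hypothetical node names; adjust the paths to your layout):
$ for node in slave1 slave2; do scp -r ~/app/hadoop/lib/native/* $node:~/app/hadoop/lib/native/; done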
7. Configure Hadoop's core-site.xml
Add the following:
<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
Configure mapred-site.xml
Add the following:
<property>
<name>mapreduce.map.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.map.output.compress.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
<name>mapreduce.admin.user.env</name>
<value>LD_LIBRARY_PATH=/home/hadoop/app/hadoop/lib/native</value>
</property>
Configure HBase's hbase-site.xml
Add the following:
<property>
<name>hbase.block.data.cachecompressed</name>
<value>true</value>
</property>
8. Restart HDFS and YARN.
9. Verify that Snappy support works:
hadoop checknative
18/03/07 17:33:36 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
18/03/07 17:33:36 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /home/hadoop/app/hadoop/lib/native/libhadoop.so
zlib:    true /lib/x86_64-linux-gnu/libz.so.1
snappy:  true /home/hadoop/app/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:10301
bzip2:   false
openssl: true /usr/lib/x86_64-linux-gnu/libcrypto.so
The output shows that Snappy is now supported.
Run a MapReduce job to confirm:
hadoop jar ~/app/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.11.0.jar wordcount /input/gisdata /output
If it runs correctly, Snappy is working. If instead you get
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
check the native library configuration (mapreduce.admin.user.env) in mapred-site.xml.
10. Start HBase.
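Before creating a table, you can sanity-check that HBase can load the Snappy codec with the CompressionTest utility that ships with HBase (the file path is an arbitrary scratch location):
$ hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-check.txt snappy
It should finish with SUCCESS.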
Create a Snappy table first:
create 'snappyTest', {NAME => 'f', COMPRESSION => 'SNAPPY'}
describe 'snappyTest'
If the describe output shows COMPRESSION => 'SNAPPY' in the column family description (alongside attributes such as TTL => 'FOREVER' and MIN_VERSIONS => '0'), Snappy is enabled for the table.
The key point is compressing an existing table, which can be done from outside the HBase shell:
$ echo "Disable ' SnappyTest2 '" | HBase Shell #禁用表
$ echo "desc ' snappyTest2 '" | HBase Shell #查看表结构
$ echo "Alter ' snappyTest2 ',{name=> ' f ', COMPRESSION = ' SNAPPY '}" | HBase Shell #压缩修改为snappy
$ echo "Enable ' SnappyTest2 '" | HBase Shell #使用该表
$ echo "Major_compact ' SnappyTest2 '" | HBase Shell #最好使该表的region Compact Once
You can also run these commands by hand inside the HBase shell. After the compaction you will find the data reaches a compression ratio of roughly 40%.
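To see the effect, you can compare the table's on-disk size in HDFS before and after the major compaction (the /hbase/data/default path assumes the default hbase.rootdir layout of the HBase version shipped with CDH 5):
$ hdfs dfs -du -h /hbase/data/default/snappyTest2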
To create a Snappy-compressed HBase table from Java code, you only need:
HColumnDescriptor hColumnDesc = new HColumnDescriptor("data");
hColumnDesc.setCompressionType(Compression.Algorithm.SNAPPY); // this line is the key