Regarding the selection of compression formats for Hadoop HDFS files, we ran tests on a large amount of real Track data and reached the following conclusions:
1. The system's default codec (DefaultCodec) is better than GZIP in both compression performance and compression ratio. This contradicts some views found online, where many people claim GZIP compresses better; the discrepancy is presumably related to the Cloudera distribution we use and the characteristics of our Track data.
2. Hive's RCFile format is better than SequenceFile (at both RECORD and BLOCK compression levels) in compression ratio, compression efficiency, and query efficiency.
3. All compressed files can be decompressed back into normal TEXT files, though the result is slightly larger than the original, possibly because of row reorganization.
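The DefaultCodec-vs-GZIP result in point 1 is plausible at the container level: Hadoop's DefaultCodec writes a zlib/DEFLATE stream, while GzipCodec wraps the same DEFLATE payload in a gzip header and trailer, adding a few bytes of overhead per stream. A minimal Python sketch of that framing difference (not a Hadoop benchmark; the sample data below is made up):

```python
import gzip
import zlib

# A repetitive stand-in for track log data (hypothetical sample line).
data = b"2013-01-01\thttp://example.com/page\tkeyword\t42\n" * 1000

# Use the same DEFLATE level so that only the container framing differs:
# zlib framing is 2-byte header + 4-byte checksum; gzip is 10-byte header
# + 8-byte trailer around the identical compressed payload.
deflate_size = len(zlib.compress(data, 6))
gzip_size = len(gzip.compress(data, 6))

print(deflate_size, gzip_size)
```

In practice the ratio gap between the two codecs comes mostly from codec defaults and the shape of the data, not from this small framing overhead, so treat this only as an illustration of why the two formats need not produce identical sizes.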
Compatibility of the compressed files with other components:
1. Pig does not support compressed files in any of these formats.
2. Impala currently supports the SequenceFile compression format, but does not yet support the RCFile compression format.
(Impala is a low-latency SQL query engine for Hadoop that bypasses MapReduce.)
In summary:
In terms of time and space performance for both compression and querying, DefaultCodec + RCFile is optimal, but this combination cannot be used with Pig or Impala (whether Impala's incompatibility is temporary is uncertain).
DefaultCodec + SequenceFile is slightly worse than RCFile in compression ratio and query performance (compression ratio roughly 6:5), but it does support real-time queries in Impala.
Recommended solution:
Compress historical data using RCFile. (Facebook stores all of its Hive tables as RCFile.)
Local compression method
Only two steps are required:
1. Specify the storage format when creating the table (the default is uncompressed text). Example:
CREATE EXTERNAL TABLE track_hist (
  id BIGINT, url STRING, referer STRING, keyword STRING, type INT, gu_id STRING,
  ... /* middle fields omitted */ ..., ext_field10 STRING)
PARTITIONED BY (ds STRING) STORED AS RCFILE LOCATION '/DATA/SHARE/TRACK_HISTK';
2. Enable compression when inserting data:
SET hive.exec.compress.output=true;
INSERT OVERWRITE TABLE track_hist PARTITION (ds='2013-01-01')
SELECT id, url, ... /* middle fields omitted */ ..., ext_field10 FROM trackinfo
WHERE ds='2013-01-01';
Global method: modify the properties file
Set the following in hive-site.xml:
<property>
<name>hive.default.fileformat</name>
<value>RCFile</value>
<description>Default file format for CREATE TABLE statement. Options are TextFile and SequenceFile. Users can explicitly say CREATE TABLE ... STORED AS &lt;TEXTFILE|SEQUENCEFILE&gt; to override</description>
</property>
<property>
<name>hive.exec.compress.output</name>
<value>true</value>
<description>This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) is compressed. The compression codec and other options are determined from the Hadoop config variables mapred.output.compress*</description>
</property>
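As the description above notes, the actual codec comes from the Hadoop mapred.output.compress* variables rather than from Hive itself. A config sketch that pins the final-output codec to DefaultCodec (property name from the old mapred.* namespace used in this era of Hadoop; newer versions use the mapreduce.* equivalents):

```xml
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>
```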
Precautions
1. The map-phase output is not compressed; only the reduce-side output is compressed.
2. Output in plain-text format is not compressed.
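Regarding precaution 1: hive.exec.compress.output only affects a query's final outputs. Hive exposes a separate switch for compressing intermediate (map-side) data between stages; a sketch of that setting, to be enabled only if the extra CPU cost is acceptable for your workload:

```xml
<property>
  <name>hive.exec.compress.intermediate</name>
  <value>true</value>
</property>
```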