This article presents comparison results for the Hive data compression schemes in a Hadoop system, along with the specific compression methods.
First, a comparison of compression schemes
Regarding the choice of compression format for Hadoop HDFS files, we tested multiple sets of real tracking data and reached the following conclusions:
1. The system's default compression codec, DefaultCodec, is better than gzip in both compression performance and compression ratio. This contradicts some opinions found online, where many people believe gzip yields a higher compression ratio; the difference is presumably related to the Cloudera distribution's packaging and the type of our tracking data.
2. For Hive files, RCFile beats SequenceFile (at both RECORD and BLOCK compression levels) in compression ratio, compression efficiency, and query efficiency.
3. All compressed files can be decompressed back into text files, though the result is slightly larger than the original; this may be caused by the columnar reorganization.
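The codec comparison above can be reproduced by rewriting the same data with each codec and comparing the resulting file sizes. A minimal sketch, assuming two pre-created target tables `t_default` and `t_gzip` and a source table `src` (all names illustrative; the codec class names are the standard Hadoop ones):

```sql
SET hive.exec.compress.output=true;

-- Write one copy with DefaultCodec (zlib-based)
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;
INSERT OVERWRITE TABLE t_default SELECT * FROM src;

-- Write another copy with gzip
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
INSERT OVERWRITE TABLE t_gzip SELECT * FROM src;

-- Compare on-disk sizes of the two tables from the Hive CLI
dfs -du -s /user/hive/warehouse/t_default /user/hive/warehouse/t_gzip;
```

The warehouse paths shown are the common defaults and may differ per installation.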
Support for these compressed files in other components is as follows:
1. Pig does not support any form of compressed file.
2. Impala currently supports the SequenceFile compression format, but does not yet support RCFile.
In summary:
DefaultCodec + RCFile is optimal in both the space and time performance of compression and querying, but this combination cannot be used with Pig or Impala (it is uncertain whether Impala's incompatibility is temporary).
DefaultCodec + SequenceFile is slightly worse than RCFile in compression ratio and query performance (compression ratio roughly 6:5), but it does support Impala real-time queries.
Recommended solution:
Compress historical data using RCFile; all of Facebook's Hive tables use RCFile to store their data.
Second, local compression method
Only two steps are required:
1. Specify the storage format when creating the table; the default is uncompressed text. For example:
CREATE EXTERNAL TABLE track_hist (
  id BIGINT, url STRING, referer STRING, keyword STRING, type INT, gu_id STRING,
  ... /* middle fields omitted here */ ..., ext_field10 STRING)
PARTITIONED BY (ds STRING) STORED AS RCFILE LOCATION '/DATA/SHARE/TRACK_HISTK';
2. Enable compression when inserting data:
SET hive.exec.compress.output=true;
INSERT OVERWRITE TABLE track_hist PARTITION (ds='2013-01-01')
SELECT id, url, ... /* middle fields omitted here */ ..., ext_field10 FROM trackinfo
WHERE ds='2013-01-01';
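After the insert completes, it is worth confirming that the data was actually written as compressed RCFile. A sketch of one way to check, using the table and location defined above:

```sql
-- The InputFormat line should report RCFileInputFormat
DESCRIBE FORMATTED track_hist;

-- Inspect the partition's files from the Hive CLI; sizes should be
-- noticeably smaller than the equivalent uncompressed text
dfs -du /DATA/SHARE/TRACK_HISTK/ds=2013-01-01;
```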
Third, global mode: modify the properties file
Set the following in hive-site.xml:
<property>
  <name>hive.default.fileformat</name>
  <value>RCFile</value>
  <description>Default file format for CREATE TABLE statements. Options are TextFile and SequenceFile. Users can explicitly say CREATE TABLE ... STORED AS &lt;TEXTFILE|SEQUENCEFILE&gt; to override</description>
</property>
<property>
  <name>hive.exec.compress.output</name>
  <value>true</value>
  <description>This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) are compressed. The compression codec and other options are determined from Hadoop config variables mapred.output.compress*</description>
</property>
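As the second description notes, the actual codec is picked up from the Hadoop `mapred.output.compress*` variables. To pin the output codec to DefaultCodec explicitly, one could also add the following standard Hadoop property (a sketch; the description text is my own wording):

```xml
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>Codec used when the map/reduce job output is compressed</description>
</property>
```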
Fourth, matters of note
1. These settings do not compress the map-stage output.
2. Plain-text output produced during processing is likewise not compressed.
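If compressing the intermediate map-stage data is also desired, Hive exposes a separate switch from the output setting used above. A minimal sketch (property names are standard Hive settings; the codec choice is illustrative):

```sql
-- Compress intermediate data passed between MapReduce stages
SET hive.exec.compress.intermediate=true;
-- Codec for the intermediate data (DefaultCodec assumed here)
SET hive.intermediate.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;
```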