Hive table not created with the LZO storage format, but the data files are in LZO format: analyzing the problem

Source: Internet
Author: User

Today the Weibo data platform team sent an e-mail: one of their HQL statements had failed, but the logs on the gateway gave no clue as to the cause. I helped take a look and eventually found the solution. The analysis process follows:

1. The HQL that failed:

INSERT OVERWRITE TABLE brand_ad_user_with_interact_score_3
SELECT a.uid, a.brand, a.friend,
       CASE WHEN b.weight IS NULL THEN '0.000000' ELSE b.weight END
FROM brand_ad_2hop_3 a
LEFT OUTER JOIN ods_bas_user_interact_score_to_thin_3 b
  ON (a.uid = b.fid AND a.friend = b.tid);
The HQL is simple: it joins two tables and writes the result to a third. It is an ordinary join with no GROUP BY, so there should be no map-side data skew.

2. Viewing the job log

Checking the job status and log information on the JobTracker's 50030 page showed that the job state was "killed": one map task never completed and was eventually killed, after running for more than 10 hours. For example, as seen below:


From the analysis in step 1, the HQL should not produce data skew, so why did a single map task run for more than 10 hours? Looking at the counter information of the killed map task:


The single map task read more than 10 GB of data from HDFS. That should not happen unless the input data files are not being split, so that a single map task processes an entire large file. Following this hypothesis, I checked the files under the directories of the two tables referenced in the HQL. Sure enough, all of the files were in LZO format, but none of them had been indexed.

Individual files under the brand_ad_2hop_3 table reached 10 GB+. This is the cause: an LZO file without an index cannot be split, so each file can only be handled by one map task, and when a single file is very large, that map task runs for a very long time.
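The check described above can be done from the command line. The warehouse path below is an assumed default Hive location, used only for illustration; substitute the actual table directories on your cluster:

```shell
# Show file sizes under the table directory (path assumed for illustration).
hadoop fs -du -h /user/hive/warehouse/brand_ad_2hop_3

# List the files: a .lzo file with no matching .lzo.index file cannot be
# split, so the whole file is consumed by a single map task.
hadoop fs -ls /user/hive/warehouse/brand_ad_2hop_3
```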

Checking the CREATE TABLE statement for brand_ad_2hop_3 confirmed that the table's declared storage format is plain text.
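One way to inspect the table's declared storage format, assuming the Hive CLI is available (a sketch, not the exact command the original author ran):

```shell
# Print the full table metadata; the inputFormat field will show a plain
# TextInputFormat rather than an LZO-aware input format.
hive -e 'DESCRIBE EXTENDED brand_ad_2hop_3;'
```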

Now that we have found the cause of the problem, here's how to solve it:

(1) Index the existing LZO files

(2) When creating the table, declare the LZO storage format
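Both fixes can be sketched as follows. The jar path, warehouse path, and column types are assumptions for illustration (the hadoop-lzo jar location varies by installation):

```shell
# (1) Build LZO indexes for the existing files so they become splittable.
#     DistributedLzoIndexer from hadoop-lzo runs as a MapReduce job
#     over the directory and writes a .lzo.index next to each .lzo file.
hadoop jar /path/to/hadoop-lzo.jar \
  com.hadoop.compression.lzo.DistributedLzoIndexer \
  /user/hive/warehouse/brand_ad_2hop_3

# (2) Declare the LZO input format when creating the table, so Hive
#     uses the index and splits large files across multiple map tasks.
hive -e "
CREATE TABLE brand_ad_2hop_3 (
  uid    STRING,   -- column types are assumed for illustration
  brand  STRING,
  friend STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS
  INPUTFORMAT  'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
"
```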

