Hive Data Skew Problem
Problem Status: not resolved
- Background: the files on HDFS are compressed and have no index added; development is done primarily in Hive.
- Discovery: Sqoop imports the data from MySQL and splits it evenly by ID, but the ID distribution itself is very uneven (I don't know how the business system generated the IDs). As a result, the sizes of the files produced by the import tasks are very uneven — this is the data skew.
- Problem: an HQL query that reads from this table makes the whole job slow. Looking at the logs, I found several map tasks reading data very slowly — roughly one hour to finish reading a 1 GB file.
- Problem analysis: the file was compressed by Hadoop in LZO format (plain LZO is not splittable), and ops did not build an LZO index for it, so the entire 1 GB file has to travel over the network to the node where the map task runs and be read as a single split. That is why those map tasks are so slow. (A sketch of building the missing LZO index is given after this list.)
- Solution Ideas:
- Import the MySQL data into Hive in batches: put the dense ID ranges into their own tasks and the sparse ID ranges into their own tasks, keeping each output file at roughly 1 GB (I ended up with 5 tasks; the improvement was significant, but still short of the desired effect). See the Sqoop sketch after this list.
- Split the large files into small, uniform files by setting Hive parameters and breaking the complex HQL into smaller pieces (see the Hive sketch after this list). This is still being worked on ...
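
On the analysis above: the hadoop-lzo package ships an indexer that makes `.lzo` files splittable, which is exactly the index that was missing here. A minimal sketch, assuming hadoop-lzo is installed at the jar path shown and the table's files live under a hypothetical HDFS directory:

```bash
# Build LZO split indexes as a MapReduce job (DistributedLzoIndexer from hadoop-lzo).
# Both paths below are illustrative assumptions, not values from the original notes.
HADOOP_LZO_JAR=/usr/lib/hadoop/lib/hadoop-lzo.jar   # adjust to the cluster's install path
TABLE_DIR=/warehouse/db/orders                      # hypothetical HDFS dir holding the .lzo files

hadoop jar "$HADOOP_LZO_JAR" \
  com.hadoop.compression.lzo.DistributedLzoIndexer \
  "$TABLE_DIR"

# Each part-*.lzo gets a sibling part-*.lzo.index file; with the index present,
# a 1 GB .lzo file can be read by several map tasks instead of one.
```

For Hive to actually split on the index, the table would also need to use hadoop-lzo's `DeprecatedLzoTextInputFormat` as its input format; with a plain text input format the index is ignored.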
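
On the batched-import idea: a rough sketch of running one Sqoop job per hand-picked ID range, so dense ranges and sparse ranges each become their own task of roughly the target size. The connection string, credentials, table name, and the ranges themselves are placeholder assumptions, not the real values:

```bash
# Sketch: one Sqoop job per hand-picked ID range, so file sizes stay roughly even.
# Connection details, table name, and ranges are placeholders.
CONNECT="jdbc:mysql://mysql-host:3306/biz_db"
TABLE="orders"
TARGET_BASE="/warehouse/staging/orders"

# Ranges chosen by inspecting the real ID distribution: dense ranges are narrow,
# sparse ranges are wide, so each batch lands near the ~1 GB target.
RANGES=(
  "id >= 0        AND id < 1000000"
  "id >= 1000000  AND id < 1200000"    # dense range: narrow
  "id >= 1200000  AND id < 50000000"   # sparse range: wide
)

for i in "${!RANGES[@]}"; do
  sqoop import \
    --connect "$CONNECT" --username etl --password-file /user/etl/.mysql_pass \
    --table "$TABLE" \
    --where "${RANGES[$i]}" \
    --split-by id -m 4 \
    --target-dir "$TARGET_BASE/batch_$i"
done
```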
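
On the second idea: one way to rewrite a skewed table into evenly sized files from Hive is to cap the bytes handled per reducer and spread rows across reducers with `DISTRIBUTE BY rand()`, so output file sizes no longer follow the ID distribution. The table names here are hypothetical; the settings are standard Hive parameters, but the values would need tuning:

```bash
# Sketch: rebuild the skewed table into evenly sized files.
# orders_skewed / orders_even are placeholder table names.
hive -e "
SET hive.exec.reducers.bytes.per.reducer=1073741824;   -- aim for roughly 1 GB per output file
SET hive.merge.mapredfiles=true;                        -- merge undersized reduce-side outputs
SET hive.merge.smallfiles.avgsize=268435456;

-- DISTRIBUTE BY rand() shuffles rows evenly over the reducers, regardless of ID skew.
INSERT OVERWRITE TABLE orders_even
SELECT * FROM orders_skewed
DISTRIBUTE BY rand();
"
```

Splitting the complex HQL into stages that read from the rewritten table then lets every stage work on uniform, evenly sized files.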