Prime_dsc_mentioncalcspark System Introduction
Implemented function: read text data from the HBase data source according to the conditions (siteId, startTime, endTime, campaignId, folder), take the submitted keywords as the query condition, and output the number of times each keyword is mentioned in the text.
Problem: computation takes too long on large data volumes.
Solution Ideas:
Construct TweetBean from the HBase Result with explicit setXxx() setter calls instead of reflection.
With 50,000 (5W) rows, converting to TweetBean via reflection took 60s; using the explicit TweetBean setXxx() methods took 20s.
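A minimal sketch of the difference (this is not the project's actual code; the field names and the TweetBean shape are assumptions made for illustration). In Java the reflective path goes through java.lang.reflect and is much slower than direct setter calls; Python's setattr plays the analogous role here:

```python
# Illustrative bean with the two construction styles compared in the tuning.
class TweetBean:
    def __init__(self):
        self.site_id = None
        self.text = None

    # Explicit setters, mirroring the Java setXxx() approach.
    def set_site_id(self, v): self.site_id = v
    def set_text(self, v): self.text = v

def from_row_reflective(row):
    """Reflection-style: resolve each field by name at runtime (the slow path)."""
    bean = TweetBean()
    for field, value in row.items():
        setattr(bean, field, value)  # analogous to java.lang.reflect Field.set()
    return bean

def from_row_setters(row):
    """Explicit setters: no runtime field lookup (the faster path adopted here)."""
    bean = TweetBean()
    bean.set_site_id(row["site_id"])
    bean.set_text(row["text"])
    return bean
```

Both functions produce the same bean; the measured 60s vs 20s gap in the document comes purely from avoiding per-field reflective lookup.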
Read only the required fields from HBase instead of reading all fields.
With 50,000 (5W) rows, reading all fields took 60s; reading only the required fields took 25s.
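In the HBase Java client this projection is done on the scan itself (e.g. Scan.addColumn(family, qualifier)) so unneeded columns never leave the region server. A pure-Python illustration of the idea, with column names that are assumptions for this sketch:

```python
# Only the columns the mention calculation actually needs (illustrative names).
REQUIRED = {"cf:siteId", "cf:text"}

def project(row):
    """Keep only the required columns from a full HBase-style row dict.

    In the real client the filtering should happen server-side via
    Scan.addColumn so the extra columns are never transferred at all.
    """
    return {k: v for k, v in row.items() if k in REQUIRED}
```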
When fetching DC data from UC, replace the map function with mapPartitions, which fetches data from HBase in bulk and needs only one HBase connection per partition instead of one per record.
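The shape of that change can be sketched as follows. The connection class below is a stand-in for a real HBase client (an assumption for illustration); the function has the iterator-in, iterator-out signature that rdd.mapPartitions expects:

```python
# Stand-in for a real HBase client; counts how many connections get opened.
class FakeHBaseConnection:
    opened = 0

    def __init__(self):
        FakeHBaseConnection.opened += 1

    def batch_get(self, keys):
        # One round trip for the whole batch instead of one get per key.
        return [(k, f"row-{k}") for k in keys]

    def close(self):
        pass

def fetch_partition(keys_iter):
    """Runs once per partition: one connection, one bulk fetch."""
    conn = FakeHBaseConnection()
    try:
        keys = list(keys_iter)
        yield from conn.batch_get(keys)
    finally:
        conn.close()

# With rdd.mapPartitions(fetch_partition), a partition of N keys opens
# 1 connection, where rdd.map(per_key_get) would have opened N.
```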
Store the computed results with the foreachPartition function. Instead of writing a result on every iteration while traversing the iterator, maintain a buffer outside the loop and write the results in batches.
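A sketch of the batched-write pattern inside foreachPartition (the batch size and the write_batch callback are assumptions; in practice write_batch would be a bulk HBase put or similar sink):

```python
BATCH_SIZE = 1000  # illustrative; tune to the sink's bulk-write sweet spot

def save_partition(results_iter, write_batch):
    """Buffer results outside the per-record loop and flush in batches."""
    buffer = []
    for record in results_iter:
        buffer.append(record)
        if len(buffer) >= BATCH_SIZE:
            write_batch(buffer)  # one bulk write instead of 1000 single writes
            buffer = []
    if buffer:  # flush whatever is left at the end of the partition
        write_batch(buffer)
```

Passed to rdd.foreachPartition(lambda it: save_partition(it, sink)), this turns per-record writes into one write per BATCH_SIZE records.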
Make reasonable use of the Spark cluster's resources: the more resources available, the greater the cluster's computing power. A reasonable relationship between machine resources and task parallelism is: number of tasks = total cluster CPU cores * (2 or 3), so the number of partitions configured for the RDD is cluster CPU cores * 2.
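The rule of thumb is simple arithmetic; a hedged sketch (function and parameter names are made up for illustration):

```python
def rdd_partitions(num_machines, cores_per_machine, factor=2):
    """Rule of thumb from the tuning notes: partitions = total cores * (2 or 3)."""
    return num_machines * cores_per_machine * factor

# e.g. 5 machines with 8 cores each -> 80 partitions at factor 2,
# passed to something like rdd.repartition(80) or sc.parallelize(data, 80).
```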
The degree of parallelism when reading data from HBase is related to the number of regions in the table. By default a table has only one region when it is created; when a region grows beyond the split threshold it is split into two, and the larger the split threshold, the more data can accumulate in a single region. When querying a table with, say, 5 regions, 5 threads read the 5 regions in parallel; but if one region holds 10 times the data of the others, reading it takes roughly 10 times as long, delaying the entire task. This problem can be solved by pre-splitting the table and salting the rowkey with an algorithm such as hash/MD5 so that the data is distributed evenly across the regions, allowing the regions to be read evenly and with better concurrency.
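The salting step above can be sketched as follows (the prefix format, prefix length, and region count are assumptions for illustration, not the project's actual scheme). An MD5-derived prefix spreads consecutive business keys evenly across the pre-split regions:

```python
import hashlib

NUM_REGIONS = 5  # illustrative; matches the 5-region example above

def salted_rowkey(business_key: str) -> str:
    """Prefix the rowkey with a stable MD5-derived bucket so consecutive
    business keys land in different pre-split regions."""
    digest = hashlib.md5(business_key.encode()).hexdigest()
    bucket = int(digest[:8], 16) % NUM_REGIONS  # deterministic bucket in [0, NUM_REGIONS)
    return f"{bucket:02d}|{business_key}"

# The table would be pre-split on the prefixes "00", "01", ..., "04" so each
# bucket maps to its own region from the start.
```

Because the prefix is derived from the key itself, lookups by business key can still recompute the full rowkey; only range scans over the raw business key are sacrificed.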
Prime_dsc_mentioncalcspark Performance Tuning