Optimization strategies and experiment comparison results for reading data from HBase

Background: a work requirement. I need to export part of the data in HBase every five minutes and import it into ES. In the early stage I wrote a Python script and found that reading data from HBase was slow; it took so much time that it dragged down the whole export process, and I was afraid the export could not finish within five minutes.
After consulting senior colleagues, I adopted a number of optimization strategies one by one and recorded the experiment results.
The HBase table structure is roughly as follows:
Fans table, where the rowkey is the fan ID:
| Column name | Description |
| --- | --- |
| id | Fan ID |
| ut | Update time |
| ... | ... |
The Hadoop cluster has 13 machines.
The goal of the task is to export the data written to HBase in the previous five minutes and import it into ES.
1. To get something working quickly, I first read the data through the thrift interface in Python, which obviously took a lot of time.
Throughout this experiment, the ut column (the update-time field) is used as the basis for deciding which data to extract.
2. Use the Java client plus a SingleColumnValueFilter to extract the data (a sketch follows).
Reading through the thrift interface is very slow. Supposedly the thrift server amounts to an HBase client that just does one extra round of data forwarding, so it should not be this slow, but reality is that cruel.
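A minimal sketch of this step against the HBase 1.x client API. The table name `fans`, the column family `cf`, and the assumption that `ut` stores the update time as a fixed-width epoch-millisecond string are mine for illustration, not from the original setup:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanByUt {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("fans"))) { // table name assumed
            long now = System.currentTimeMillis();
            // Keep only rows whose ut column falls within the last five minutes.
            // Byte-wise comparison of fixed-width decimal strings preserves numeric order.
            SingleColumnValueFilter filter = new SingleColumnValueFilter(
                    Bytes.toBytes("cf"),                       // column family assumed
                    Bytes.toBytes("ut"),
                    CompareFilter.CompareOp.GREATER_OR_EQUAL,
                    Bytes.toBytes(String.valueOf(now - 5 * 60 * 1000)));
            filter.setFilterIfMissing(true);                   // skip rows that have no ut column
            Scan scan = new Scan();
            scan.setFilter(filter);
            scan.setCaching(1000);                             // batch rows per RPC round trip
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    // Hand r off to the ES import here; omitted in this sketch.
                }
            }
        }
    }
}
```

The filter is evaluated on the region servers, so far fewer rows cross the network than with client-side filtering; the scan still has to read every row in the table, though, which is what the next step attacks.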
3. Because HBase records a timestamp for every cell when data is inserted, you can use that built-in timestamp directly to extract the data, or at least to narrow the scan's search range; see the sketch below.
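A minimal sketch of narrowing the scan by cell timestamp; the five-minute window mirrors the export interval above, and the caching value is an assumption:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;

public class TimeRangeScan {
    // Build a scan that only touches cells written in the last five minutes,
    // using the timestamp HBase records automatically at insert time.
    public static Scan lastFiveMinutes() throws IOException {
        long now = System.currentTimeMillis();
        Scan scan = new Scan();
        scan.setTimeRange(now - 5 * 60 * 1000L, now); // [min, max) in epoch milliseconds
        scan.setCaching(1000);                        // batch rows per RPC round trip
        return scan;
    }
}
```

Unlike the SingleColumnValueFilter, a time range lets a region server skip whole store files whose recorded time ranges fall outside the window, so much of the data is never even read from disk.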
4. 5. Reduce the time range of the export. Notice that the elapsed time does not fall linearly with the size of the range; I infer that part of a scan's cost is a fixed base overhead, so even when far less data is exported, the time does not drop by much.
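One way to state this inference (the numbers here are made up purely for illustration, not measured): if scan time were modeled as T(w) = a + b·w for a window of w minutes, where a is the fixed per-scan overhead, then hypothetical timings T(5) = 60 s and T(1) = 30 s would give b = 7.5 s/min and a = 22.5 s; no matter how small the window gets, the time can never drop below a, which matches the flattening observed here.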
6. After switching to MapReduce, the speed doubled. Normally, data extracted from HBase through the HBase client comes back serially: the client sends a request to one region server, and only then sends the next request to another region server. With MapReduce the regions are scanned in parallel, which is obviously much faster than the serial scan; a sketch follows.
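A minimal sketch of the MapReduce version, again with assumed names (`fans`, `ParallelExport`) and the ES indexing itself left out. TableMapReduceUtil creates one input split per region, which is where the parallelism comes from:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ParallelExport {

    // One mapper runs per region, so all 13 machines can scan at once.
    static class ExportMapper extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable rowKey, Result row, Context ctx)
                throws IOException, InterruptedException {
            // Index `row` into ES here (e.g. via a bulk client); omitted in this sketch.
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase-to-es-export");
        job.setJarByClass(ParallelExport.class);

        long now = System.currentTimeMillis();
        Scan scan = new Scan();
        scan.setTimeRange(now - 5 * 60 * 1000L, now); // reuse the timestamp trick from step 3
        scan.setCaching(1000);
        scan.setCacheBlocks(false);                   // recommended for MapReduce scans

        TableMapReduceUtil.initTableMapperJob(
                "fans", scan, ExportMapper.class,     // table name assumed
                NullWritable.class, NullWritable.class, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.setNumReduceTasks(0);                     // map-only: mappers write straight to ES
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With one mapper per region, total scan time is bounded roughly by the slowest region rather than by the sum over all regions, which is also why splitting regions (mentioned below) could help further.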
By the end of step 6, the business requirement was met. It is said that splitting regions can increase the speed further; I will give that a try when there is time.