Redis Data Import Tool Optimization Process Summary

Background
Developed a Redis data import tool in C++;
It imports all table data from Oracle into Redis;
This is not a pure data import: each original Oracle record must first be processed by business logic, and an index (a Redis set) must be added;
When the tool was first finished, performance was a bottleneck;
Optimization effect
Two sample data sets were used for testing:
Sample data: table a, 8,763 records;
table b, 940,279 records;
Before optimization, table a took 11.417s;
after optimization, table a took 1.883s (about a 6x speedup);
Tools used
gprof, strace, time
Use the time tool to see how long each run takes, broken down into user and system time;
Use strace to trace the running process in real time, inspect its main system calls, and discover time-consuming points;
Use gprof's per-function time summary to focus optimization on the most time-consuming areas;
Usage notes:
1. All g++ compile and link options must include -pg (on the first day the -pg option was missing at link time, so no statistics report was produced);
2. After the program runs, a gmon.out file is produced in the current directory;
3. gprof redistool gmon.out > report generates a readable report file; open the report and focus optimization on the most time-consuming functions;
Optimization process
Before optimization: 11.417s:
time ./redistool im a a.csv
real 0m11.417s
user 0m6.035s
sys  0m4.782s
(the system call time was found to be too long)
File Memory Mapping
The system call time was too long, mainly in file reads and writes; the initial suspicion was that reading the file called the API too frequently;
The sample file was being read line by line with fgets(); switching to a file memory mapping (mmap) allows the whole file to be operated on in memory directly through pointers, which is fast;
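The mmap-based approach can be sketched roughly as follows. This is an illustrative reconstruction, not the tool's actual code; the function name and error handling are assumptions:

```cpp
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <string>
#include <vector>

// Map the whole file into memory and walk it with pointers,
// instead of one fgets() library call per line.
std::vector<std::string> ReadLinesMmap(const char* path)
{
    std::vector<std::string> lines;
    int fd = open(path, O_RDONLY);
    if (fd < 0) return lines;

    struct stat st;
    if (fstat(fd, &st) == 0 && st.st_size > 0)
    {
        char* base = static_cast<char*>(
            mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
        if (base != MAP_FAILED)
        {
            const char* p   = base;
            const char* end = base + st.st_size;
            while (p < end)
            {
                // Find the next newline with a single pointer scan.
                const char* nl = static_cast<const char*>(
                    memchr(p, '\n', end - p));
                if (!nl) nl = end;  // last line may lack a trailing '\n'
                lines.push_back(std::string(p, nl - p));
                p = nl + 1;
            }
            munmap(base, st.st_size);
        }
    }
    close(fd);
    return lines;
}
```

For very large files the same idea works without materializing a vector of strings: process each line in place between the two pointers.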
Move the log switch earlier
After improving the file reads and writes, the optimization effect turned out to be fairly limited (about 2s gained); fgets is a buffered C library function, so compared with the raw read() system call it should not be too slow (tests found online show memory-mapped files can be an order of magnitude faster than fgets(), so those scenarios are presumably rather special);
Later, the strace tool revealed that Log.dat was being opened too many times: the debug-log switch was checked too late, so even with debug logging off, each debug log call still opened the log file with open("Log.dat");
Moving the log-switch check earlier brought the time down to 3.530s:
time ./redistool im a a.csv
real 0m3.530s
user 0m2.890s
sys  0m0.212s
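The fix can be sketched like this (illustrative names: the switch variable and function are assumptions, not the tool's real identifiers):

```cpp
#include <cstdio>

bool g_bDebugLog = false;  // debug-log switch (illustrative name)

// Before the fix, the open() ran on every call, even with the switch off.
// Checking the switch first means no open() syscall happens at all
// when debug logging is disabled.
void WriteDebugLog(const char* msg)
{
    if (!g_bDebugLog)
        return;  // early exit: the log file is never touched

    FILE* fp = fopen("Log.dat", "a");
    if (fp)
    {
        fprintf(fp, "%s\n", msg);
        fclose(fp);
    }
}
```

An even cheaper variant opens the file once at startup when the switch is on and keeps the handle, avoiding a fopen/fclose pair per message.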
Vector space pre-allocation
Later, gprof analysis showed that one function allocated vector memory many times, with many element copies as well:
The following line of code was improved:
vector<string> vSegment;
by using a static vector and pre-allocating memory:
static vector<string> vSegment;
vSegment.clear();
static int nCount = 0;
if( 0 == nCount )
{
    vSegment.reserve(64);
}
++nCount;
After this optimization, the time improved to 2.286s:
real 0m2.286s
user 0m1.601s
sys  0m0.222s
Similarly, a member vector in another class pre-allocates space in its constructor:
m_vtPipecmd.reserve(256);
After this optimization, the time improved to 2.166s:
real 0m2.166s
user 0m1.396s
sys  0m0.204s
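Why reserve() helps can be demonstrated with a small counter of capacity changes: every time a vector grows past its capacity it reallocates and copies every element. This is an illustrative experiment, not code from the tool:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Count how many reallocations occur while pushing n elements,
// with and without a reserve() hint. Each reallocation copies
// all existing elements, which is what gprof exposed as hot.
std::size_t CountReallocs(std::size_t n, std::size_t reserveHint)
{
    std::vector<std::string> v;
    if (reserveHint > 0)
        v.reserve(reserveHint);

    std::size_t reallocs = 0;
    std::size_t cap = v.capacity();
    for (std::size_t i = 0; i < n; ++i)
    {
        v.push_back("field");
        if (v.capacity() != cap)  // capacity changed => reallocation
        {
            ++reallocs;
            cap = v.capacity();
        }
    }
    return reallocs;
}
```

With reserve(64) a 64-element fill triggers no reallocations at all; without it, the vector grows geometrically and reallocates several times.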
Function rewrite && inlining
Continuing to profile the program, the SqToolStrSplitByCh() function was found to consume too much time; the whole function's logic was rewritten, and the rewritten function was then inlined:
After this optimization, the time improved to 1.937s:
real 0m1.937s
user 0m1.301s
sys  0m0.186s
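The post does not show the internals of SqToolStrSplitByCh(), so the following is only one plausible shape for the rewrite: a single-pass split with no temporary streams, declared inline so hot call sites avoid call overhead (name and signature are assumptions):

```cpp
#include <string>
#include <vector>

// Hypothetical rewrite of a split-by-character helper:
// one linear scan over the input, no stringstream, no per-call heap
// allocations beyond the output strings themselves.
inline void StrSplitByCh(const std::string& src, char sep,
                         std::vector<std::string>& out)
{
    out.clear();
    std::size_t start = 0;
    for (std::size_t i = 0; i <= src.size(); ++i)
    {
        // Treat end-of-string like a separator to flush the last field.
        if (i == src.size() || src[i] == sep)
        {
            out.push_back(src.substr(start, i - start));
            start = i + 1;
        }
    }
}
```

Passing the output vector by reference pairs well with the earlier reserve() optimization: the caller can keep one reused, pre-reserved vector across calls.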
Remove debug and profiling symbols
Finally, after removing the -g debug and -pg profiling symbols, the final result was 1.883s:
real 0m1.883s
user 0m1.239s
sys  0m0.191s
Meeting production requirements
The last few steps above look like millisecond-level gains, but the effect is obvious once scaled up to full table data;
After optimization, the production table a, with 1.52 million records, imports in about 326s (about 5.5 minutes);
table b, with 4.2 million records, imports in about 1103s (about 18 minutes);
Posted by: Big CC | 28 Jun, 2015
Blog: blog.me115.com [Subscribe]
Github: Big cc