Background
I developed a Redis data import tool in C++.
It imports all table data from Oracle into Redis.
This is not a plain data copy: each original Oracle record must first be processed by business logic, and an index (a Redis set) must be added for it.
When the tool was finished, performance turned out to be the bottleneck.
Optimization effect
Two sample tables were used for testing:
Table A, with 8,763 records;
Table B, with 940,279 records.
Before optimization, importing table A took 11.417s;
after optimization, it took 1.883s.
Tools used
gprof, strace, time
Use the time tool to see how long each run takes, split into user and system time;
use strace to print the process's system calls in real time and locate the time-consuming points;
use the per-function time summary reported by gprof to focus optimization on the most expensive areas.
Usage notes:
1. Every g++ compile and link command must include -pg (on the first day, the -pg option was missing from the link step, so no statistics report was produced);
2. after the program runs, a gmon.out file is produced in the current directory;
3. redirect the output of gprof redistool gmon.out to a file to generate a readable report, then open it and study the most time-consuming functions.
Optimization process
Before optimization, the run took 11.417s:

time ./redistool im a a.csv

real    0m11.417s
user    0m6.035s
sys     0m4.782s    (the system-call time is clearly too long)
File Memory Mappings
The long system-call time came mainly from file reads and writes; the first suspicion was that reading the file invoked the API too frequently.
The original code read the sample file one line at a time with fgets(). Switching to a file memory mapping (mmap) lets a pointer operate directly on the whole file in memory, which is fast.
Move the log switch check earlier
After improving the file reads, the gain turned out to be limited (about 2s). fgets() is a buffered C library read function, so compared with the raw read() system call it should not be that slow (people on the web have measured file memory mapping to be an order of magnitude faster than fgets(), but that scenario is probably quite special).
Then strace showed that log.dat was being opened far too many times: the debug-log switch was checked too late, so even with debug logging off, every debug log call still executed open("log.dat").
Moving the switch check to the front brought the time down to 3.53s:
time ./redistool im a a.csv

real    0m3.530s
user    0m2.890s
sys     0m0.212s
Vector space Pre-allocation
Subsequent gprof analysis showed that one function performed many vector memory allocations, along with many copies. The following line of code was improved:

vector<string> vSegment;

It was changed to a static vector variable with memory reserved in advance:
static vector<string> vSegment;
vSegment.clear();

static int nCount = 0;
if (0 == nCount)
{
    vSegment.reserve(64);
}
++nCount;
After this optimization, the time improved to 2.286s:

real    0m2.286s
user    0m1.601s
sys     0m0.222s
Similarly, a member vector in another class was also given pre-allocated space (in its constructor):

m_vtPipeCmd.reserve(256);
After this optimization, the time improved to 2.166s:

real    0m2.166s
user    0m1.396s
sys     0m0.204s
Function rewriting and inlining
Continuing to profile the program showed that the SqToolStrSplitByCh() function was consuming too much time; its entire logic was rewritten, and the rewritten function was then inlined.
After this optimization, the time improved to 1.937s:

real    0m1.937s
user    0m1.301s
sys     0m0.186s
Removing debug and -pg instrumentation symbols
Finally, after removing the debug symbols and the -pg profiling instrumentation, the final result was 1.883s:
real    0m1.883s
user    0m1.239s
sys     0m0.191s
Meeting production requirements
The last few steps look like mere millisecond-level gains, but scaled up to the full table data the effect is very noticeable.
After optimization, production table A with 1.52 million records imports in about 326s (~6 minutes);
table B with 4.2 million records imports in about 1103s (~18 minutes).
That is the entire content of this article; I hope you find it helpful.