The code is as follows:
DELIMITER $$
SET @stmt = 'SELECT userid, username FROM MyUser WHERE userid BETWEEN ? AND ?';
PREPARE S1 FROM @stmt;
SET @s1 = 2;
SET @s2 = 100;
EXECUTE S1 USING @s1, @s2;
DEALLOCATE PREPARE S1;
$$
DELIMITER ;
A query written in this form lets you swap the parameters at will. People call this preprocessing (prepared statements), and I take it to be MySQL's version of variable binding. While reading up on it, however, I came across two opposing views. One says that MySQL's variable binding is similar in form to Oracle's but has no practical effect: it is merely a convenience for writing queries and cannot improve efficiency. This view appears in some posts from 2009:
http://www.itpub.net/thread-1210292-1-1.html
http://cuda.itpub.net/redirect.php?fid=73&tid=1210572&goto=nextnewset
The other view is that variable binding in MySQL really can improve efficiency. Since I could not tell which was true, I decided to test it myself.
The test was run locally with a small amount of data, so the specific numbers carry no absolute meaning, but they can illustrate some issues. The database version is MySQL 5.1.57-win32 (free edition).
I approached this as someone not deeply familiar with databases ^_^, and the testing process took many detours. This article focuses mainly on the conclusions and does not walk through the experimental design. My writing is not great and the article is a bit dry, but I am publishing it in the hope that someone will poke holes in it, because the conclusion I reached is: without the query cache, prepared statements are no more efficient than direct execution ... and I am not willing to accept my own result. If preprocessing exists only to normalize queries and improve the cache hit rate, that personally feels like overkill to me. I hope readers who understand this better can point out what is really going on. -- newsilen
Experimental preparation
The first SQL file, NormalQuery.sql:
The code is as follows:
SET profiling = 1;
SELECT * FROM MyTable WHERE dictid = 100601000004;
SELECT dictid FROM MyTable LIMIT 1, 100;
SELECT dictid FROM MyTable LIMIT 2, 100;
/* from LIMIT 1,100 to LIMIT 100,100 -- duplicate lines omitted here */
......
SELECT dictid FROM MyTable LIMIT 100, 100;
SELECT query_id, seq, state, 10000*duration FROM information_schema.profiling INTO OUTFILE 'd:/NormalResults.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
The second SQL file, StmtQuery.sql:
The code is as follows:
SET profiling = 1;
SELECT * FROM MyTable WHERE dictid = 100601000004;
SET @stmt = 'SELECT dictid FROM MyTable LIMIT ?, ?';
PREPARE S1 FROM @stmt;
SET @s = 100;
SET @s1 = 101;
SET @s2 = 102;
......
SET @s100 = 200;
EXECUTE S1 USING @s1, @s;
EXECUTE S1 USING @s2, @s;
......
EXECUTE S1 USING @s100, @s;
SELECT query_id, seq, state, 10000*duration FROM information_schema.profiling INTO OUTFILE 'd:/StmtResults.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
A few small notes:
1. SET profiling = 1; After executing this statement, the details of subsequent statement execution can be read from the information_schema.profiling table. That table actually contains quite a lot, including the timing information I need. It is session-scoped, so the profiling property must be set again in every new session before data can be read from it.
2. SELECT * FROM MyTable WHERE dictid = 100601000004; This line seems to have nothing to do with the experiment, and at first I thought so too. I added it because, while exploring earlier, I found that one step of query execution is opening the table, and when a table is opened for the first time that step takes quite a long time. So before executing the statements under test, I run this line first to open the test table.
3. By default, MySQL keeps 15 queries of history in information_schema.profiling. You can adjust this with the profiling_history_size property; I wanted it larger so I could collect enough data, but the maximum is 100. Although I set it to 150, only 100 queries could be retrieved in the end, which was still enough.
4. I have not listed all of the SQL code, since the query statements are all similar; the ellipses above mark the omitted lines. The end result is two CSV files. Analyzing them as files is just my personal habit; you could also load the results into a database for analysis.
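The omitted repetitive lines in the two files above can be generated with a short script instead of typed by hand. Below is a sketch in Python (my own illustration, not part of the original experiment); the generated contents follow the listings above, and the final INTO OUTFILE line is left out since its path is server-side:

```python
# Generate the repetitive portions of NormalQuery.sql and StmtQuery.sql.
# The LIMIT 1,100 .. LIMIT 100,100 range and the @s1..@s100 offsets
# (101..200) are taken from the listings above.

def normal_query_sql():
    lines = [
        "SET profiling = 1;",
        "SELECT * FROM MyTable WHERE dictid = 100601000004;",
    ]
    # 100 similar queries, only the offset changes
    lines += [f"SELECT dictid FROM MyTable LIMIT {n}, 100;" for n in range(1, 101)]
    return "\n".join(lines)

def stmt_query_sql():
    lines = [
        "SET profiling = 1;",
        "SELECT * FROM MyTable WHERE dictid = 100601000004;",
        "SET @stmt = 'SELECT dictid FROM MyTable LIMIT ?, ?';",
        "PREPARE S1 FROM @stmt;",
        "SET @s = 100;",
    ]
    # one offset variable per execution, then 100 EXECUTE calls
    lines += [f"SET @s{n} = {100 + n};" for n in range(1, 101)]
    lines += [f"EXECUTE S1 USING @s{n}, @s;" for n in range(1, 101)]
    return "\n".join(lines)

print(normal_query_sql().splitlines()[2])  # SELECT dictid FROM MyTable LIMIT 1, 100;
```

Writing the two strings to NormalQuery.sql and StmtQuery.sql reproduces the files (minus the profiling export line).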
Experiment steps
Restart the database, execute NormalQuery.sql, then execute StmtQuery.sql, producing the first two result files.
Restart the database again, execute StmtQuery.sql, then execute NormalQuery.sql, producing two more result files.
Experimental results
The detailed results are in the attachment provided at the end; interested readers can examine them there.
Result analysis
Each SQL file executes 100 query statements, with no duplicate queries and the query cache disabled. The average execution times are as follows:
As the results show, the statements in NormalQuery execute faster than the prepared statements, whether they are run first or second =.=!
Next, look at the specific profile of each query. The normal and stmt queries were each executed 200 times; the details of each step are as follows:
Two things can be seen here. First, NormalQuery takes one step fewer than StmtQuery. Second, although StmtQuery beats Normal on several steps, it loses far too much time on the executing step, and so loses overall.
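The per-step comparison above can be reproduced from the exported CSVs with a short script. A sketch (the column layout: query_id, seq, state, scaled duration, with no header row, is assumed from the SELECT ... INTO OUTFILE statements above):

```python
import csv
from collections import defaultdict

def avg_duration_per_state(path):
    """Average the (10000x scaled) duration of each profiling state
    across all queries in one exported CSV file. One profiling step
    per line, fields separated by ','."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for query_id, seq, state, duration in csv.reader(f):
            totals[state] += float(duration)
            counts[state] += 1
    return {state: totals[state] / counts[state] for state in totals}
```

Comparing `avg_duration_per_state('NormalResults.csv')` against `avg_duration_per_state('StmtResults.csv')` state by state gives the kind of step-level table discussed above.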
Finally, the results with the query cache enabled; the concrete steps are not listed here.
With the query cache on, Normal wins again ...
Written at the end
That is roughly the story. Looking back, the claims online that preprocessing improves efficiency basically concern queries executed programmatically, through a client API. I do not know whether that is what makes the difference; my fundamentals are limited, and I hope an expert reading this can clear up my doubts.
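For illustration, the "programmatic" preprocessing mentioned above means binding parameters through a client driver rather than issuing PREPARE/EXECUTE by hand. A minimal sketch of what that looks like, using Python's built-in sqlite3 module as a stand-in for a MySQL connector (the table and query mirror the MyUser example at the top; the API shown is sqlite3's, not MySQL's):

```python
import sqlite3

# In-memory database standing in for the MySQL test setup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyUser (userid INTEGER, username TEXT)")
conn.executemany("INSERT INTO MyUser VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(1, 201)])

# The driver binds values into the '?' placeholders; the statement text
# itself never changes between calls, which is the point of prepared
# statements when used programmatically.
stmt = "SELECT userid, username FROM MyUser WHERE userid BETWEEN ? AND ?"
rows = conn.execute(stmt, (2, 100)).fetchall()
print(len(rows))  # 99 users: userid 2 through 100
```

Whether this path is measurably faster than sending literal SQL each time is exactly the question the experiment above tries to answer for MySQL.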
Experimental results attachment
MySQL prepared-statement experiment results