Today, database operations are increasingly the performance bottleneck of entire applications, and this is especially noticeable for web applications. Database performance is no longer something only DBAs need to worry about; it is something we programmers must pay attention to as well. When we design table structures and operate on the database, we need to think about the performance of those operations, especially the SQL statements that query the tables. Here we will not go deep into general SQL optimization, but focus only on MySQL, the database most widely used for web applications. Hopefully the following optimization tips are useful to you.
1. Optimize your query for query caching
Most MySQL servers have query caching turned on. This is one of the most effective ways to improve performance, and it is handled by the MySQL database engine itself. When the same query is executed many times, its result is placed in a cache, so subsequent identical queries can read the cached result without touching the table at all.
The main problem is that this is very easy for programmers to overlook, because some of the query statements we write prevent MySQL from using the cache. Take a look at the following example:
// the query cache does NOT work
$r = mysql_query("SELECT username FROM user WHERE signup_date >= CURDATE()");

// the query cache works
$today = date("Y-m-d");
$r = mysql_query("SELECT username FROM user WHERE signup_date >= '$today'");
The difference between the two statements above is CURDATE(): the MySQL query cache does not work for queries that use it. Non-deterministic SQL functions such as NOW() and RAND() likewise prevent the query cache from being used, because their return values vary. So all you need to do is compute the value in a variable and use that instead of the MySQL function, which lets the cache kick in.
2. EXPLAIN your SELECT query
Use the EXPLAIN keyword to let you know how MySQL handles your SQL statements. This can help you analyze the performance bottlenecks of your query statement or table structure.
EXPLAIN's output will also tell you how your indexes and primary keys are being used, how your tables are scanned and sorted, and so on.
Pick one of your SELECT statements (ideally one of the most complex, with multi-table joins) and add the keyword EXPLAIN in front of it. You can use phpMyAdmin to do this. You will then see a table of results. In the following example, we forgot to add an index on group_id and have a table join:
When we index the group_id field:
As we can see, the first result shows that 7883 rows were examined, while the second examines only 9 and 16 rows in the two tables. Looking at the rows column lets us spot potential performance issues.
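A minimal sketch of this kind of check, assuming hypothetical users and user_groups tables:

EXPLAIN SELECT u.username, g.name
FROM users u
JOIN user_groups g ON u.group_id = g.id
WHERE g.name = 'admin';

-- if the rows value reported for users is large, adding the missing index usually helps:
ALTER TABLE users ADD INDEX idx_group_id (group_id);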
3. Use LIMIT 1 when only one row of data is used
Sometimes, when you query a table, you already know there can be at most one matching row, for example because you only need to check whether a particular record exists or to fetch a single value.
In this case, adding LIMIT 1 can improve performance: the MySQL engine stops searching as soon as it finds one matching row instead of continuing to look for further matches.
In the following example, we just want to find out whether there are any users from "China"; the second query is clearly more efficient than the first. (Note that the first uses SELECT *, and the second SELECT 1.)
// Inefficient:
$r = mysql_query("SELECT * FROM user WHERE country = 'China'");
if (mysql_num_rows($r) > 0) {
    // ...
}

// Efficient:
$r = mysql_query("SELECT 1 FROM user WHERE country = 'China' LIMIT 1");
if (mysql_num_rows($r) > 0) {
    // ...
}
4. Storage Engine Optimization
MySQL supports several storage engines; the two most commonly used are MyISAM and InnoDB.
4.1 MyISAM
MyISAM manages non-transactional tables. It provides high-speed storage and retrieval, as well as full-text search capabilities. MyISAM is supported in all MySQL configurations and is the default storage engine unless you configure MySQL to use a different one by default.
4.1.1 MyISAM Characteristics
4.1.1.1 MyISAM Properties
1) No transaction support; a crash can corrupt tables
2) Uses less memory and disk space
3) Table-level locks, which can cause serious performance problems under concurrent updates
4) MySQL caches only the indexes; data is cached by the OS
4.1.1.2 Typical MyISAM usages
1) Logging systems
2) Read-only or mostly-read applications
3) Full table scans
4) Bulk data imports
5) Low-concurrency reads/writes with no transactions
4.1.2 MyISAM Optimization Essentials
1) Declare columns NOT NULL; this can reduce disk storage.
2) Use OPTIMIZE TABLE to defragment and reclaim free space. Note that it should only be run after very large data changes.
3) Disable the indexes while deleting/updating/inserting large amounts of data, using ALTER TABLE t DISABLE KEYS, and re-enable them afterwards (see the sketch after this list).
4) Set myisam_max_sort_file_size (and myisam_max_extra_sort_file_size) large enough to significantly speed up REPAIR TABLE.
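A minimal sketch of points 2) and 3), assuming a hypothetical MyISAM table t loaded from a dump file:

ALTER TABLE t DISABLE KEYS;                     -- skip index maintenance during the bulk load
LOAD DATA INFILE '/tmp/t.txt' INTO TABLE t;
ALTER TABLE t ENABLE KEYS;                      -- rebuild the indexes in one pass
OPTIMIZE TABLE t;                               -- defragment and reclaim free space afterwards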
4.1.3 MyISAM Table Locks
1) Avoid concurrent INSERT and UPDATE.
2) You can use INSERT DELAYED, but data can be lost.
3) Refine the query statements.
4) Horizontal partitioning.
5) Vertical partitioning.
6) If none of this works, switch to InnoDB.
4.1.4 MyISAM Key Cache
1) Set the key_buffer_size variable. This is MyISAM's main cache setting; it caches the index data of MyISAM tables and affects only MyISAM. On a server that uses only MyISAM, it is typically set to 25-33% of memory.
2) You can use several separate key caches (for some hot data):
a) SET GLOBAL test.key_buffer_size = 512*1024;
b) CACHE INDEX t1 INDEX (i1), t2 INDEX (i1), t3 IN test;
3) Preloading an index into the cache can improve query speed, because the preload reads the index sequentially and is therefore very fast:
a) LOAD INDEX INTO CACHE t1, t2 IGNORE LEAVES;
4.2 InnoDB
InnoDB provides MySQL with a transaction-safe (ACID-compliant) storage engine with commit, rollback, and crash-recovery capabilities. InnoDB provides row-level locking and also offers an Oracle-style consistent non-locking read in SELECT statements. These features improve multi-user concurrency and performance. There is no need for lock escalation in InnoDB, because its row-level locks fit in very little space. InnoDB also supports FOREIGN KEY constraints. In SQL queries you can freely mix InnoDB tables with tables of other MySQL engine types, even within the same query.
InnoDB is designed for maximum performance when processing large data volumes, and its CPU efficiency is very high.
The InnoDB storage engine is fully integrated with the MySQL server; it maintains its own buffer pool to cache data and indexes in memory. InnoDB stores its tables and indexes in a tablespace, which can consist of several files (or raw disk partitions). This is different from MyISAM, where each table is stored in its own files. An InnoDB table can be of any size, even on operating systems where file size is limited to 2GB.
Many large database sites that require high performance use the InnoDB engine. The famous Internet news site slashdot.org runs on InnoDB. Mytrix, Inc. stores more than 1TB of data on InnoDB, and some other sites handle an average of 800 insertions/updates per second on InnoDB.
4.2.1 InnoDB characteristics
4.2.1.1 InnoDB Properties
1) Supports transactions, ACID, and foreign keys.
2) Row-level locks.
3) Supports different isolation levels.
4) Requires more memory and disk space than MyISAM.
5) No key compression.
6) Data and indexes are cached in memory.
4.2.1.2 InnoDB Good for
1) Applications that require transactions.
2) High-concurrency applications.
3) Automatic crash recovery.
4) Fast primary key-based operations.
4.2.2 InnoDB Optimization Essentials
1) Use short integer primary keys whenever possible.
2) Load/insert data in primary key order; if the data is not already sorted by primary key, sort it first.
3) When bulk loading data, SET UNIQUE_CHECKS=0 and SET FOREIGN_KEY_CHECKS=0 to avoid the overhead of uniqueness and foreign key checks (see the sketch after this list).
4) Use prefix keys (indexes on a column prefix), because InnoDB has no key compression.
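A minimal sketch of point 3), with the bulk load itself left as a placeholder:

SET UNIQUE_CHECKS = 0;
SET FOREIGN_KEY_CHECKS = 0;
-- ... bulk INSERT / LOAD DATA here, ideally in primary key order ...
SET FOREIGN_KEY_CHECKS = 1;
SET UNIQUE_CHECKS = 1;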
4.2.3 InnoDB server-side settings
innodb_buffer_pool_size: this is the most important setting for InnoDB and has a decisive impact on its performance. The default is only 8M, which is why InnoDB performs poorly with the default settings. On a database server that runs only the InnoDB storage engine, it can be set to 60-80% of memory. More precisely, within the limits of available memory, set it about 10% larger than the total size of your InnoDB tablespaces.
innodb_data_file_path: specifies the space for table data and index storage, which can be one or more files. The last data file must be auto-extending, and only the last file is allowed to auto-extend: when space runs out, that file grows automatically (in 8MB increments) to hold additional data. For example, innodb_data_file_path=/disk1/ibdata1:900M;/disk2/ibdata2:50M:autoextend defines two data files on different disks. Data is placed in ibdata1 first; once it reaches 900MB, data goes into ibdata2, and once ibdata2 reaches 50MB it grows automatically in 8MB increments. If the disk fills up, you need to add another data file on another disk.
innodb_autoextend_increment: the default is 8M; if you insert large amounts of data you can increase it accordingly.
innodb_data_home_dir: the directory where tablespace data files are placed; by default it is the MySQL data directory. Putting it on a different partition from the MySQL installation can improve performance.
innodb_log_file_size: this parameter determines recovery speed. Too large and recovery will be slow; too small and query performance suffers. 256M is a common compromise between performance and recovery speed.
innodb_log_buffer_size: disks are slow, and writing the log directly to disk hurts InnoDB performance. This parameter sets the size of the log buffer, generally 4M; if you do large BLOB operations, you can increase it.
innodb_flush_log_at_trx_commit=2: this parameter controls how the log buffer is handled when a transaction commits.
1) =1: at each transaction commit, the log buffer is written to the log file and the file is flushed to disk. Truly ACID, but slowest.
2) =2: at each transaction commit, the log buffer is written to the log file, but the file is not flushed to disk. Only an operating system crash or power loss can lose the last second of transactions.
3) =0: the log buffer is written to the log file and flushed to disk once per second. A crash of the mysqld process can lose the last second of transactions.
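Since innodb_flush_log_at_trx_commit is a dynamic variable, it can also be changed at runtime rather than only in my.cnf, for example:

SET GLOBAL innodb_flush_log_at_trx_commit = 2;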
innodb_file_per_table: stores each InnoDB table and its indexes in its own file.
transaction-isolation=READ-COMMITTED: if the application can run at the READ COMMITTED isolation level, this setting gives a certain performance gain.
innodb_flush_method: sets how InnoDB synchronizes I/O:
1) default: use fsync().
2) O_SYNC: open files in sync mode; usually slower.
3) O_DIRECT: use direct I/O on Linux. Can significantly increase speed, especially on RAID systems, by avoiding extra data copies and double buffering (MySQL buffering plus OS buffering).
innodb_thread_concurrency: the maximum number of threads inside the InnoDB kernel.
1) A reasonable minimum is (num_disks + num_cpus) * 2.
2) You can disable this restriction by setting it to 1000.
5. Use columns of the same type for joins, and index them
If your application has many JOIN queries, you should make sure the fields being joined are indexed in both tables. That way, MySQL can internally optimize the join for you.
Also, the fields used for joining should be of the same type. For example, if you join a DECIMAL field with an INT field, MySQL cannot use the indexes. For string types, the character set also needs to be the same (the two tables may have different character sets, so check).
// look up companies in the same state as the user
$r = mysql_query("SELECT company_name FROM users
    LEFT JOIN companies ON (users.state = companies.state)
    WHERE users.id = $user_id");
// the two state fields should be indexed, of the same type, and use the same character set
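If the two state columns are not yet indexed, indexes could be added along these lines (a sketch based on the example above):

ALTER TABLE users     ADD INDEX idx_state (state);
ALTER TABLE companies ADD INDEX idx_state (state);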
6. Never ORDER BY RAND()
Want to shuffle the returned rows, or pick a random one? I don't know who invented this usage, but many novices like it, without realizing how terrible the performance problem is.
If you really need to shuffle the returned rows, there are N better ways to do it; this one only degrades your database's performance exponentially. The problem is that MySQL has to execute the RAND() function (which costs CPU time) for every single row before sorting, and even LIMIT 1 does not help, because the sort still has to happen.
The following example randomly picks a record
// Never do this:
$r = mysql_query("SELECT username FROM user ORDER BY RAND() LIMIT 1");

// This is better:
$r = mysql_query("SELECT count(*) FROM user");
$d = mysql_fetch_row($r);
$rand = mt_rand(0, $d[0] - 1);
$r = mysql_query("SELECT username FROM user LIMIT $rand, 1");
7. Avoid SELECT *
The more data you read from the database, the slower the query becomes. And if your database server and web server are separate machines, it also increases the load of the network transfer.
So you should develop the good habit of selecting only what you need.
// Not recommended
$r = mysql_query("SELECT * FROM user WHERE user_id = 1");
$d = mysql_fetch_assoc($r);
echo "Welcome {$d['username']}";

// Recommended
$r = mysql_query("SELECT username FROM user WHERE user_id = 1");
$d = mysql_fetch_assoc($r);
echo "Welcome {$d['username']}";
8. Always set an ID for each table
We should set an ID as the primary key for every table in the database; the best choice is an INT (preferably UNSIGNED) with the AUTO_INCREMENT flag set.
Even if your users table has a unique field such as "email", don't make it the primary key: using a VARCHAR type as the primary key degrades performance. Also, in your program you should use the table's ID to build your data structures.
Moreover, under the MySQL engine there are operations that rely on the primary key, and in those cases the choice of primary key becomes very important, for example clustering and partitioning.
There is only one exception here: the association table, whose primary key is composed of foreign keys to several other tables. For example, a student table has a student ID and a course table has a course ID; the score table is the association table between them, and its primary key is the combination of the student ID and the course ID, both of which are foreign keys.
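A minimal sketch of both cases, with made-up column names: a normal table with an auto-increment ID, and an association (score) table whose primary key is built from two foreign keys:

CREATE TABLE users (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
    email VARCHAR(100) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY (email)            -- unique, but not the primary key
);

CREATE TABLE score (
    student_id INT UNSIGNED NOT NULL,
    course_id  INT UNSIGNED NOT NULL,
    score      TINYINT UNSIGNED NOT NULL,
    PRIMARY KEY (student_id, course_id)   -- the association-table exception
);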
9. Use ENUM instead of VARCHAR
The ENUM type is very fast and compact. Internally it is stored as a TINYINT, but it is displayed as a string. This makes it perfect for fields that hold a list of options.
If you have a field such as "gender", "country", "nation", "state" or "department", and you know its possible values are limited and fixed, then you should use ENUM instead of VARCHAR.
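For example (a sketch, assuming a gender column currently declared as VARCHAR):

ALTER TABLE user MODIFY gender ENUM('male', 'female') NOT NULL;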
MySQL also has a "suggestion" feature (see tip 10) that tells you how to reorganize your table structure. When you have a VARCHAR field, that suggestion may tell you to change it to ENUM. You can get such advice with PROCEDURE ANALYSE().
10. Obtaining recommendations from PROCEDURE ANALYSE()
PROCEDURE ANALYSE() lets MySQL analyze your columns and the actual data in them, and gives you some useful advice. The advice is only useful when there is real data in the table, because the analysis needs data on which to base its decisions.
For example, if you created an INT primary key but do not have much data yet, PROCEDURE ANALYSE() may suggest changing the type to MEDIUMINT; or, for a VARCHAR field with little data, it may suggest changing it to ENUM. Such suggestions may simply be due to insufficient data, so they may not be accurate.
In phpMyAdmin, you can view these suggestions by clicking "Propose table Structure" while viewing the table.
It is important to note that these recommendations only become accurate as the amount of data in your table grows. And always remember: you are the one who makes the final decision.
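Outside phpMyAdmin, the same advice can be requested directly in SQL (a minimal sketch against the user table used in earlier examples):

SELECT * FROM user PROCEDURE ANALYSE();
-- the optional arguments limit how many distinct values and how much memory
-- the analysis may use when deciding whether to suggest ENUM:
SELECT * FROM user PROCEDURE ANALYSE(16, 256);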
11. Use NOT NULL where possible
Unless you have a very specific reason to use NULL values, you should always keep your fields NOT NULL. This may sound a bit controversial; please read on.
First, ask yourself how much difference there really is between "empty" and "NULL" (for an INT, that is 0 versus NULL). If you feel there is no difference, then don't use NULL. (Did you know that in Oracle, NULL and the empty string are the same?)
Do not assume that NULL needs no space; it requires extra space, and it makes comparisons in your program more complicated. Of course this doesn't mean you can never use NULL; reality is complicated, and there will still be cases where you need NULL values.
Here is an excerpt from MySQL's own documentation:
"NULL columns require additional space in the row to record whether their values is null. For MyISAM tables, each of the NULL column takes one bit extra, rounded up to the nearest byte. "
12. Prepared Statements
Prepared statements are a bit like stored procedures: a set of SQL statements processed on the server side. Using prepared statements brings many benefits, both in performance and in security.
Prepared statements check the variables you bind to them, which protects your program from "SQL injection" attacks. Of course you can also check those variables by hand, but manual checks are error-prone and often forgotten by programmers. When we use a framework or an ORM, this problem is handled for us.
On the performance side, prepared statements give a considerable advantage when the same query is run many times: you define parameters for the statement, and MySQL parses it only once.
Also, recent versions of MySQL transmit prepared statements in a binary protocol, which makes network transfer very efficient.
Of course, there are cases where we should avoid prepared statements, because the query cache does not support them, although this is said to be supported since version 5.1.
To use prepared statements in PHP, check the manual for the mysqli extension, or use a database abstraction layer such as PDO.
// create a prepared statement
if ($stmt = $mysqli->prepare("SELECT username FROM user WHERE state = ?")) {
    // bind parameter
    $stmt->bind_param("s", $state);
    // execute
    $stmt->execute();
    // bind result
    $stmt->bind_result($username);
    // move cursor
    $stmt->fetch();
    printf("%s is from %s\n", $username, $state);
    $stmt->close();
}
13. Non-buffered queries
Normally, when you execute an SQL statement in your script, your program stops and waits until the query returns before it continues executing. You can change this behaviour with unbuffered queries.
The PHP documentation for the mysql_unbuffered_query() function describes this very well:
"Mysql_unbuffered_query () sends the SQL query query to MySQL without automatically fetching and buffering the result rows As mysql_query () does. This saves a considerable amount of memory with SQL queries that produce large result sets, and can start working on t He result set immediately after the first row had been retrieved as you don ' t had to wait until the complete SQL query ha s been performed. "
In other words, mysql_unbuffered_query() sends the SQL statement to MySQL without automatically fetching and caching the results the way mysql_query() does. This can save a considerable amount of memory, especially for queries that produce large result sets, and you don't have to wait for all the results to come back: you can start working on them as soon as the first row is returned.
However, there are some limitations: you must either read all the rows or call mysql_free_result() before you can issue the next query, and mysql_num_rows() and mysql_data_seek() will not work. So think carefully about whether an unbuffered query fits your case.
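A minimal sketch of how this might look (the logs table and the process_log_row() function are hypothetical):

// process a large result set row by row without buffering it client-side
$r = mysql_unbuffered_query("SELECT * FROM logs");
while ($row = mysql_fetch_assoc($r)) {
    process_log_row($row);   // hypothetical per-row handler; memory use stays low
}
mysql_free_result($r);       // must free (or read all rows) before issuing the next query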
14. Save the IP address as UNSIGNED INT
Many programmers create a VARCHAR(15) field to store the IP as a string rather than as an integer. Stored as an integer, it takes only 4 bytes and gives you a fixed-length field. It also helps with queries, especially when you need a WHERE condition such as: ip BETWEEN ip1 AND ip2.
We must use UNSIGNED INT, because an IP address uses the full 32 bits of an unsigned integer.
In your queries you can use INET_ATON() to convert a string IP to an integer and INET_NTOA() to convert an integer back to a string IP. PHP also has similar functions, ip2long() and long2ip().
$r = "UPDATE users SET IP = Inet_aton (' {$_server[' remote_addr ']} ') WHERE user_id = $user _id";
15. Fixed-length tables are faster
If all the fields in a table are of fixed length, the whole table is considered "static" or "fixed-length": for example, the table contains no fields of type VARCHAR, TEXT, or BLOB. As soon as it contains one such field, the table is no longer a fixed-length static table, and the MySQL engine handles it in a different way.
Fixed-length tables improve performance because MySQL can search them faster: with a fixed length it is easy to compute the offset of the next row, so reads are naturally quick. If fields are not fixed-length, the engine has to do extra work each time it wants to find the next row.
Also, fixed-length tables are easier to cache and to repair. The only side effect is that fixed-length fields waste some space, because the space is allocated whether you use it or not.
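A minimal illustration (table and column names are made up): a MyISAM table whose columns are all fixed-length types is stored in static row format, while swapping the CHAR for a VARCHAR would make it dynamic.

-- all columns are fixed length, so MyISAM stores this table in static (fixed-length) format
CREATE TABLE user_flags (
    user_id   INT UNSIGNED NOT NULL,
    country   CHAR(2)      NOT NULL,   -- CHAR is fixed length; VARCHAR here would make rows dynamic
    is_active TINYINT      NOT NULL,
    PRIMARY KEY (user_id)
) ENGINE=MyISAM;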
Using the "vertical split" technique (see the next one), you can split your table into two that are fixed-length and one that is indefinite.
16. Vertical Segmentation
"Vertical Segmentation" is a method of turning a table in a database into several tables, which reduces the complexity of the table and the number of fields for optimization purposes. (Previously, in a bank project, saw a table with more than 100 fields, very scary)
Example One : One of the fields in the Users table is the home address, which is an optional field, and you do not need to read or rewrite this field frequently in addition to your personal information when working in a database. So, why not put him in another table? This will make your table better performance, we think is not, a lot of time, I for the user table, only the user ID, user name, password, user role, etc. will be used frequently. A smaller table will always have good performance.
Example Two : You have a field called "Last_login" that will be updated every time the user logs in. However, each update causes the table's query cache to be emptied. So, you can put this field in another table, so that you do not affect the user ID, user name, user role of the constant read, because the query cache will help you to add a lot of performance.
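A sketch of the split described in the two examples (column names are illustrative):

CREATE TABLE users (
    user_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    username VARCHAR(64)  NOT NULL,
    password CHAR(40)     NOT NULL,
    role     TINYINT      NOT NULL,
    PRIMARY KEY (user_id)
);

CREATE TABLE user_details (
    user_id      INT UNSIGNED NOT NULL,   -- same ID as in users
    home_address VARCHAR(255),
    last_login   DATETIME,
    PRIMARY KEY (user_id)
);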
In addition, note that you should not routinely join the tables formed by these split-off fields; otherwise performance will be worse than before the split, and it will drop by an order of magnitude.
17. Splitting a large DELETE or INSERT statement
If you need to perform a large DELETE or INSERT on a live website, you need to be very careful to avoid bringing the whole site to a halt: these two operations lock the table, and while the table is locked no other operation can get in.
Apache runs many child processes or threads, so it works quite efficiently, but our server does not want too many child processes, threads and database connections piling up; they consume a huge amount of server resources, especially memory.
If you lock the table for a while, say 30 seconds, then for a heavily visited site the processes/threads, database connections and open files that accumulate during those 30 seconds may not only crash your web service but also leave your whole server hanging.
So if you have a big batch job, make sure to split it up; using a LIMIT clause is a good way to do that. Here is an example:
while (1) {
    // delete only 1000 rows at a time
    mysql_query("DELETE FROM logs WHERE log_date <= '2009-11-01' LIMIT 1000");
    if (mysql_affected_rows() == 0) {
        // nothing left to delete, exit!
        break;
    }
    // take a short break each time
    usleep(50000);
}
18. The smaller the column the faster
For most database engines, hard disk operations can be the most significant bottleneck. So it's very helpful to have your data compact, because it reduces access to the hard drive.
See the Storage Requirements chapter of the MySQL documentation for all data types.
If a table will only ever hold a few rows (for example a dictionary or configuration table), there is no reason to use INT for the primary key; MEDIUMINT, SMALLINT, or an even smaller TINYINT will be more economical. And if you don't need to record the time of day, DATE is much better than DATETIME.
Of course, you also need to leave enough room for growth, otherwise you will regret it later. See the Slashdot example (November 06, 2009): a simple ALTER TABLE statement took more than 3 hours, because the table held 16 million records.
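A small sketch of a configuration-style table with compact column types, while still leaving some room to grow:

CREATE TABLE config (
    id      SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,  -- small, but with room for growth
    name    VARCHAR(32)  NOT NULL,
    value   VARCHAR(255) NOT NULL,
    created DATE         NOT NULL,   -- DATE instead of DATETIME when time of day is not needed
    PRIMARY KEY (id)
);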
19. Choose the right storage engine
MySQL has two main storage engines, MyISAM and InnoDB, each with its pros and cons. Cool Shell has discussed this before in the article "MySQL: InnoDB or MyISAM?".
MyISAM is suitable for applications with a large number of queries, but it is not so good for heavy write loads: even if you only need to update one field, the whole table is locked, and other processes, even readers, cannot do anything until the update completes. On the other hand, MyISAM computes SELECT COUNT(*) extremely fast.
InnoDB tends to be a more complex storage engine, and for some small applications it will be slower than MyISAM. But it supports row-level locking, so it does better when there are many writes, and it supports more advanced features such as transactions.
Here's the MySQL manual.
- target= "_blank" MyISAM Storage Engine
- InnoDB Storage Engine
20. Use an Object Relational Mapper (ORM)
With an ORM (Object Relational Mapper) you can gain reliable performance improvements. Everything an ORM can do can also be written by hand, but that requires a senior expert.
The most important thing about ORMs is "lazy loading": values are only actually fetched when they are needed. But you also need to be careful about the side effects of this mechanism, because it can easily degrade performance by generating many, many small queries.
An ORM can also batch your SQL statements into a single transaction, which is much faster than executing them one by one.
Currently, my personal favourite PHP ORM is Doctrine.
21. Be careful with "persistent connections"
Persistent connections exist to reduce the overhead of re-creating connections to MySQL. When a persistent connection is created, it stays open even after the database operation has finished. And since Apache reuses its child processes, the next HTTP request handled by the same child process reuses the same MySQL connection.
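A minimal sketch using the classic mysql extension (the credentials and database name are placeholders):

// open (or reuse) a persistent connection; it is not closed when the script ends
$link = mysql_pconnect('localhost', 'db_user', 'db_pass');
mysql_select_db('my_database', $link);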
22. Splitting databases and tables
Obviously, letting a main table (that is, a very important table, such as the user table) grow without limit is bound to hurt performance severely; splitting databases and tables is a very good way to optimize. In our case we had a members table with more than 10 million records, and queries were very slow. A colleague's approach was to hash it into 100 tables, members0 through members99, and distribute the records across these tables based on mid. The code looked roughly like this:
<?php
for ($i = 0; $i < 100; $i++) {
    //echo "CREATE TABLE db2.members{$i} LIKE db1.members<br>";
    echo "INSERT INTO members{$i} SELECT * FROM members WHERE mid % 100 = {$i}<br>";
}
?>
23. Modify MySQL table structure without downtime
Again with the members table: the original table structure was not designed well, and as the database kept running its redundant data grew enormously. A colleague handled it with the following approach:
Create a temporary table first:
CREATE TABLE members_tmp LIKE members;
Then modify the members_tmp table to the new structure, and use a loop like the one above to copy the data over. Copying 10 million rows in one go is not a good idea; since mid is the primary key, the data is copied interval by interval, roughly 50,000 rows per batch (the code is omitted here, but see the sketch below).
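A hedged sketch of what the omitted interval-by-interval copy could look like (the batch boundaries are illustrative):

-- copy roughly 50,000 rows per batch, keyed on the primary key mid
INSERT INTO members_tmp SELECT * FROM members WHERE mid > 0     AND mid <= 50000;
INSERT INTO members_tmp SELECT * FROM members WHERE mid > 50000 AND mid <= 100000;
-- ... repeat until the whole table has been copied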
Then rename the new table to replace it:
RENAME TABLE members TO members_bak, members_tmp TO members;
That way you can basically update the table structure with no data loss and no downtime. In practice the table is locked during the RENAME, so choosing a time with little online traffic is part of the trick. After this operation, a table that used to be more than 8GB shrank to a bit over 2GB.
In addition, we also ran into the odd behaviour of FLOAT fields in MySQL: the numbers you see in phpMyAdmin simply cannot be used as query conditions. Thanks to classmate ZJ for sharing this.
24. MySQL performance optimization parameters
Traffic to the company website keeps growing, and MySQL naturally became the bottleneck, so recently I have been studying MySQL optimization. The first step is, of course, to tune MySQL's system parameters: for a database behind a heavily visited site (more than 200,000 visits per day), you cannot expect MySQL's default parameters to keep it running smoothly.
From material found on the web and my own experiments, I think the following system parameters are critical:
(1), Back_log:
The number of connection requests MySQL can have queued. It matters when the main MySQL thread gets very many connection requests in a very short time; the main thread then needs some time (however short) to check each connection and start a new thread.
The back_log value is how many requests can be held in the queue for that short period before MySQL temporarily stops answering new requests. In other words, it is the size of the listen queue for incoming TCP/IP connections, and you only need to increase it if you expect many connections in a short period. Your operating system has its own limit on this queue size, and setting back_log higher than that limit has no effect.
If, when looking at your host's process list, you see many rows like "264084 | unauthenticated user | xxx.xxx.xxx.xxx | NULL | Connect | NULL | login | NULL", it is time to increase back_log. The default value is 50; I changed it to 500.
(2), Interactive_timeout:
The number of seconds the server waits for activity on an interactive connection before closing it. An interactive client is one that uses the CLIENT_INTERACTIVE option in mysql_real_connect(). The default value is 28800; I changed it to 7200.
(3), Key_buffer_size:
Index blocks are buffered here and shared by all threads. key_buffer_size is the size of the buffer used for index blocks; increase it to get better index handling (for all reads and multiple writes), to as much as you can afford. If you make it too large, the system will start paging and become really slow. The default value is 8388600 (8M); my MySQL host has 2GB of memory, so I changed it to 402649088 (400MB).
(4), Max_connections:
The number of clients allowed to connect simultaneously. Increasing this value increases the number of file descriptors mysqld requires. It should be increased, otherwise you will frequently see the "Too many connections" error. The default value is 100; I changed it to 1024.
(5), Record_buffer:
Each thread that performs a sequential scan allocates a buffer of this size for each table it scans. If you do many sequential scans, you may want to increase this value. The default value is 131072 (128K); I changed it to 16773120 (16M).
(6), Sort_buffer:
Each thread that needs to sort allocates a buffer of this size. Increase this value to speed up ORDER BY and GROUP BY operations. The default value is 2097144 (2M); I changed it to 16777208 (16M).
(7), Table_cache:
The number of open tables for all threads. Increasing this value increases the number of file descriptors mysqld requires; MySQL needs two file descriptors for each unique open table. The default value is 64; I changed it to 512.
(8), Thread_cache_size:
The number of threads to keep cached for reuse. When a client disconnects, its threads are put into the cache if there is room; when a new connection arrives, a thread is taken from the cache if one is available. If many new threads are being created, you can increase this value to improve performance. You can see the effect of this variable by comparing the Connections and Threads_created status values. I set it to 80.
(10), Wait_timeout:
The number of seconds the server waits for activity on a connection before closing it. The default value is 28800; I changed it to 7200.
Note: parameters can be adjusted by modifying the /etc/my.cnf file and restarting MySQL. This is work that calls for some caution; the values above are just my own opinions, and you can adjust them further according to your host's hardware (especially the amount of memory).
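As a hedged illustration only, the [mysqld] section of /etc/my.cnf with the values mentioned above might look like this (record_buffer, sort_buffer and table_cache are the old option names; newer versions use read_buffer_size, sort_buffer_size and table_open_cache):

[mysqld]
back_log            = 500
interactive_timeout = 7200
key_buffer_size     = 400M
max_connections     = 1024
record_buffer       = 16M
sort_buffer         = 16M
table_cache         = 512
thread_cache_size   = 80
wait_timeout        = 7200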
25. Build indexes on the fields used in searches
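As a hedged illustration of this tip (table and column names are made up), an index on a field used in search conditions can be added like this:

ALTER TABLE members ADD INDEX idx_username (username);
-- for word searches in text columns on MyISAM, a full-text index can be used instead:
ALTER TABLE articles ADD FULLTEXT INDEX ft_title_body (title, body);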