Summary of my experience: MySQL database optimization tips


1. What can and should we optimize?


Hardware


Operating System/software library


The SQL server (configuration and queries)


Application programming interface (API)


Applications


2. Optimize hardware


If you need huge database tables (larger than 2 GB), you should consider a 64-bit architecture such as Alpha or the upcoming IA-64. Because MySQL uses many 64-bit integers internally, a 64-bit CPU gives better performance.


For large databases, the order of optimization is usually: more RAM, faster disks, then more CPU power.


More memory lets the most frequently used index key pages stay in memory, which speeds up index updates.
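
For example, a minimal sketch, assuming MyISAM tables and a server new enough to support SET GLOBAL (otherwise set key_buffer_size in my.cnf); the 256 MB figure is only an illustration:

    -- How often index reads are served from memory vs. from disk
    SHOW STATUS LIKE 'Key_read%';

    -- Let more index pages stay in RAM (also settable permanently
    -- via key_buffer_size in my.cnf)
    SET GLOBAL key_buffer_size = 256 * 1024 * 1024;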


If you do not use transaction-safe tables, or you have big tables and want to avoid long file checks, a UPS lets the system shut down cleanly when a power failure occurs.


For systems whose database lives on a dedicated server, consider Gigabit Ethernet. Latency is as important as throughput.


3. Disk optimization


Give the system, programs, and temporary files a dedicated disk. If you make very many modifications, put the update logs and transaction logs on a dedicated disk as well.

Low seek time is very important for the database disk. For large tables, you can estimate that you will need about log(row_count) / log(index_block_length / 3 * 2 / (key length + data pointer length)) + 1 seeks to find a row. For a table with 500,000 rows indexed on a MEDIUMINT column (3-byte key, 4-byte data pointer, 1,024-byte index blocks), this gives log(500,000) / log(1024 / 3 * 2 / (3 + 4)) + 1 = 4 seeks. The index itself needs about 500,000 * 7 * 3/2 = 5.2 MB of space. In practice most of the blocks will be cached, so usually only one or two seeks are needed.

For writes, however, you need the four seeks above to find where to place the new index key, and then normally two more seeks to update the index and write the row.

For a very large database, your application will be limited by disk seek speed, which grows roughly as N log N as the data volume increases.

Spread databases and tables across different disks. In MySQL, you can use symbolic links for this.

Striping your disks (RAID 0) increases both read and write throughput.

Striping with mirroring (RAID 0+1) is safer and increases read throughput; write throughput is somewhat reduced.

Do not use mirroring or RAID (other than RAID 0) for disks that hold temporary files or data that can easily be recreated.

On Linux, run hdparm -m 16 -d 1 on your disks at boot time to enable multi-sector reads/writes and DMA. This can improve read/write performance by 5 to 50%.

On Linux, mount your disks with the async (default) and noatime options.

For some specific applications, a RAM disk can be used for particular tables, but this is usually not needed.

4. Optimize the operating system


Avoid swapping. If you are short of memory, add more RAM or configure your system to use less memory.

Do not put your data on NFS disks.

Increase the number of open files allowed for the system and for the MySQL server (add ulimit -n # to the safe_mysqld script).

Increase the number of processes and threads allowed on the system.

If you have a relatively small number of large tables, tell the file system not to break files up across different cylinders (Solaris).

Use a file system that supports large files (Solaris).

Choose which file system to use. On Linux, ReiserFS is very fast for opening, reading, and writing files, and file checks take only a few seconds.

5. Select the application programming interface


PERL

Portable across operating systems and databases.

Suitable for rapid prototyping.

You should use the DBI/DBD interface.

PHP

Easier to learn than PERL.

Uses fewer resources than PERL.

You can get more speed by upgrading to PHP 4.

C

Native Interface of MySQL.

Fast and gives you more control.

Lower level, so you have to do more of the work yourself.

C ++

Higher level, giving you more time to write applications.

Still under development.

ODBC

Runs on Windows and Unix.

Portable across almost all SQL servers.

Slow. MyODBC is only a simple pass-through driver, about 19% slower than the native interface.

Too many ways to do the same thing. It is hard to get things working with many different ODBC drivers; there are still bugs in different areas.

Microsoft occasionally changes the interface.

Unclear future (Microsoft prefers OLE over ODBC).

JDBC

In theory, portable across operating systems and databases.

Can run in web clients.

Python and others

They may be good, but we don't need them.

6. Optimize applications


Focus on solving the problem.

When writing an application, you should decide what is the most important:

Speed

Portability between operating systems

Portability between SQL Servers

Use persistent connections.

Cache data in the application to reduce the load on the SQL Server.

Do not query unnecessary columns in the application.

Do not use SELECT * FROM table_name...
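
A minimal sketch (the customers table and its columns are hypothetical): ask only for the columns the application actually uses, so the server reads and sends less data.

    -- Avoid this: fetches every column, including ones the application never reads
    -- SELECT * FROM customers WHERE country = 'SE';

    -- Better: name only the columns you actually need
    SELECT customer_id, name, email
    FROM customers
    WHERE country = 'SE';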

Test all parts of the application, but put most of the effort into testing the whole application under the worst realistic load. By working in a modular fashion, you can replace a bottleneck with a fast "dummy module" and then easily identify the next bottleneck.

If you make a large number of modifications in a batch, use LOCK TABLES and combine multiple UPDATEs or DELETEs into one locked batch (see the sketch below).
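
A minimal sketch with hypothetical table and column names; under LOCK TABLES, MyISAM flushes the key cache once for the whole batch instead of after every statement:

    LOCK TABLES orders WRITE;

    UPDATE orders SET status = 'shipped' WHERE batch_id = 42;
    UPDATE orders SET carrier = 'DHL' WHERE batch_id = 42;
    DELETE FROM orders WHERE status = 'cancelled' AND batch_id = 42;

    UNLOCK TABLES;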

7. Use portable applications


Perl DBI/DBD

ODBC

JDBC

Python (or other languages with common SQL interfaces)

You should only use SQL constructs that exist in all of your target SQL servers, or that can easily be emulated with other constructs. The crash-me pages on www.mysql.com can help you with this.

Write wrappers that provide the missing functionality for a given operating system or SQL server.

8. If you need more speed, you should:


Identify bottlenecks (CPU, disk, memory, SQL Server, operating system, API, or application) and focus on solving them.

Use extensions that give you more speed and flexibility.

Get to know your SQL server so that you can use the fastest possible SQL constructs for your problem and avoid bottlenecks.

Optimize table layout and query.

Use replication for faster select speeds.

If you have a slow network connection to the database, use the compressed client/server protocol.

Don't be afraid to make the first version of your application less than perfectly portable; once you have solved the problem, you can always optimize it later.


9. Optimize MySQL


Select the compiler and compilation options.

Find the best startup options for your system.

Read the MySQL Reference Manual and Paul DuBois's MySQL book. (A Chinese edition is available.)

Use EXPLAIN SELECT, SHOW VARIABLES, SHOW STATUS, and SHOW PROCESSLIST (see the example below).
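
A short sketch (table and column names are hypothetical) of how these statements are typically combined when hunting down a slow query:

    -- Does the query use an index, and how many rows will it examine?
    EXPLAIN SELECT name FROM customers WHERE country = 'SE';

    -- Inspect server configuration and runtime counters
    SHOW VARIABLES LIKE 'key_buffer_size';
    SHOW STATUS LIKE 'Select_full_join';

    -- See what the server is doing right now
    SHOW PROCESSLIST;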

Understand how the query optimizer works.

Optimize the table format.

Maintain your tables (myisamchk, CHECK TABLE, OPTIMIZE TABLE).

Use MySQL's extensions to get things done faster.

If you notice that you need some function in many places, write it as a MySQL UDF (user-defined function).
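
A sketch of registering a UDF, assuming you have already written it in C/C++ and compiled it into a shared library; the function name my_metaphone and the library name udf_example.so are placeholders:

    -- Register a compiled function with the server
    CREATE FUNCTION my_metaphone RETURNS STRING SONAME 'udf_example.so';

    -- Use it like any built-in function
    SELECT my_metaphone(name) FROM customers LIMIT 10;

    -- Remove it again
    DROP FUNCTION my_metaphone;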

Do not use table-level or column-level GRANTs unless you really need them; they add a privilege check to every query.
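
For example (user and schema names are hypothetical, and depending on your MySQL version you may need to CREATE USER first), prefer a database-level grant; column-level privileges force an extra check on every statement that touches the table:

    -- Preferred: a single database-level grant
    GRANT SELECT, INSERT, UPDATE, DELETE ON shop.* TO 'appuser'@'localhost';

    -- Only when really needed: column-level privileges add per-statement
    -- privilege-checking overhead
    GRANT SELECT (name, email) ON shop.customers TO 'reporting'@'localhost';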

Purchase MySQL technical support to help you solve the problem :)

10. Compile and install MySQL


By selecting the best possible compiler for your system, you can generally achieve a performance improvement of 10-30%.

On the Linux/Intel platform, compile MySQL with pgcc (the Pentium-optimized version of gcc). However, the resulting binary will run only on Intel Pentium CPUs.

For a specific platform, use the optimization options recommended in the MySQL Reference Manual.

In general, the native compiler for a given platform (such as Sun Workshop for SPARC) should give better performance than gcc, but this is not always the case.

Compile MySQL with the character set you will use.

Build a statically linked mysqld binary (configure with --with-mysqld-ldflags=all-static) and strip the final executable with strip sql/mysqld.

Note that since MySQL does not use C++ exceptions, compiling MySQL without exception support gives a large performance win.

If the operating system supports native threads, use them (instead of MIT-pthreads).

Test the final binary with the MySQL benchmarks.

11. Maintenance


If possible, run OPTIMIZE TABLE from time to time. This is especially important for tables with variable-length rows that are updated a lot.

Occasionally update the key distribution statistics of your tables with myisamchk -a. Remember to shut down MySQL before doing so.

If the files become fragmented, it may be worthwhile to copy everything to another disk, clear the old disk, and copy the files back.

If any problem occurs, check the tables with myisamchk or CHECK TABLE (see the sketch below).
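
A sketch of the SQL-level maintenance statements (the table name is hypothetical); ANALYZE TABLE updates the key distribution statistics from inside the server, so unlike myisamchk -a it does not require shutting MySQL down:

    -- Verify table integrity
    CHECK TABLE orders;

    -- Reclaim space and defragment; important after many updates to
    -- variable-length rows
    OPTIMIZE TABLE orders;

    -- Refresh key distribution statistics used by the optimizer
    ANALYZE TABLE orders;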

Use mysqladmin -i10 processlist extended-status to monitor MySQL's status.

With the MySQL GUI client program, you can monitor the process List and status in different windows.

Use mysqladmin debug to obtain information about locking and performance.

12. Optimize SQL


Use SQL for the things it is good at, and leave other work to your application. Use the SQL server to (an example follows this list):


Find the rows based on the WHERE clause.

JOIN tables

GROUP BY

ORDER BY

DISTINCT
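
For example, a minimal sketch with hypothetical tables that pushes the filtering, joining, grouping, ordering, and DISTINCT work into one statement instead of post-processing raw rows in the application:

    SELECT c.country,
           COUNT(DISTINCT c.customer_id) AS customers,
           SUM(o.total) AS revenue
    FROM customers c
    JOIN orders o ON o.customer_id = c.customer_id
    WHERE o.status = 'paid'
    GROUP BY c.country
    ORDER BY revenue DESC;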

Do not use SQL to:


Validate data (such as dates)

Act as a calculator

Tips:


Use keys (indexes) wisely.

Keys are good for searching, but inserting into or updating indexed columns is costly.

Keep your data in third normal form, but do not be afraid to duplicate information or create summary tables if you need more speed.

Instead of running GROUP BY on a large table over and over, create a summary table from it and query the summary table (see the sketch below).
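
A minimal sketch with hypothetical names: rebuild the summary occasionally and serve reads from it instead of re-running the GROUP BY over the big table.

    -- Regenerate the summary from time to time (e.g. from a cron job)
    DROP TABLE IF EXISTS daily_sales_summary;
    CREATE TABLE daily_sales_summary AS
        SELECT sale_date, product_id, SUM(amount) AS total_amount
        FROM sales
        GROUP BY sale_date, product_id;

    -- Queries now hit the small summary table instead of the big one
    SELECT sale_date, total_amount
    FROM daily_sales_summary
    WHERE product_id = 42;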

UPDATE table_name SET count = count + 1 WHERE key_column = constant is very fast.

For a large table, it is often better to regenerate the summary table from time to time than to try to keep it up to date continuously.

Take advantage of default values on INSERT: insert values explicitly only when they differ from the defaults (see the sketch below).
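
A sketch (hypothetical table): list only the columns whose values differ from their defaults and let the server fill in the rest.

    -- created_at, status, etc. are assumed to have DEFAULT values,
    -- so they are not listed here
    INSERT INTO orders (customer_id, total)
    VALUES (1001, 59.90),
           (1002, 12.50),
           (1003, 7.25);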

13. Speed differences between different SQL servers (in seconds)


 


Reading 2,000,000 rows by key:

                  NT     Linux
    mysql         367    249
    mysql_odbc    464
