A MySQL administrator's collection: ten essential tools


"IT168 Zhuangao" the author of this article, Daniel Nichter, is the developer of the MySQL tool, who recommended 10 essential tools for MySQL administrators. The following is the full text content:

MySQL is a complex system that needs many assistive tools for repair, diagnosis, and optimization. Fortunately for administrators, MySQL's popularity has attracted a large number of developers who produce high-quality, open-source tools addressing the complexity, performance, and health of MySQL systems, and most of them are free.

The following ten open-source tools are a valuable asset for MySQL users, covering situations that range from a single instance to multi-node environments. In this roundup you will find tools to help you back up MySQL data, improve performance, guard against data drift, and pull critical diagnostic data from logs when problems arise.

Using these tools has several advantages over building your own in-house. First, thanks to their wide use, they are more mature and better exercised in real deployments. Second, because they are free and open source, they benefit from the knowledge and experience of the ever-growing MySQL community. Finally, many of these tools are rigorously developed and professionally supported (whether free or commercial), so they keep evolving to match the changing MySQL landscape.

Keep in mind that there are many other useful tools that deserve attention but are not covered here. In choosing these recommendations I favored tools that are free and open source, with functionality and ease of use as secondary criteria. Note also that all but one of these tools are UNIX command-line programs, because MySQL deployment and development is most common on UNIX systems. If you do not find your favorite tool among my recommendations, please share your experience in the comments below the article.

With the preamble out of the way, here are the ten essential MySQL tools.

MySQL essential tool No. 1: mk-query-digest

Nothing is more maddening than poor MySQL performance. It is often assumed reflexively that inadequate hardware is to blame, but in most cases the real culprit lies elsewhere: performance is usually dragged down by a few slow queries that block other queries, producing a vicious cycle of slow response times. Since optimizing queries costs far less than upgrading hardware, the logical place to start optimizing is the query log.

Database administrators should analyze the query log regularly to track fluctuations in the operating environment. If you have never done this analysis, start now. If you lack experience with it, rely on a third-party tool; many people suspect such software produces nothing but pretty fictions after a lot of work, but in fact these tools are usually genuinely effective.

Among the current choices, mk-query-digest is the best query log analysis tool. It was co-written by Baron Schwartz and me, and its maturity, documentation, and test coverage are all excellent. MySQL itself ships with a query log parser called mysqldumpslow, but that tool is outdated, imprecise, and lacks the broad real-world usage to back it up. Other well-known query log analyzers, including mysqlsla, which I wrote some years ago, share the same drawbacks as mysqldumpslow.

mk-query-digest parses the query log and automatically generates reports that aggregate execution times and other metrics. Because query logs are huge, sometimes containing millions of entries, this kind of analysis really has to be done with a dedicated tool.

mk-query-digest helps you find the queries that take the longest compared with the rest. Optimizing these slow queries makes the whole MySQL system run faster, and worst-case response latency drops accordingly. Query optimization itself is an art full of nuance, but the basic loop is always the same: find the slow queries, optimize them, and improve response times.

The tool is very easy to use: run mk-query-digest slow-query.log, where slow-query.log is MySQL's slow query log containing the slow-running queries. The tool also provides a query review feature that lists only the queries you have not yet reviewed or approved. As a result, you deal only with newly appearing queries, and tedious log analysis becomes faster and more efficient.
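A minimal sketch of both modes, assuming a typical slow-log location (the log path, report file, and review-table coordinates are illustrative):

# Summarize the slow query log; the worst queries are ranked at the top of the report
mk-query-digest /var/log/mysql/slow-query.log > digest-report.txt

# Query review mode: record a fingerprint of each query in a table so that
# later runs can report only queries you have not yet reviewed
mk-query-digest --review h=localhost,D=maatkit,t=query_review /var/log/mysql/slow-query.log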

Download: http://maatkit.org/get/mk-query-digest

Maintainers: Daniel Nichter and Baron Schwartz

More information: http://maatkit.org/ | http://code.google.com/p/maatkit/

MySQL essential tool No. 2: mydumper

The ability to dump data quickly is critical for backups and server cloning. Unfortunately, mysqldump, which ships with MySQL, is single-threaded, making it too slow for the real-world needs of data-intensive users. The good news is that mydumper, a new-generation tool, is multi-threaded, which makes it roughly ten times faster than traditional mysqldump.

mydumper, also known as MySQL Data Dumper, does not manage backup sets, differentials, or the other components of a complete backup plan by itself. It simply dumps MySQL data as fast as possible, which is valuable when a job is tightly time-boxed, such as squeezing a quick backup into a lull in activity. It is also a fine choice when backups must be taken frequently.

Technically, one caveat of mydumper is that tables must be locked during the dump, so it is probably unsuitable for backups during business hours. Then again, professional data recovery costs hundreds of dollars per hour, and even if the data cannot be recovered you still get a bill rather than an apology. By contrast, mydumper is completely free and performs well for basic backup work.

mydumper is also convenient for cloning a whole server. Other tools tend to copy the entire disk, but all you really need is the data in MySQL, which mydumper extracts quickly and precisely. Servers on cloud platforms are ideal candidates for cloning with mydumper: simply copy the MySQL data from the existing server into the new instance.

Cloning has proven effective for creating slave servers, for benchmarking, and for templating, but where cloning really shines is in development and testing. In a dynamic MySQL environment, quickly replicating and testing software before pushing it to production is a critical step. With mydumper you can quickly stand up servers nearly identical to the source, emulating the production server, so test results come much closer to real-world behavior.
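A minimal sketch of a dump-and-restore round trip with mydumper and its companion loader, myloader (thread counts and paths are illustrative):

# Dump all schemas using four parallel worker threads
mydumper --threads 4 --outputdir /backups/mysql-$(date +%F)

# Load the dump into a new server, again in parallel
myloader --threads 4 --directory /backups/mysql-2011-05-01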

Download: https://launchpad.net/mydumper/+download

Maintainers: Domas Mituzas, Andrew Hutchings, Mark Leith

More information: http://www.mydumper.org/ | https://launchpad.net/mydumper/

MySQL essential tool No. 3: XtraBackup and xtrabackup-manager

If your database is in use around the clock, including through the night, locking tables for a backup simply is not an option. In that case XtraBackup, also known as Percona XtraBackup, is the tool of choice: it eliminates the need to lock tables during backup and is the only free, open-source tool of its kind. Comparable dedicated non-locking backup software can be quite expensive, up to $5,000 per server.

XtraBackup also supports incremental backups, letting each new backup process only what has changed since the previous one. This feature is especially handy when the underlying data set is large but the changes between backups are relatively small.

In addition, a companion tool derived from XtraBackup is maturing: xtrabackup-manager, which simplifies managing a full backup plan. It has not been around long and is still in development, but its potential should not be overlooked. It offers advanced features, including consolidating backups across a group of servers and managing backup-set retention. Together, XtraBackup and xtrabackup-manager form a powerful, free backup solution.
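A minimal sketch using innobackupex, the wrapper script distributed with XtraBackup (backup paths and the timestamped directory name are illustrative):

# Full backup without locking InnoDB tables
innobackupex /backups/full

# Incremental backup containing only the changes since that full backup
innobackupex --incremental /backups/inc --incremental-basedir=/backups/full/2011-05-01_12-00-00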

Download: http://www.percona.com/software/percona-xtrabackup/downloads/

Maintainer: Percona

More information: http://www.percona.com/docs/wiki/percona-xtrabackup:start | https://launchpad.net/percona-xtrabackup

Download (xtrabackup-manager): http://code.google.com/p/xtrabackup-manager/

Maintainer: Lachlan Mulcahy

More information: http://code.google.com/p/xtrabackup-manager/ | http://mysqlsoapbox.blogspot.com/

MySQL essential tool No. 4: tcprstat

tcprstat is probably the most esoteric of the ten tools recommended here. It watches TCP requests and prints low-level response-time statistics. Once you become accustomed to measuring performance by response time, tcprstat's value is considerable.

The underlying principle is elaborated in the book Optimizing Oracle Performance, co-written by Cary Millsap and Jeff Holt, and it applies just as well to MySQL. Fundamentally, MySQL is no exception: a service operates by receiving a request (the incoming query), servicing the request (the execution time), and sending back a response (the result set). A service's actual response time is the span from the start of the request to the sending of the response. The shorter the response time, the more requests can be served in the same period.

Parallelism and other low-level factors also play a role here, but it helps to simplify: an eight-hour working day is 28,800 seconds. At 500 milliseconds per request, that is 57,600 requests per day; shorten each request by 400 milliseconds, from 500 milliseconds down to 100, and capacity rises to 288,000, or 230,400 additional requests per day. tcprstat is a powerful aid in reaching that kind of goal.

Space permits only a sketch of the tool's function here (measuring response time is the first step in optimizing MySQL response time), in the hope of piquing your interest. If a glimpse has convinced you to go deeper, try tcprstat after reading Optimizing Oracle Performance.
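A minimal sketch, assuming MySQL listens on its default port (options follow tcprstat's documented usage):

# -p port to watch, -t seconds per sample, -n number of samples (0 = run forever)
tcprstat -p 3306 -t 1 -n 0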

Download: (source) https://launchpad.net/tcprstat | (binary) http://www.percona.com/docs/wiki/tcprstat:start

Maintainer: Percona

More information: http://www.percona.com/docs/wiki/tcprstat:start | https://launchpad.net/tcprstat

MySQL essential tool No. 5: mk-table-checksum

"Data bias" is a major problem that exists in the dynamic MySQL environment widely. The actual meaning is: the subordinate data is not properly synchronized with the principal data, the main reason is that there is write operation on the subordinate data side or the subject data side executes the query instruction with uncertainty. To make things worse, data bias is likely to be overlooked by managers until there are serious consequences. Mk-table-checksum should be on the scene. This tool is useful for validating the consistency of related data content in two or more lists in parallel when performing complex, sensitive calculations.

mk-table-checksum works both on standalone servers and across replication topologies, and replication support is its biggest highlight. Data consistency between master and slave is essential in replication, but because changes on the master reach the slave with some lag (replication delay), simply reading and comparing data on both servers proves nothing: the data is in constant flux and never complete at any given moment. Locking tables and waiting for all changes to replicate before comparing would work, but it means suspending normal service. mk-table-checksum lets you verify the differences between master and slave without locking tables; for how this technique is implemented, see the tool documentation at http://www.maatkit.org/doc/mk-table-checksum.html

Beyond replication consistency, data verification is useful in other ways, such as coping with table size. MySQL's CHECKSUM TABLE statement is adequate for small tables, but large tables usually need to be processed in chunks to avoid long locks or overloading the CPU and memory during the checksum calculation.
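For a small table, the built-in statement really is enough; a sketch via the mysql command-line client (database and table names are illustrative):

# Quick consistency fingerprint of one small table
mysql -e "CHECKSUM TABLE mydb.settings"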

Chunking also addresses a second big problem: the need to check data consistency periodically. Data drift may be a one-off accident, but in practice, despite an administrator's best efforts, it tends to strike repeatedly. mk-table-checksum is designed to check tables on a schedule, working chunk by chunk until the full table has been processed. This persistent approach makes routine screening for data drift practical.
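A minimal sketch of the replication-aware mode described above, using Maatkit's documented DSN-style options (host name, checksum table, and chunk size are illustrative):

# On the master: checksum every table in chunks, storing results in a table
# that replicates to the slaves
mk-table-checksum h=master.example.com --replicate test.checksum --chunk-size 500000

# Later: report any tables whose checksums differ on the slaves
mk-table-checksum h=master.example.com --replicate test.checksum --replicate-check 2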

Download: http://maatkit.org/get/mk-table-checksum

Maintainers: Daniel Nichter and Baron Schwartz

More information: http://maatkit.org/ | http://code.google.com/p/maatkit/

MySQL essential tool No. 6: stalk and collect

Sometimes a problem strikes while no one is watching, or while everyone is home asleep, and as we all know it is difficult or impossible to diagnose the state of MySQL and the server correctly after the fact. Common practice is to write a script that waits for the problem and records extra diagnostic data, but the catch is knowing what to record, since nobody knows in advance what is breaking the system. And when a system that normally runs fine starts misbehaving, the temptation is to patch it up quickly rather than dig in and analyze.

Fortunately, some people who know exactly what matters when MySQL crashes have written two troubleshooting tools, stalk and collect, for precisely this situation. The first waits for a condition that signals trouble, then runs the second. It may not sound like much, but in fact the pair simply and efficiently collects all sorts of details that bear on the problem.

First, stalk runs collect at configured intervals, which avoids piling up redundant data and keeps the forensic record organized. Then collect gathers MySQL's own status reports plus data you might not have thought of, including open files, system calls made by the application, network traffic, and much more. If you end up having to bring in a professional MySQL consulting team, you will already have the information they will ask for.

stalk and collect can be configured as needed, so they can handle almost any failure condition. The only requirement is a definable condition for stalk to trigger on. If several conditions are suspected of causing the failure, you may need someone who knows your MySQL environment to set up a broader watch. In fact, the root cause of a MySQL crash can lurk entirely outside the system.

stalk and collect can also serve as an active defense. For example, if you know there should never be more than 50 active MySQL connections at once, stalk can proactively watch for that. In other words, these two tools help you get data on problems while they are still vague and half-understood.
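The trigger is easiest to illustrate with the option names of pt-stalk, the tool's later incarnation in Percona Toolkit; the original aspersa script expresses the same kind of trigger through its configuration (the threshold below matches the 50-connection example and is illustrative):

# Run the collector when Threads_connected stays above 50 for five checks in a row
pt-stalk --function status --variable Threads_connected --threshold 50 --cycles 5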

Download: http://aspersa.googlecode.com/svn/trunk/stalk | http://aspersa.googlecode.com/svn/trunk/collect

Maintainer: Baron Schwartz

More information: http://aspersa.googlecode.com/svn/html/index.html | http://code.google.com/p/aspersa/

MySQL essential tool No. 7: mycheckpoint

No one wants to learn of a problem only when it is already too late to head off, so monitoring the MySQL environment in real time through visual charts is an important way to catch trouble before it starts.

There are many free and commercial MySQL monitoring applications; some are dedicated to MySQL, others are generic tools with MySQL plugins or templates. mycheckpoint deserves attention because it is not only free and open source but MySQL-specific, with all the essential functions ready to use.

Like most monitoring solutions today, mycheckpoint works by sampling status at regular intervals and charting the results. (The original article shows an example chart here.)
mycheckpoint can be configured to monitor MySQL and server metrics at the same time, such as InnoDB buffer pool flushes, temporary table creation, operating system load, memory usage, and so on. If you dislike reading charts, mycheckpoint can generate text reports instead.

As with stalk, alert conditions can be defined that trigger e-mail notifications, though there is no companion tool like collect to gather extra troubleshooting data. Another useful mycheckpoint feature is monitoring MySQL variables, which can expose potential problems or catch changes to MySQL that should never have happened.

Monitoring MySQL is not only for data centers or large fleets. Even if you run a single MySQL server, monitoring is still essential: it is the medium through which we know exactly what our systems are doing, and thus anticipate or avoid failures.
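A minimal sketch of getting started, following mycheckpoint's deploy-then-sample workflow (connection credentials are illustrative):

# One-time setup: create the schema in which mycheckpoint stores its samples
mycheckpoint --user=monitor --password=secret --host=localhost deploy

# Take one sample; schedule this command from cron every few minutes
mycheckpoint --user=monitor --password=secret --host=localhost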

Download: http://code.google.com/p/mycheckpoint/downloads/list

Maintainer: Shlomi Noach

MySQL essential tool No. 8: Shard-Query

Still suffering slow queries over many partitions or data shards? Shard-Query can speed such workloads up dramatically by running the pieces in parallel. Queries that use the following constructs can get the most out of Shard-Query:

Subqueries in the FROM clause

UNION and UNION ALL

IN

BETWEEN

The aggregate functions SUM, COUNT, MIN, and MAX can also be parallelized over these constructs. For example, the following query (year range illustrative) can be executed in parallel by Shard-Query:

SELECT DayOfWeek, COUNT(*) AS c
FROM ontime_fact
JOIN dim_date USING (date_id)
WHERE year BETWEEN 2000 AND 2008  -- illustrative range
GROUP BY DayOfWeek
ORDER BY c DESC;


In benchmark tests, parallel execution cut this query's response time by about 85 percent, from 21 seconds down to 3 seconds.

Shard-Query is not a standalone tool; it depends on other programs such as Gearman, and setup is relatively involved. But if your data partitioning and queries fit the structures listed above, the effort is worth it: the optimization payoff is dramatic.

Download (SVN checkout): http://code.google.com/p/shard-query/source/checkout

Maintainer: Justin Swanhart

More information: http://code.google.com/p/shard-query/

MySQL essential tool No. 9: mk-archiver

As tables grow, every query against them takes longer. Poor response times have many causes, of course, but if you have already optimized everything else, the remaining bottleneck is often sheer table size. Archiving rows out of a large table can effectively shorten query response times.

Unless a table's rows are genuinely unimportant, you must not simply delete them. Archiving takes skill: no data may be lost, tables must not be locked so long that access suffers, and the archiving run must not overload MySQL or the server. The goal is an archiving process that is stable and reliable, with no side effects beyond reduced query response times. mk-archiver helps us get exactly that.

mk-archiver has two basic requirements. First, archivable rows must be identifiable; for example, if a table has a date column and, in practice, only the last few years of data are of real value, rows from earlier years can be archived. Second, there must be a unique index that lets mk-archiver navigate the table without scanning every row. Scanning a huge table is costly in both time and money, so the right indexes and carefully written SELECT statements are critical to avoiding full scans.

In practice, mk-archiver handles the technical details automatically. All you need to do is tell the tool which table to archive, how to identify archivable rows, and where to put them. You can move the rows to another table if you like, or write them to a dump file so they can be re-imported later if needed. Once you are familiar with the tool, its many fine-tuning options accommodate all sorts of special archiving requirements. mk-archiver also has a plugin interface, so it can meet complex archiving needs without modification of its code.
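A minimal sketch using mk-archiver's documented DSN-style options (host, schema, table, date column, and batch size are all illustrative):

# Move rows older than 2009 into an archive table, 1,000 rows per transaction,
# so the source table is never locked for long
mk-archiver --source h=localhost,D=shop,t=orders \
  --dest h=localhost,D=shop_archive,t=orders \
  --where "order_date < '2009-01-01'" \
  --limit 1000 --commit-each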

Download: http://maatkit.org/get/mk-archiver

Maintainers: Daniel Nichter and Baron Schwartz

More information: http://maatkit.org/ | http://code.google.com/p/maatkit/

MySQL essential tool No. 10: oak-security-audit

When did you last run a full security audit of your MySQL servers? If the answer is never, don't feel bad: the crowd that never audits is quite large. Many firms offer security audit services, but unless nothing significant changes after such an audit, the security of a MySQL environment should be checked regularly.

External threats are one big reason to audit MySQL security, but internal threats, especially from current or former employees, are often more dangerous because of the trust and access they have, or once had. Security also matters for protecting private information (such as medical and insurance records), and care must be taken to prevent accidental access (logging into the production server instead of the development server) and to control how third-party programs interact with your systems.

For those who want to tighten security, oak-security-audit is a free, open-source tool that performs a basic MySQL security audit. It needs no setup: run it against your MySQL server and it prints a report covering accounts, account privileges, passwords, and general hardening, with recommendations such as disabling network access where it is not needed. Here is a sample of the report:

Looking for anonymous user accounts
Passed

Looking for accounts accessible from any host
Found 1 accounts accessible from any host.
Recommended actions:
RENAME USER '<user>'@'%' TO '<user>'@'<specific-host>';

oak-security-audit focuses on the MySQL side of security, so it is no substitute for a full audit by a security professional, but as a first line of defense it is highly effective and easy to operate. You can wire it into cron, run it weekly, and have the generated report mailed to you for review.
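A sketch of that cron arrangement (schedule, install path, and credentials are illustrative; cron mails each job's output to the MAILTO address by default):

# m h dom mon dow  command
MAILTO=dba@example.com
0 6 * * 1  /usr/local/bin/oak-security-audit.py --user=root --password=secret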

Download: http://openarkkit.googlecode.com/svn/trunk/openarkkit/src/oak/oak-security-audit.py

Maintainer: Shlomi Noach

More information: http://openarkkit.googlecode.com/svn/trunk/openarkkit/doc/html/oak-security-audit.html

Original link: http://www.infoworld.com/d/data-management/10-essential-mysql-tools-admins-168018?page=0,0
