MySQL tools: 10 MySQL tools every administrator needs


MySQL is a complex system that requires many tools to repair, diagnose, and optimize it. Fortunately for administrators, MySQL has attracted many software developers who release high-quality open source tools to address the complexity, performance, and health of MySQL systems, most of them available to the community for free. From single servers to multi-node environments, the following 10 open source tools are useful to anyone who runs MySQL. The list was compiled with a variety of needs in mind: you'll find tools that help back up MySQL data, improve performance, guard against data drift, and record troubleshooting data when problems occur.
MySQL Tool 1: mk-query-digest
Nothing is more frustrating than slow MySQL performance. Although it is often treated as a hardware problem, that is not necessarily the case. Poor performance is usually caused by queries that execute slowly and block other queries, producing a chain reaction of slow response times. Since optimizing queries is more cost-effective than upgrading hardware, the first step should be analyzing the MySQL query log.
Database administrators should analyze the query log frequently to keep up with changes in the environment. If you have never performed a query log analysis, do so immediately: assuming that queries are already optimized, or trusting third-party software to have optimized them, is not reliable.
Currently the best query log analysis tool is mk-query-digest. It is actively developed, fully documented, and thoroughly tested. The MySQL distribution includes its own query log analysis tool, mysqldumpslow, but it is outdated, poorly documented, and untested. Other query log analyzers, such as mysqlsla, released a few years ago, share the same problems as mysqldumpslow.
mk-query-digest parses the query log and generates a report summarizing execution times and other statistics. Because query logs typically contain thousands to millions of queries, a tool like mk-query-digest is a necessity for query log analysis.
The value of mk-query-digest lies in its summary report, which shows which queries take longer to execute than the rest. Optimizing these slow queries makes MySQL run faster and reduces the worst delays. Query optimization itself is an art full of nuance, but the basic goal is always the same: find slow queries, optimize them, and improve their response times.
The tool is relatively easy to use: running mk-query-digest slow-query.log prints an analysis of the slow queries recorded in slow-query.log. The tool also includes a "query review" feature that reports only the queries you have not yet reviewed and approved, making frequent log analysis faster and more efficient.
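A minimal sketch of both modes, assuming the slow log lives at /var/log/mysql/slow-query.log; the host, database, and review-table names in the second command are placeholders, so check them against your Maatkit version:

# Summarize the slowest queries in the slow query log
mk-query-digest /var/log/mysql/slow-query.log

# Query review mode: fingerprints are stored in a review table so that
# later runs surface only queries you have not yet approved
# (h=host, D=database, t=table below are illustrative)
mk-query-digest --review h=localhost,D=maatkit,t=query_review \
    /var/log/mysql/slow-query.log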
Download: http://maatkit.org/get/mk-query-digest
MySQL Tool 2: mydumper
The ability to produce fast data dumps for backups and server cloning is critical. Unfortunately, mysqldump, which ships with MySQL, is single-threaded and therefore too slow for data-intensive needs. Thankfully there is now a replacement, mydumper, which uses multiple threads and runs about 10 times faster than mysqldump.
Also known as MySQL Data Dumper, the tool does not manage backup sets, differentials, or other parts of a full backup plan. It simply dumps data from MySQL as quickly as possible, letting you finish a backup within a tight window, such as overnight while employees are offline, or run backups more frequently.
Technically speaking, mydumper locks tables, so it is not an ideal tool for backups during business hours. Then again, professional data recovery costs hundreds of dollars per hour, and even if the data proves unrecoverable you will still receive a bill. mydumper is free, so it is worth considering at least for basic backups.
mydumper is also quite convenient for cloning servers. Other tools copy the entire contents of the hard drive, but when all you need is the MySQL data, mydumper is the quickest way to get it. Servers set up in the cloud are particularly well suited to cloning with mydumper: dump your MySQL data from an existing server and copy it to a new instance.
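A hedged sketch of a dump-and-restore cycle; the directory, database name, and thread count are placeholders, and exact option names may vary between mydumper releases:

# Dump one database with four parallel threads, compressing the output
mydumper --database=mydb --threads=4 --compress --outputdir=/backups/mydb

# On the new instance, restore the dump with the companion tool myloader
myloader --directory=/backups/mydb --threads=4 --overwrite-tables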
Clones are useful everywhere: as replication slaves, benchmark and analysis servers, and test and development servers, to name a few important uses. In a dynamic MySQL environment, being able to quickly verify a replica before it goes live is essential, and with mydumper you can quickly create a clone nearly identical to the working server, so your test results better mimic what will happen in production.
Download: https://launchpad.net/mydumper/+download
MySQL Tool 3: XtraBackup and XtraBackup Manager
If your database is in use around the clock and there is no quiet overnight window in which to lock tables for a backup, XtraBackup is your best solution. Also known as Percona XtraBackup, this tool is the only free implementation of non-blocking backups; open source tools that can do this are very rare. By contrast, proprietary non-blocking backup software costs more than $5,000 per server.
XtraBackup also provides incremental backups, which let you back up only the data that has changed since the last full backup. Adding incremental backups to your backup process is powerful: these much smaller backups reduce the momentary performance interference that backups cause.
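A sketch of a full-plus-incremental cycle using the innobackupex wrapper that ships with Percona XtraBackup; the directories and the timestamped subdirectory name are placeholders, and option details may differ across versions:

# Take a full, non-blocking base backup of the InnoDB data
innobackupex /backups/full

# Later, take an incremental backup containing only pages changed since
# the base backup (point --incremental-basedir at the full backup's
# timestamped directory, shown here with a placeholder name)
innobackupex --incremental /backups/inc1 \
    --incremental-basedir=/backups/full/2011-01-01_00-00-00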
In addition, a companion product has been created to make managing a complete backup plan easier: XtraBackup Manager. While it is a new tool still in development, it has great potential because it adds higher-level functionality such as rotating backup sets and expiring old backups. Together, XtraBackup and XtraBackup Manager make a powerful, free backup solution.
Download: http://www.percona.com/software/percona-xtrabackup/downloads/
MySQL Tool 4: tcprstat
tcprstat is probably the most difficult of these 10 tools to understand. It watches TCP requests and prints low-level statistics about response times. Once you are used to thinking about performance in terms of response time, tcprstat becomes enormously useful.
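A minimal invocation, following the canonical example from the tool's documentation as I recall it; verify the flags against your build:

# Print response-time statistics for traffic on MySQL's port (3306)
# once per second, indefinitely (-n 0 means no iteration limit);
# sniffing traffic typically requires root
sudo tcprstat -p 3306 -t 1 -n 0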
The book Optimizing Oracle Performance, by Cary Millsap and Jeff Holt, explains the principles of response-time-based optimization, and they apply equally to MySQL. The basic idea is that MySQL is a service that accepts a request (a query), fulfills it (execution time), and responds with a result (the result set). The response time of the service is the span between receiving the request and sending the response, and the shorter it is, the more requests can be served in the same amount of time.
Parallel processing and other low-level factors complicate the picture, but the simplified result is this: an eight-hour workday contains 28,800 seconds, so if the response time of each request is reduced by 400 milliseconds (from 0.5 seconds to 0.1 seconds), the server can handle 230,400 more requests per day (28,800/0.1 = 288,000 versus 28,800/0.5 = 57,600). tcprstat helps you measure your way toward that goal.
Space permits only a sketch of the tool's functionality here (namely, the first step in optimizing MySQL response time), enough, I hope, to arouse your curiosity. To complete the introduction to this tool: read Optimizing Oracle Performance and start using tcprstat.
Download: https://launchpad.net/tcprstat
Binary download: http://www.percona.com/docs/wiki/tcprstat:start
MySQL Tool 5: mk-table-checksum
"Data bias" is a major problem that exists in the dynamic MySQL environment widely. The actual meaning is: the subordinate data is not properly synchronized with the principal data, the main reason is that there is write operation on the subordinate data side or the subject data side executes the query instruction with uncertainty. To make things worse, data bias is likely to be overlooked by managers until there are serious consequences. Mk-table-checksum should be on the scene. This tool is useful for validating the consistency of related data content in two or more lists in parallel when performing complex, sensitive calculations.
mk-table-checksum's biggest highlight is that it helps with both standalone servers and replication architectures. Data consistency between master and slave must be taken seriously when replicating. Because changes on the master reach the slave with a degree of lag (delay), simply reading data from both servers cannot strictly guarantee a consistent comparison: until replication catches up, the data is in a constantly changing, incomplete state. Locking the tables and verifying after all changes have replicated would of course work, but it would mean suspending the server's normal service. mk-table-checksum lets you verify differences between master and slave data without locking tables (for the specifics of this technique, see the tool's documentation: http://www.maatkit.org/doc/mk-table-checksum.html).
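A hedged sketch of the replication workflow; the checksum table name and host DSNs are placeholders, and the option semantics (especially --replicate-check's recursion depth) should be verified against your Maatkit version:

# Checksum all tables on the master, writing results into a checksum
# table that replicates to the slaves (test.checksum is illustrative)
mk-table-checksum --replicate=test.checksum h=master.example.com

# Afterwards, compare the replicated checksums on the slaves and report
# any tables whose contents differ from the master
mk-table-checksum --replicate=test.checksum --replicate-check 1 \
    h=master.example.com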
Beyond consistency checks during replication, data validation is useful in a number of other situations, such as very large tables. MySQL's CHECKSUM TABLE statement is sufficient for a small table, but a large table usually needs to be checksummed in "chunks" to avoid long locks and to keep the checksum calculation from overloading the CPU or memory.
Chunking also addresses a second big requirement: checking data consistency periodically. Data drift may seem like a one-off accident, but in practice it is a problem that strikes administrators again and again. mk-table-checksum is designed to check tables periodically, proceeding chunk by chunk until the entire table has been processed. This persistent approach helps administrators routinely catch data drift.
Download: http://maatkit.org/get/mk-table-checksum
MySQL Tool 6: stalk and collect
Sometimes problems strike during periods when no one is watching, say, after we have gone home to sleep, and as everyone knows, it is difficult or impossible to correctly diagnose the state of MySQL and the server after the fact. A common practice is to write your own script that waits for the failure condition and records extra data when it appears; after all, nobody knows your system like you do. The problem is that we are only truly familiar with the system when it is working properly, and when its current state harbors hidden dangers we often apply a quick fix rather than explore and analyze in depth.
Fortunately, people who understand very well what happens when MySQL fails have written two troubleshooting tools, stalk and collect, for exactly this situation. The first waits until the server's state matches a defined failure condition, then runs the second, which actually gathers the data. It may not sound like much, but the pair simply and efficiently collects all kinds of details about what may be causing the problem.
First, stalk runs collect at intervals according to its configuration, which keeps cumbersome, redundant data out of the records and makes subsequent analysis of the failure more organized. Then collect gathers MySQL's own status reports along with other kinds of data you might not have thought of, including open files, system calls made by the application, network traffic, and much more. As a result, if you end up having to turn to a professional MySQL consulting team, you will already have the information they will ask for.
stalk and collect can be configured as needed, so they can cope with almost any failure condition. The only requirement is a definable condition for stalk to trigger on. If several conditions are suspected of causing the failure, you may need to consult a specialist familiar with your MySQL environment to cast a wider net; in fact, the root cause of a MySQL crash can lurk outside the system entirely.
stalk and collect can also be used for active defense. For example, if you know there should never be more than 50 active MySQL connections at any one time, stalk can proactively watch for that condition. In other words, these two tools help you catch many problems while they are still early and ill-defined.
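Aspersa's stalk is configured by editing variables at the top of the script rather than via command-line flags; the variable names below are illustrative only, so read the script itself before relying on them. A sketch of the active-defense scenario above:

# Edit the configuration block at the top of the stalk script, e.g. to
# trigger when Threads_connected exceeds 50 (names vary by version):
#   VARIABLE=Threads_connected
#   THRESHOLD=50

# Then leave stalk watching the server in the background; when the
# condition is met, it runs collect (kept in the same directory)
nohup ./stalk >/var/log/stalk.out 2>&1 &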
Download stalk: http://aspersa.googlecode.com/svn/trunk/stalk
Download collect: http://aspersa.googlecode.com/svn/trunk/collect
MySQL Tool 7: mycheckpoint
No one wants to learn about a problem only after it has happened and the scramble to fix it has begun, so monitoring the MySQL environment in real time through visual charts is an important form of insurance against being caught unprepared.
There are many free and commercial monitoring applications for MySQL; some are dedicated to MySQL, while others are general-purpose tools with MySQL plugins or templates. mycheckpoint is worth attention because it is not only free and open source but also dedicated solely to MySQL, with every essential function ready to use.
Like most monitoring solutions today, mycheckpoint works by sampling the server at fixed intervals and charting the results.

mycheckpoint can be configured to monitor MySQL and operating-system metrics simultaneously, such as InnoDB buffer pool flushes, temporary table creation, operating system load, and memory usage. If you don't like reading charts, mycheckpoint can also generate text reports.
As with stalk, alert conditions can be defined that send e-mail notifications, although there is no companion tool like collect for gathering additional troubleshooting data. Another useful mycheckpoint feature is its monitoring of MySQL's variables, which can expose latent problems or catch changes to MySQL that should never have been made.
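A hedged sketch of deployment and periodic sampling; the schema name, flags, and the deploy step follow the project documentation as I recall it, so verify them against the mycheckpoint wiki:

# One-time setup: create the schema that stores the monitoring data
# (assumes a 'mycheckpoint' database the monitoring user can write to)
mycheckpoint --host=localhost --database=mycheckpoint deploy

# Then sample at fixed intervals from cron, e.g. every five minutes:
#   */5 * * * * mycheckpoint --host=localhost --database=mycheckpoint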
Monitoring MySQL is not only for data centers or large-scale deployments. Even if you have only one MySQL server, monitoring is still essential: it is the medium through which we know exactly what is going on in our systems, letting us effectively anticipate or sidestep possible failures.
Download: http://code.google.com/p/mycheckpoint/downloads/list
MySQL Tool 8: Shard-Query
Still worried about slow queries against partitioned or sharded data sets? With Shard-Query, such queries can be greatly accelerated. Queries built on the following constructs benefit most from Shard-Query:
  • Subqueries in the FROM clause
  • UNION and UNION ALL
  • IN
  • BETWEEN
Aggregate functions such as SUM, COUNT, MIN, and MAX can also take advantage of these constructs. For example, Shard-Query can execute the following query in parallel:

SELECT DayOfWeek, COUNT(*) AS c
FROM ontime_fact
JOIN dim_date USING (date_id)
WHERE Year BETWEEN ... AND 2008
GROUP BY DayOfWeek
ORDER BY c DESC;

In benchmark tests, parallel execution reduced this query's response time by about 85 percent, from 21 seconds down to 3 seconds.
Shard-Query is not a standalone tool; it requires supporting programs such as Gearman, and setup is relatively involved. But if your data partitioning and queries match the constructs listed above, the effort is worth it: the optimization payoff is dramatic.
Download: http://code.google.com/p/shard-query/source/checkout
MySQL Tool 9: mk-archiver
As a table grows, queries against it take longer and longer. Many factors can hurt response time, of course, but once you have optimized everything else, the final bottleneck is often the size of the table itself. Archiving rows out of a large table effectively shortens query response times.
Unless rows are genuinely unimportant, you must not simply delete them. Archiving also takes skill: no data can go missing, the table cannot be locked so heavily that access suffers, and the archiving operation must not overload MySQL or the server. The goal is an archiving process that is stable and reliable, with no side effect other than reduced query response times. mk-archiver helps us achieve exactly that.
mk-archiver has two basic requirements. The first is that archivable rows must be identifiable. For example, if the table has a date column and, in general, only the last few years of data have real value, then rows from earlier years can be archived. The second is that a unique index must be available to help mk-archiver navigate without scanning the entire table. Scanning a huge table is expensive in both time and resources, so the right index and a specific SELECT are critical to avoiding full scans.
In practice, mk-archiver handles the various technical details automatically. All you need to do is tell the tool which table to archive, how to identify archivable rows, and where to put them. You can move the rows to another table if you like, or write them to a dump file so you can import them later when needed. Once you are familiar with the tool, a number of fine-tuning options help with all sorts of special archiving requirements. mk-archiver also has a plugin interface, so it can address many complex archiving needs without code modifications.
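A hedged sketch of archiving old rows into a companion table; the DSNs, table names, and date cutoff are placeholders, so check the option list against your Maatkit version:

# Move rows older than 2009 from orders to orders_archive, committing
# every 1,000 rows, using an index rather than a full table scan
mk-archiver \
    --source h=localhost,D=mydb,t=orders \
    --dest h=localhost,D=mydb,t=orders_archive \
    --where "order_date < '2009-01-01'" \
    --limit 1000 --commit-each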
Download: http://maatkit.org/get/mk-archiver
MySQL Tool 10: oak-security-audit
When was the last time you fully audited your MySQL server's security? If the answer is "never," there is no need to feel singled out: the group that never does security checks is quite large. Many companies offer security audit services, but unless nothing significant changes after the audit, the security of a MySQL environment should be checked regularly.
External threats are one major reason to perform MySQL security audits, but internal threats, especially from current or former employees, are often more dangerous because those people have (or had) trust and access. Security is equally important for protecting private information (such as medical and health insurance records) and for preventing accidents, such as logging into the production server instead of the development server, or unintended interactions between third-party programs and the system.
For users who want to improve security, oak-security-audit is a free, open source tool that handles basic MySQL security audits. It needs no setup to run against your MySQL server, and it prints a report on accounts, account privileges, passwords, and general best practices, flagging potential risks with recommendations such as temporarily disabling network access.
oak-security-audit focuses on the MySQL side of security, so it does not replace a complete security audit performed by a specialist, but it serves as an excellent and easy-to-operate first line of defense. You can put it in a cron job, run it every week on schedule, and have the generated report e-mailed to you for review.
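A hedged sketch of the weekly cron job described above; the schedule, mail address, and options file are placeholders, and the --defaults-file flag should be verified against your openark kit version:

# /etc/cron.d/mysql-audit: run the audit every Monday at 07:00 and mail
# the report (credentials come from a protected options file)
0 7 * * 1 root oak-security-audit --defaults-file=/etc/mysql/audit.cnf | mail -s "Weekly MySQL security audit" dba@example.com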
Download: http://openarkkit.googlecode.com/svn/trunk/openarkkit/src/oak/oak-security-audit.py
