PHP Performance Analysis, Part Three: Performance Tuning in Practice

Tags: autoload, Cassandra, gettext, MySQL query cache, MySQL slow query log, W3 Total Cache, Zend Framework
Note: This article is the third in our PHP performance analysis series. See also PHP Performance Analysis Part One: Xhprof & Xhgui Introduction and PHP Performance Analysis Part Two: In-depth Study of Xhgui.

In the first part of this series we introduced Xhprof, and in the second we studied the Xhgui UI. In this final part, let's put that Xhprof/Xhgui knowledge to work!

Performance tuning

Code that never runs is perfect code; everything else is merely good code. So the best first step in performance tuning is to make sure you run as little code as possible.

OpCode Cache

First, the quickest and simplest option is to enable an opcode cache. More information on opcode caches can be found here.
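Once enabled, you can confirm the cache is actually active from PHP itself; a minimal sketch, assuming the Zend OPcache extension is loaded:

    <?php
    // Minimal sketch: confirm OPcache is enabled and look at its hit rate.
    // Assumes the Zend OPcache extension is installed and loaded.
    $status = opcache_get_status(false); // false = don't list every cached script

    if ($status === false || empty($status['opcache_enabled'])) {
        echo "OPcache is not enabled\n";
    } else {
        $stats = $status['opcache_statistics'];
        printf("Hit rate: %.2f%% (%d hits / %d misses)\n",
            $stats['opcache_hit_rate'], $stats['hits'], $stats['misses']);
    }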

Here we see what happens when Zend OPcache is enabled. The last row is our baseline, where no caching is enabled.

In the middle row we see a small performance boost and a significant reduction in memory usage. The small performance gain (most likely) comes from Zend OPcache's optimizer rather than from the opcode cache itself.

The first row shows the result with both the optimizer and the opcode cache in effect, and there we see a substantial performance gain.

Now let's look at the before-and-after with APC. As shown, while the cache is still being built, the initial request (middle row) performs worse than it did with Zend OPcache, which even then showed a significant reduction in run time and memory usage.

Then, once the opcode cache has been built, we see similar performance gains.

Content Caching

The second thing we can do is cache content. This is a piece of cake for WordPress: it offers a number of easy-to-install content-caching plugins, including WP Super Cache. WP Super Cache creates a static version of the site, which is automatically expired when certain events occur, such as a new comment, according to the site settings (for example, under very high load you might want to disable cache expiration altogether).

A content cache is only really effective when there are few write operations, because writes invalidate the cache while reads do not.

You should also cache content your application fetches from third-party APIs, which reduces latency and your dependence on the API's availability.
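A minimal sketch of that idea using APCu as the local cache (the endpoint URL, cache key, and TTL are illustrative):

    <?php
    // Minimal sketch: cache a third-party API response locally with APCu.
    // The URL, cache key, and 300-second TTL are illustrative values.
    function get_exchange_rates()
    {
        $cached = apcu_fetch('api:exchange_rates', $success);
        if ($success) {
            return $cached; // served from cache, no HTTP round trip
        }

        $json  = file_get_contents('https://api.example.com/rates'); // hypothetical endpoint
        $rates = json_decode($json, true);

        apcu_store('api:exchange_rates', $rates, 300); // keep for 5 minutes
        return $rates;
    }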

WordPress has two caching plugins that can greatly improve site performance: W3 Total Cache and WP Super Cache.

Both plugins create static HTML copies of the site instead of generating each page on every request, cutting response times.
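The core idea behind both plugins can be reduced to a few lines; here is a deliberately naive sketch of a file-based full-page cache (the cache directory and lifetime are illustrative):

    <?php
    // Naive full-page cache sketch: serve a stored HTML copy if it is fresh,
    // otherwise render the page and save the output. Paths and TTL are illustrative.
    $cacheFile = __DIR__ . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
    $ttl = 600; // ten minutes

    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        readfile($cacheFile);   // cache hit: no further PHP/MySQL work
        exit;
    }

    ob_start();                 // cache miss: capture the generated page
    // ... normal page generation happens here ...
    $html = ob_get_flush();     // send it to the client and keep a copy
    file_put_contents($cacheFile, $html);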

If you are developing your own application, most frameworks ship a cache component (a brief usage sketch follows the list):

    • Zend Framework 2: Zend\Cache

    • Symfony 2: multiple options

    • Laravel 4: Laravel Cache

    • ThinkPHP 3.2.3: ThinkPHP Cache
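For example, with Laravel 4's cache component a read-through cache looks roughly like this (the key, lifetime, and query are illustrative, and the snippet assumes a Laravel application context where the Cache and DB facades are available):

    <?php
    // Laravel 4 sketch: cache a query result for 10 minutes with Cache::remember().
    // The cache key, lifetime, and query are illustrative.
    $posts = Cache::remember('recent_posts', 10, function () {
        return DB::table('posts')->orderBy('created_at', 'desc')->take(20)->get();
    });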

Query cache

Another caching option is the query cache. MySQL has a general query cache that can help tremendously. For other databases, caching query result sets in a store such as Memcached or Cassandra is also very effective.
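A minimal sketch of result-set caching with the Memcached extension and PDO (server address, credentials, cache key, and TTL are illustrative):

    <?php
    // Sketch: cache a query's result set in Memcached for 60 seconds.
    // Server address, credentials, cache key, and TTL are illustrative.
    $memcached = new Memcached();
    $memcached->addServer('127.0.0.1', 11211);

    $key  = 'wp_options:autoload';
    $rows = $memcached->get($key);

    if ($rows === false) { // cache miss: hit the database and store the result
        $pdo  = new PDO('mysql:host=localhost;dbname=wordpress', 'user', 'pass');
        $stmt = $pdo->query("SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'");
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        $memcached->set($key, $rows, 60);
    }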

As with the content cache, the query cache is most effective for read-heavy workloads. Because even a small data change can invalidate large chunks of the cache, you cannot rely on the MySQL query cache alone to improve performance in write-heavy scenarios.

The query cache may improve performance when generating content caches.

As shown, when we enable the query cache, the actual run time drops by 40%, although memory usage does not change significantly.

There are three caching modes available, controlled by the query_cache_type setting.

    • A value of 0 or OFF disables the cache

    • A value of 1 or ON caches all SELECT statements except those beginning with SELECT SQL_NO_CACHE

    • A value of 2 or DEMAND caches only SELECT statements beginning with SELECT SQL_CACHE

In addition, you should set query_cache_size to a value other than 0; setting it to 0 disables the cache regardless of the query_cache_type setting.
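To illustrate the per-query hints from the list above in application code, a small sketch (assuming a PDO connection; the credentials and table are illustrative):

    <?php
    // Sketch: per-query cache hints. Connection details are illustrative.
    $pdo = new PDO('mysql:host=localhost;dbname=wordpress', 'user', 'pass');

    // With query_cache_type = 1 (ON), SQL_NO_CACHE opts this query out of the cache,
    // which is handy when benchmarking the "real" query time.
    $stmt = $pdo->query('SELECT SQL_NO_CACHE option_name, option_value FROM wp_options');

    // With query_cache_type = 2 (DEMAND), only queries that ask for it are cached.
    $stmt = $pdo->query('SELECT SQL_CACHE option_name, option_value FROM wp_options');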

For help tuning the cache, see the mysql-tuning-primer script, which also reviews a number of other performance-related settings.

The main problem with the MySQL query cache is that it is global: any change to a table that contributes to a cached result set invalidates that cache. In applications with frequent writes, this makes the cache almost useless.

However, you have a number of other options for building smarter caches tailored to your needs and datasets, such as Memcached, Riak, Cassandra, or Redis.

Query optimization

As mentioned earlier, database queries are often the cause of slow program execution, and query optimization can often bring more immediate benefits than code optimization.

Query optimization helps performance when generating content caches, and it also helps in the worst case, when caching is not possible at all.

Besides profiling, MySQL has another aid for identifying slow queries: the slow query log. The slow query log records all queries that take longer than a specified time, and (optionally) queries that do not use an index.

You can enable the log with the following configuration in my.cnf:

    [mysqld]
    log_slow_queries = /var/log/mysql/mysql-slow.log
    long_query_time = 1
    log-queries-not-using-indexes

Any query slower than long_query_time (in seconds) is written to the file specified by log_slow_queries. The default threshold is 10 seconds, and the minimum is 1 second.

In addition, the log-queries-not-using-indexes option captures any query that does not use an index into the log.

We can then inspect the log with the mysqldumpslow command that ships with MySQL.

Using these options on our WordPress installation and loading the home page produces the following data:

    $ mysqldumpslow -g "wp_" /var/log/mysql/mysql-slow.log

    Reading mysql slow query log from /var/log/mysql/mysql-slow.log
    Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=358.0 (358), user[user]@[host]
      SELECT option_name, option_value FROM wp_options WHERE autoload = 'S'

    Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=41.0 (41), user[user]@[host]
      SELECT user_id, meta_key, meta_value FROM wp_usermeta WHERE user_id IN (N)

First, note that all string values are shown as 'S' and numeric values as 'N'. You can add the -a flag to show the actual values.

Next, note that both queries took 0.00s, well under the 1-second threshold; they were logged because they do not use an index.

Using EXPLAIN in the MySQL console, you can check the cause of performance degradation:

    mysql> EXPLAIN SELECT option_name, option_value FROM wp_options WHERE autoload = 'S' \G
    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: wp_options
             type: ALL
    possible_keys: NULL
              key: NULL
          key_len: NULL
              ref: NULL
             rows: 433
            Extra: Using where

Here we see that possible_keys is NULL, which confirms that no index is being used.

EXPLAIN is a very powerful tool for optimizing MySQL queries, and more information can be found here.
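When EXPLAIN reports possible_keys: NULL, the usual remedy is to add an index on the filtered column. A hypothetical sketch for the query above (the index name is mine, and whether such an index is worthwhile depends on the table's size and write pattern):

    <?php
    // Hypothetical sketch: add an index on the column used in the WHERE clause
    // so the wp_options autoload query no longer scans the whole table.
    // The index name is illustrative; measure before and after with EXPLAIN.
    $pdo = new PDO('mysql:host=localhost;dbname=wordpress', 'user', 'pass');
    $pdo->exec('ALTER TABLE wp_options ADD INDEX autoload_idx (autoload)');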

PostgreSQL also has an EXPLAIN command (whose output is quite different from MySQL's), while MongoDB has a $explain meta operator.

Code optimization

It is usually only once you are no longer constrained by PHP itself (by using an opcode cache), have cached as much content as possible, and have optimized your queries that you should start tuning the code.

Code and query optimization provide the performance headroom needed to build further caches, and the better the code performs in the worst case (with no cache), the more stable the application and the faster caches can be rebuilt.

Let's see how to (potentially) optimize our WordPress installation.

First, let's look at the slowest function:

To my surprise, the first item in the list is not MySQL (mysql_query() is in fact fourth) but the apply_filters() function.

The WordPress codebase is built around an event-driven filtering system: data passing through the core is transformed by callbacks added by the core and by plugins, executed in the order they were registered.

The apply_filters() function is where these callbacks are applied.
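For readers unfamiliar with the mechanism, here is a minimal sketch of how the filter API is used; 'the_title' is a real WordPress filter hook, while the callback is purely illustrative:

    <?php
    // Minimal sketch of WordPress's filter mechanism.
    // 'the_title' is a real WordPress hook; the callback is illustrative.
    add_filter('the_title', function ($title) {
        return strtoupper($title); // transform the value as it passes through the hook
    });

    // Somewhere in core, the registered callbacks are run over the value:
    $title = apply_filters('the_title', 'hello world'); // "HELLO WORLD"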

First, you might notice that the function was called 4,194 times. Clicking through for more detail and sorting the parent functions by call count in descending order, we discover that translate() called apply_filters() 778 times.

This is interesting, because I don't actually use any translations. I (like, I suspect, most users) run WordPress in its native language: English.

So let's click through to the details and see what the translate() function is doing.

Here we see two interesting things. First, among the parent functions, one is called 773 times: __().

Looking at the source code of that function, we find it is simply a wrapper around translate():

    <?php
    function __( $text, $domain = 'default' ) {
        return translate( $text, $domain );
    }
    ?>

As a rule of thumb, function calls are expensive and should be avoided where possible. Since we now always call __() rather than translate(), we could invert the relationship: keep translate() as an alias for backward compatibility and have __() do the work directly, so it no longer makes an unnecessary extra call.

Admittedly, this change would not make much difference in practice; it is a micro-optimization. But it does improve the readability of the code and simplifies the call graph.
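A sketch of what that inversion might look like (my illustration of the idea, not the patch actually submitted; it assumes WordPress's l10n helpers are loaded):

    <?php
    // Sketch of the idea: __() does the work, translate() becomes the
    // backward-compatible alias. Illustrative only, not the actual WordPress patch.
    function __( $text, $domain = 'default' ) {
        $translations = get_translations_for_domain( $domain );
        return apply_filters( 'gettext', $translations->translate( $text ), $text, $domain );
    }

    function translate( $text, $domain = 'default' ) {
        return __( $text, $domain );
    }
    ?>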

Moving on, let's look at the child functions:

Drilling deeper into the function, we see three functions or methods that are each called 778 times:

    • get_translations_for_domain()

    • NOOP_Translations::translate()

    • apply_filters()

Sorting by inclusive wall time in descending order, we see that apply_filters() is by far the most time-consuming call.

Viewing the code:

    <?php
    function translate( $text, $domain = 'default' ) {
        $translations = get_translations_for_domain( $domain );
        return apply_filters( 'gettext', $translations->translate( $text ), $text, $domain );
    }
    ?>

The purpose of this code is to retrieve a translations object and then pass the result of $translations->translate() to apply_filters(). We find that $translations is an instance of the NOOP_Translations class.

Based on the name alone (NOOP), and then confirmed by the comment in the code, we find that this translator doesn't actually do anything!

    <?php
    /**
     * Provides the same interface as Translations, but doesn't do anything
     */
    class NOOP_Translations {
        // ...
        function translate( $singular, $context = null ) {
            return $singular;
        }
        // ...
    }
    ?>

So maybe we can totally avoid this kind of code!

By doing a little debugging, we see that only the default domain is currently in use, so we can modify the code to skip the translator in that case:

    <?php
    function translate( $text, $domain = 'default' ) {
        if ( 'default' === $domain ) {
            return apply_filters( 'gettext', $text, $text, $domain );
        }
        $translations = get_translations_for_domain( $domain );
        return apply_filters( 'gettext', $translations->translate( $text ), $text, $domain );
    }
    ?>

Next, we profile again, making sure to load the page at least twice so that all caches are warmed up and the comparison is fair!

This run really is faster! But how much faster, and why?

Xhgui's run comparison can answer that. Go back to our initial run, click the compare button in the top right corner, and select the new run from the list.

We find that the number of function calls dropped by 3%, inclusive wall time by 9%, and inclusive CPU time by 12%!

From there, the comparison's detail page, sorted by call count in descending order, confirms (as expected) that the number of calls to get_translations_for_domain() and NOOP_Translations::translate() has fallen. It also lets us confirm that nothing changed unexpectedly.

A 9–12% performance improvement for 30 minutes of work is very welcome, and it is a real-world gain that holds even with OPcache enabled.

We can now repeat the process for other functions until no further optimization opportunities are found.

Note: This change has been submitted to wordpress.org; you can follow the discussion in the WordPress bug tracker. The update is planned for the WordPress 4.1 release.

Other tools

In addition to the excellent Xhprof/Xhgui, there are several other good tools.

New Relic & OneAPM

Both New Relic and OneAPM provide end-to-end performance analysis, with insight into the back-end stack, including SQL queries and code-level analysis, as well as front-end DOM and CSS rendering and JavaScript execution. For more on OneAPM's features, see the OneAPM online demo.

Uprofiler

Uprofiler is a fork of Facebook's Xhprof that aims to remove the CLA Facebook requires. At present the two have the same features; only some parts have been renamed.

Xhprof.io

Xhprof.io is another user interface for Xhprof. It uses MySQL to store profiling data and is less user-friendly than Xhgui.

Xdebug

Xdebug existed long before Xhprof appeared. Xdebug is an active (intrusive) profiler, which means it should not be used in production, but it gives deep insight into your code.

However, it must be used together with another tool to read the profiler's output, such as KCachegrind. KCachegrind can be hard to install on non-Linux machines; another option is Webgrind.

Webgrind does not offer all of KCachegrind's features, but it is a PHP web application that is easy to install in any environment.

Paired with KCachegrind, it makes it easy to explore and uncover performance issues. (In fact, this is my favorite profiling setup!)

Conclusion

Profiling and performance tuning are complex subjects. With the right tools, and an understanding of how to use them, we can greatly improve the quality of our code, even in codebases we are unfamiliar with.

It is absolutely worthwhile to take the time to explore and learn these tools.

Note: This is the third article in our PHP performance analysis series; also read PHP Performance Analysis Part One: Xhprof & Xhgui Introduction and PHP Performance Analysis Part Two: In-depth Study of Xhgui. (This article was compiled and edited by engineers at OneAPM, an application performance management provider.)
