Profiling is a technique for observing the performance of a program, and it is ideal for discovering bottlenecks and strained resources. Profiling can look inside a program, show the performance of each part of the request cycle, and identify problematic requests; for those requests, it can also show where within the request the performance problem occurs. PHP has a variety of profiling tools; this article focuses on one excellent tool: Xhgui. Xhgui is built on top of XHProf (the profiler released by Facebook), but it adds better storage for the profiling results along with much richer interfaces for exploring the gathered information. In this respect, Xhgui feels more like a new tool.
Xhgui has been through several iterations; the current version provides a much more attractive user interface and uses MongoDB to store its profiling results. All of this is a huge improvement over the previous version, which felt more like a tool designed by developers for developers: it saved its data to files, which made the collected data very awkward to work with. The 2013 version of Xhgui is a comprehensive profiling tool for both managers and developers, and it is designed to be lightweight enough to run in a production environment.
This article will step through the installation of the program and show you all the information you can gather with the tool.
Step One: Install the Dependencies
Because Xhgui has a few dependencies, the first step is to get those in place. Everything below is based on Ubuntu 13.04, but you should be able to adapt the instructions to your own platform. We need MongoDB, PHP, and the ability to install PECL extensions.
First we install MongoDB. There are official installation tutorials with details for various systems, but here I will simply install it with apt:
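On Ubuntu that install is a one-liner (the package name may differ on other distributions):

```shell
# Install the MongoDB server from the Ubuntu repositories
sudo apt-get install mongodb
```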
The MongoDB version you get this way may not be the latest, because the product moves quickly. If you want to stay current, you can add the package repository provided by MongoDB to your package manager, so you always get the newest release.
We also need the Mongo driver for PHP. The version in the distribution repositories is a little old, so for today's demo we'll get it from PECL. If you don't have the pecl command on your machine, you can install it with:
aptitude install php-pear
We then add the MongoDB driver to PHP with the following command:
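On a typical setup that command is the following (note that pecl needs the PHP development headers, e.g. the php5-dev package, to compile extensions):

```shell
# Build and install the MongoDB driver for PHP from PECL
sudo pecl install mongo
```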
To complete the installation, we finally need to add one new line to a php.ini file. However, newer versions of Ubuntu provide a tidier system for configuring PHP extensions, much like the way Apache modules are handled: all configuration is kept in one place, and a symbolic link enables it. First we create a file to hold the setting; in this example it needs only the single line that enables the extension. Save it as /etc/php5/mods-available/mongo.ini, containing the following line:
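The file holds just the directive that loads the extension, and the php5enmod helper shipped with Ubuntu's PHP packages creates the symbolic link that enables it (assuming the mods-available layout described above):

```shell
# /etc/php5/mods-available/mongo.ini should contain the single line:
#   extension=mongo.so
echo "extension=mongo.so" | sudo tee /etc/php5/mods-available/mongo.ini
# Create the symlink that enables the extension
sudo php5enmod mongo
```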
We use pecl again to install the XHProf extension itself. The extension is currently only available as a beta, so the install command looks like this:
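For the beta channel on PECL, that command is:

```shell
# Install the beta release of the xhprof extension
sudo pecl install xhprof-beta
```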
Once again the command ends by prompting us to add a new line to php.ini. We use the same approach as before: create the file /etc/php5/mods-available/xhprof.ini and add the following line:
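As with the mongo extension, the file holds one line and php5enmod enables it (again assuming Ubuntu's mods-available layout):

```shell
# /etc/php5/mods-available/xhprof.ini should contain the single line:
#   extension=xhprof.so
echo "extension=xhprof.so" | sudo tee /etc/php5/mods-available/xhprof.ini
sudo php5enmod xhprof
```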
At this point we can check that these modules are correctly installed by running the php -m command at the command line. Don't forget to restart Apache so that the extensions are enabled for the web interface as well.
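A quick way to confirm both extensions are loaded (the grep filter is just for convenience):

```shell
# List loaded PHP modules and confirm mongo and xhprof appear
php -m | grep -E 'mongo|xhprof'
# Restart Apache so the web SAPI picks up the new extensions
sudo service apache2 restart
```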
Install Xhgui
Xhgui itself is mostly a web application, providing a friendly interface onto the data collected by the xhprof extension. You can clone the code from the GitHub repo, or download the zip file and unzip it. Once you have the code, make sure the cache directory is writable by the web server. Finally, run the setup script:
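Assuming a checkout in /var/www/xhgui and Apache running as www-data, the steps look roughly like this (the setup script name may vary between Xhgui versions; check the project's README):

```shell
cd /var/www/xhgui
# Make the cache directory writable by the web server user
sudo chown www-data:www-data cache
# Run Xhgui's setup script, which also pulls in its dependencies
php install.php
```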
That is all the installation requires; the script automatically installs some dependencies, and if anything goes wrong, the installer will give you a hint.
I prefer to install Xhgui in its own virtual host, which requires that .htaccess files are allowed and URL rewriting is enabled. Enabling URL rewriting means turning on the mod_rewrite module, using the following command:
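On Ubuntu that is done with Apache's a2enmod helper:

```shell
# Enable Apache's URL rewriting module
sudo a2enmod rewrite
```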
(Don't forget to restart Apache.) If all went well, you can now visit your Xhgui URL and see something like the following:
Enabling Xhgui for a Virtual Host
At this point, we want to enable Xhgui so it can profile our site. Note that it is best to profile once before making any optimizations, so that you can measure the effect of your changes. The easiest way to enable profiling is to add an auto_prepend_file declaration to the virtual host, as shown below:
<VirtualHost *:80>
    ServerName example.local
    DocumentRoot /var/www/example/htdocs/
    php_admin_value auto_prepend_file /var/www/xhgui/external/header.php
    <Directory /var/www/example/htdocs/>
        Options FollowSymLinks Indexes
        AllowOverride All
    </Directory>
</VirtualHost>
With everything in place, you can start profiling the site's requests. Xhgui only profiles 1% of requests, so to get meaningful data you either need to let it run for a while or use a testing tool such as Apache Bench to submit a batch of requests. Why does Xhgui profile only one request in a hundred? Because it is designed to be lightweight enough for a production environment: it avoids adding overhead to every single request, and a 1% sample rate is enough to give a clear picture of the site's overall traffic.
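To generate traffic quickly, an Apache Bench run like this works well (the URL is a placeholder for your own virtual host):

```shell
# Send 1000 requests, 10 at a time; at a 1% sample rate,
# roughly 10 of them will be profiled by Xhgui
ab -n 1000 -c 10 http://example.local/
```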
Meet the Data
I ran all the examples in this article on a test virtual machine, using the Joind.in API as the code under test. To generate some traffic, I ran the API's test suite a few times. You can also collect data while the application is under load, so Xhgui can be used during stress tests; you can even collect data from a live production site (that may sound crazy, but Facebook originally developed this tooling for exactly that use). Once some requests have been made to the application, visit Xhgui again and it will have stored some data:
The listing shows each request that Xhgui has profiled for us, newest first, with some additional information displayed for each request. This information includes:
- URL: The URL to which the request was accessed
- Time: when the request was made
- wt: "Wall time" – the total elapsed time of the request. The name is short for "wall clock" time, meaning the whole time the user waits for the request to complete
- CPU: CPU time spent on this request
- MU: Memory consumed by this request
- PMU: The maximum amount of memory consumed during request processing
To get more detailed information on an individual request (a "run"), click the column entry you are interested in. You can also click the URL to see the details of all runs for that URL. Either way, you get much more detailed information about the request:
This is a long and very detailed page, so I'm including two screenshots (it would take five to show everything). The left-hand section in the screenshot above shows some information about the request, to help you keep track of what the statistics refer to; the main section on the right shows the parts of the request that consumed the most time, and the memory consumed by each function call. Below the table is a key explaining each column.
The second picture shows more detailed information about each component of the request. We can see the number and time consumption of each part of the call, as well as CPU and memory information. Both inclusive and exclusive information are presented in detail: exclusive represents only the consumption generated by the method invocation; inclusive includes not only the consumption generated by this function, but also the consumption generated by other functions called by this function.
Another Xhgui feature is the call graph (callgraph), which gives a vivid visual representation of where the time goes:
This nicely demonstrates the hierarchy of the function calls. Best of all, the diagram is interactive: you can drag the nodes around to see the connections more clearly, and you can hover the mouse over a blob to see more information. It bounces and shuffles as you interact with it; not an especially important feature, but fun.
Understanding Data
Having all these statistics is great, but it can be hard to know where to start. For a page that performs worse than expected, I take the following approach: first, sort the function list by exclusive CPU time to see the most expensive function calls; then work through those calls, refactoring and optimizing them.
Once you have made a change, profile the new version of the application again to measure the improvement. Xhgui has the perfect tool for comparing two runs: the "Compare this run" button in the top-right corner of the run details page. The button takes you to the list of runs for that URL so you can select the one to compare against. Click its "Compare" button, and Xhgui takes you to the diff view, shown below:
The statistics table shows the main differences between the new and old runs, giving both the absolute change and the percentage change for each statistic. The screenshot above shows a new version whose wall time is only 8% of the old version's. The table details the change in every statistic we saw on the details page, and you can sort any column to find the information you are interested in.
Once you have successfully refactored one area, return to the details page to check the real effect of the new version, then select the next area to optimize. Try sorting by memory usage or by exclusive wall time to find the functions whose optimization will most improve the application's overall performance. Don't forget to check the call counts too: optimizing a function that is called repeatedly multiplies the improvement.
An Approach to Optimization
It's hard to know how much you have improved until you quantify the results, which is why we always profile an application before optimizing it; how else would you know whether you really tuned it? We also need a sense of what realistic numbers look like, or we may chase an impossible goal. A useful exercise is to think about the most appropriate data structures and the minimum amount of storage you actually need. If you can't get a "Hello World" page to render in under half a second in your stack of choice, don't expect a full web page built with the same tools to perform any better.
None of this is meant as disrespect to frameworks; frameworks exist because they are easy to use, support rapid development, and make code easy to maintain. The performance cost of a framework, compared to hand-written code, is a trade-off we accept in the bigger picture. Building an application on a framework is a great way to get it online as quickly as possible, and you can use a profiling tool to analyze and improve its performance where needed. For example, many modules in Zend Framework 1 provide awesome functionality but can be very slow; a profiling tool lets you identify and replace the poorly performing parts. Other frameworks have similar issues, and Xhgui can show you where the problems are in your application and whether they have a measurable impact.
Beyond your own code, some other strategies may come in useful sooner or later:
- Beware of groups of not-dangerously-slow-but-related functions appearing on one page. If your page spends 50% of its time in a series of view-helper functions that handle formatting (an imaginary example, I promise), it may be worth refactoring that whole component.
- Do less. Try removing features if their performance cost matters more than they do.
- Beware of content that is generated during a request but never used in that particular view, or that is regenerated repeatedly without having changed.
- Have a good caching strategy. That deserves an article of its own, but consider using an opcode cache in PHP (one is built in from PHP 5.5), adding a reverse proxy in front of your web server, and simply sending the appropriate cache headers for content that changes infrequently.
- Decouple aggressively. If one particular feature is a scary resource drain, take it off your web server. Perhaps it can be handled asynchronously, so your application simply adds a message to a queue; or it could move to a separate server and be accessed as a standalone service. Either way, decoupling reduces the load on your web server while enabling effective scaling.
Xhgui is your friend.
Xhgui is simple to install and unobtrusive in use, and its excellent output gives you real insight into what your application is doing. It pinpoints problems in an application and helps us confirm whether it is really behaving well (or not!). It may take a few iterations, but whether or not you have used xhprof or Xhgui before, I'd encourage you to take the time to try it on your own application; you may be surprised by what you find.