Tutorial on Using Xhgui to Test PHP Performance

Profiling is a technique for observing the performance of a program, and it is ideal for finding bottlenecks or strained resources. Profiling looks inside the program to show how each piece of code performs while a request is processed; it can identify which requests are problematic and, for those requests, pinpoint where within the request the performance problem lies. For PHP there are several profiling tools available, and this article focuses on one excellent tool: Xhgui. Xhgui is built on top of XHProf (a profiler released by Facebook), but it adds better storage for the profiling results along with a much friendlier interface for exploring the data. In that sense, Xhgui feels more like a new tool in its own right.

Xhgui has gone through several iterations; the current version offers a much nicer user interface and stores its profiling results in MongoDB. All of this is a big improvement over the previous version, which felt like a tool designed by developers for developers and saved its data to files, making the collected data awkward to work with. The 2013 edition of Xhgui is a well-rounded profiling tool, useful to managers and developers alike, and it is designed to be lightweight enough to run in a production environment.

This article will walk through the installation step by step and show you the kind of information you can collect with the tool.

First Step: Install dependencies

Xhgui has a few dependencies, so our first step is to get those in place. All the instructions below are for Ubuntu 13.04, but you should be able to adapt them to your own platform. We need MongoDB, PHP, and the ability to install PECL extensions.

First we will install MongoDB. There are official installation instructions where you can find the details for your particular system, but here I will simply install it with apt:

aptitude install mongodb

The MongoDB version you get this way may not be the latest, because the project moves quickly. If you want to stay on a very recent version, you can add the repository that MongoDB provides to your package manager so that you always get the newest release.


We also need the mongo driver for PHP. The version of the driver in the distribution's repository is rather old, so for today's demo we will install it from PECL instead. If you do not have the pecl command on your machine, you can install it with:

aptitude install php-pear

We then add the MongoDB driver to PHP with the following command:

pecl install mongo

To complete the installation, we finally need to add a line to php.ini. However, newer versions of Ubuntu provide a neater system for configuring PHP extensions, which works much like enabling Apache modules: all the configuration is kept in one place and a symbolic link is created to enable it. First we create a file to hold the setting; in this example it only needs the single line that loads the extension. Save the following in /etc/php5/mods-available/mongo.ini:

extension=mongo.so

Then enable the extension:

php5enmod mongo

Next, use pecl again to install the XHProf extension. It is currently only available as a beta, so the install command is:

pecl install xhprof-beta

Again the command line prompts us to add a line to php.ini. We use the same approach as above: create the file /etc/php5/mods-available/xhprof.ini and put the following inside:

extension=xhprof.so

Enable it with php5enmod xhprof. At this point we can check that the modules are installed correctly by running php -m at the command line. Remember to restart Apache as well, so that the web server picks up the new extensions.
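Since php -m only reflects the command-line configuration, it can also be reassuring to confirm that the web server's PHP sees the extensions too. The short script below is a throwaway check of my own, not part of the tutorial's tooling, and check-extensions.php is a hypothetical file name:

<?php
// check-extensions.php - hypothetical helper: drop it in a web-accessible
// directory and load it in the browser to confirm the web SAPI loads both
// extensions (php -m only tells you about the CLI configuration).
foreach (array('mongo', 'xhprof') as $ext) {
    echo $ext . ': ' . (extension_loaded($ext) ? 'loaded' : 'missing') . "<br>\n";
}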

Installing Xhgui

Xhgui itself is mostly a web application, which provides a friendly interface to the data collected by the xhprof extension. You can clone it from its GitHub repo, or download the zip file and unzip it. Once you have the code, make sure the cache directory is writable by the web server. Finally, run the installation script:

php install.php

That is all it takes to install the application; the script pulls in the remaining dependencies automatically, and if anything goes wrong the installer will tell you.

I prefer to install Xhgui in its own virtual host, which requires .htaccess files to be allowed and URL rewriting to be enabled. Enabling URL rewriting means the mod_rewrite module must be turned on, which you can do with the following command:

a2enmod rewrite

(Don't forget to restart Apache.) If all went well, you should be able to visit the URL of your Xhgui installation and see something like the following:

Enabling Xhgui for a Virtual Host

Now we want to turn Xhgui loose on our own site to check its performance. Note that it is best to profile before making any optimizations, so that you can measure the effect of your changes. The simplest approach is to add an auto_prepend_file directive to the virtual host, as shown below:

 
 
  
   
<VirtualHost *:80>
    ServerName example.local
    DocumentRoot /var/www/example/htdocs/
    php_admin_value auto_prepend_file /var/www/xhgui/external/header.php

    <Directory /var/www/example/htdocs/>
        Options FollowSymLinks Indexes
        AllowOverride All
    </Directory>
</VirtualHost>

Once that is in place, you can start profiling your site's requests. Xhgui profiles only 1% of the requests it sees, so to get meaningful data you either need to leave it running for a while or use a tool such as Apache Bench to fire off a batch of requests. Why does Xhgui profile only one request in a hundred? Because it is designed to be lightweight enough to use in production, it avoids adding overhead to every request, and a 1% sample rate is enough to give a clear picture of the site's overall traffic.
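To make the 1% sampling concrete, here is a minimal sketch of the idea behind a sampling prepend file. This is an illustration only, not Xhgui's actual header.php: the file name is hypothetical, and where Xhgui stores its results in MongoDB, this sketch just drops a JSON file into the system temp directory.

<?php
// profile-prepend.php - hypothetical sketch of 1%-sampled profiling.
// Xhgui's real header.php does more (and saves to MongoDB); this only
// shows the shape of the technique.
if (extension_loaded('xhprof') && rand(1, 100) === 1) {     // profile roughly 1 in 100 requests
    xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);  // collect CPU and memory data too

    register_shutdown_function(function () {
        $data = xhprof_disable();                            // stop profiling and fetch the raw results
        $uri  = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
        file_put_contents(
            sys_get_temp_dir() . '/profile-' . uniqid() . '.json',
            json_encode(array('uri' => $uri, 'profile' => $data))
        );
    });
}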

Meet the Data

I ran all the examples in this article on a test virtual machine, using the Joind.in API as the code under test. To generate some traffic, I ran the API's test suite a few times. You can also collect data while the application is under load, so Xhgui can be used during stress testing, and you can even collect data from a live production site (which sounds crazy, but this is exactly the use Facebook built the underlying tool for). Once some requests have been sent to the application, visit Xhgui again and you will find it has stored some data:

The list shows every request that Xhgui has profiled, most recent first, along with some summary information for each one. This information includes:

    • URL: the URL the request was made to
    • Time: when the request was made
    • wt: "wall time", the total elapsed time of the request. The name is short for "wall clock" time, i.e. the time a user had to wait for the request to complete
    • CPU: the CPU time spent on the request
    • MU: the memory used by the request
    • PMU: the peak memory used while processing the request

To get more detail about an individual request (a "run"), click through from the column you are interested in. You can also click a URL to see all the runs recorded for that URL. Either way, you end up on a page with much more detailed information about the request:

This is a long and very detailed page, so I have reproduced only two screenshots here (showing everything would take about five). The left-hand part of the page shows some information about the request itself, to help you keep track of what these statistics relate to, while the main panel on the right lists the function calls that consumed the most time and memory during the request. At the bottom of the page there is a key explaining each column.

The second screenshot shows more detail about each function call in the request. We can see how many times each function was called and what it cost, including CPU and memory figures. Both "inclusive" and "exclusive" numbers are shown: exclusive covers only the cost of the function itself, while inclusive also covers the cost of all the other functions it calls.
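To make the inclusive/exclusive distinction concrete, here is a tiny contrived example of my own (it is not taken from the Joind.in code used in this article):

<?php
// In a profile of this script, buildPage() has a large inclusive time
// (it covers loadData() too) but a small exclusive time (only its own work).
function loadData()
{
    usleep(80000);        // pretend this is a slow query, roughly 80 ms
    return range(1, 100);
}

function buildPage()
{
    $rows = loadData();   // counted in buildPage()'s inclusive time, not its exclusive time
    usleep(20000);        // roughly 20 ms of buildPage()'s own work: its exclusive time
    return count($rows);
}

buildPage();
// Roughly: buildPage() inclusive ~100 ms, exclusive ~20 ms;
// loadData() inclusive and exclusive are both ~80 ms, since it calls nothing else.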


Another Xhgui feature is the callgraph, which gives a vivid, visual view of where the time goes:

This is a nice illustration of the hierarchy of function calls. Best of all, the graph is interactive: you can drag nodes around to see the connections more clearly, and hover over a blob to see more information. The way it bounces and shifts as you interact with it is not an important feature, but it is great fun.

Understanding the Data

Having lots of statistics is all very well, but it can be hard to know where to start. For a page that is not performing as you expect, try this: sort by exclusive CPU time to see which functions are the most expensive in their own right, then examine those calls and look for ways to refactor and optimize them.

Once you have made your changes, profile the new version of the code again and measure the improvement. Xhgui has an excellent built-in tool for comparing two runs: click the "Compare this Run" button in the top-right corner of a run's detail page. It shows you the other recorded runs for that URL so you can choose the one to compare against; pick it, click its "Compare" button, and Xhgui takes you to the comparison view, shown below:

The summary table shows the main differences between the new and old runs, with the change in each statistic given both as an absolute number and as a percentage; in my example you can see at a glance that the new version's wall time is only 8% of the old version's. Below that, a detailed table breaks down the change in every statistic we saw on the details page, and you can sort any column to find the information you are interested in.

Once you have successfully refactored one area, go back to the details page to check the real effect of the new version, and then pick something else to optimize. Try sorting by memory usage or by exclusive wall time to find the functions whose improvement would do the most for your application's overall performance. Don't forget to look at the call counts either: optimizing a function that is called repeatedly multiplies the benefit across the whole program.
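As a hypothetical illustration of why call counts matter: if the profile shows an expensive function being called many times per request with the same arguments, caching the result within the request is often a cheap win. A minimal sketch, with invented function names:

<?php
// countryNameForCode() and expensiveCountryLookup() are invented for this
// example; the point is that caching turns 100 slow calls into one.
function countryNameForCode($code)
{
    static $cache = array();
    if (!isset($cache[$code])) {
        $cache[$code] = expensiveCountryLookup($code);  // only pay the cost once per request
    }
    return $cache[$code];
}

function expensiveCountryLookup($code)
{
    usleep(50000);              // stand-in for a slow database or API call (~50 ms)
    return strtoupper($code);
}

for ($i = 0; $i < 100; $i++) {  // a loop the profiler would flag by its call count
    countryNameForCode('fr');
}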

Approaches to Optimization

It is hard to know how much you have improved things unless you can quantify the results, which is why we profile an application before optimizing it; otherwise, how do you know whether you really made it faster? We also need a realistic idea of what the numbers should look like, or we may chase a target that is impossible to reach. A useful way to get that sense is to find out how fast your platform can possibly be with the most minimal setup: if you cannot serve a "Hello World" page in half a second with the tools you work with, do not expect a real web page built with the same tools to perform any better.

None of this is meant as disrespect to frameworks; frameworks exist because they are easy to use, support rapid development, and make code easier to maintain. The performance cost of a framework, compared with hand-written code, is the price of those trade-offs. Building an application on a framework is a great way to get it online as quickly as possible, and you can then use a profiling tool to analyze and improve its performance where needed. For example, many components in Zend Framework 1 provide very good, powerful features but can perform poorly, and a profiler helps you identify and replace the badly performing parts. Every other framework has similar issues somewhere, and Xhgui will show you where the problems are and whether they have a measurable impact on your application.


Beyond your own code, some other strategies will sooner or later come in handy:

    • Watch out for groups of related functions that are not dangerously slow on their own but add up across a page. If your page spends 50% of its time in a collection of view-helper functions that deal with formatting (I promise this is an imaginary example), it may be worth refactoring that whole component.
    • Do less. Consider removing features if performance matters more than they do.
    • Watch out for content that is generated during a request but never used in that particular view, or that is regenerated more than once.
    • Cache well. That is a whole article in itself, but consider using an opcode cache for PHP (one is built in from PHP 5.5 onwards), adding a reverse proxy in front of your web server, and simply sending the right caching headers for content that does not change often (see the sketch after this list).
    • Decouple aggressively. If one particular feature is a frightening resource hog, take it off your web server. Perhaps it can be handled asynchronously, so your application simply adds a message to a queue; or it could be moved onto a separate server and consumed as a standalone service. Either way, decoupling reduces the load on your web server while allowing you to scale effectively.
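As a small illustration of the caching-headers point above, here is a hedged sketch of what sending cache headers from plain PHP might look like; the one-hour lifetime is just an arbitrary example value, and the right lifetime depends on how often your content changes.

<?php
// Hypothetical example: tell browsers and any reverse proxy in front of the
// web server that this response may be reused for an hour, so most repeat
// requests never reach PHP at all.
$maxAge = 3600;   // one hour, an arbitrary example value

header('Cache-Control: public, max-age=' . $maxAge);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $maxAge) . ' GMT');

echo json_encode(array('generated_at' => date('c')));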

Xhgui is your friend.

Xhgui is easy to install and produces output polished enough to show at a board meeting. It pinpoints the problems in our applications and helps us confirm whether our fixes really worked (or not!). It may take a few iterations, but whether or not you have used xhprof or Xhgui before, I would encourage you to take the time to try it on your own application; you may be surprised by what you find.

