Introduction
Major web properties such as Wikipedia, Facebook, and Yahoo! use the LAMP architecture to serve millions of requests per day, and web application software such as WordPress, Joomla, Drupal, and SugarCRM uses it to make it easy for organizations to deploy web-based applications.
The advantage of this architecture lies in its simplicity. Whereas stacks built on .NET or Java™ technology may call for lots of hardware, expensive software, and complex performance tuning, the LAMP stack runs on commodity hardware and an open source software stack. Because that software stack is a loose set of components rather than a monolith, performance tuning can be a real challenge: each component has to be analyzed and tuned on its own.
However, there are several simple performance tasks that have a big impact on sites of any scale. In this article, we explore five such tasks for optimizing the performance of a LAMP application. They rarely require architectural changes to your application, which makes them a safe and convenient way to improve responsiveness and reduce the hardware your web application needs.
Use an opcode cache
The easiest way to improve the performance of any PHP application (the "P" in LAMP, of course) is to use an opcode cache. It is one thing I make sure is in place on any site I work with, because the performance impact is large (response times are often cut in half with an opcode cache enabled). But the big question for people new to PHP is why the improvement is so great. The answer lies in how PHP handles web requests. Figure 1 provides an overview of the PHP request process.
Figure 1. PHP request
Because PHP is an interpreted language rather than a compiled one like C or the Java language, the entire parse-compile-execute sequence runs for every single request. You can see why this is time- and resource-consuming, especially when scripts rarely change between requests. After a script is parsed and compiled, it exists in a machine-readable state as a series of opcodes. This is where an opcode cache comes in: it caches the compiled scripts as a series of opcodes so that the parse and compile steps can be skipped on each request. Figure 2 shows how this workflow works.
Figure 2. PHP request using an opcode cache
So when cached opcodes for a PHP script exist, we can skip the parse and compile steps of the PHP request process, execute the cached opcodes directly, and output the results. A checking algorithm takes care of any changes you might have made to the script file, so on the first request after a change the script is recompiled and its opcodes are re-cached for subsequent requests, replacing the stale entry.
Opcode caches have been around for PHP for a long time; some of the earliest date back to the heyday of PHP V4. Today, a few popular options are in active development and use:
- Alternative PHP Cache (APC) is probably the most popular PHP opcode cache (see References). It was developed by several core PHP developers and has received significant contributions from engineers at Facebook and Yahoo! to improve its speed and stability. It also supports several other speedups for handling PHP requests, including a user cache component, which we will look at later in this article.
- WinCache is an opcode cache developed primarily by the Microsoft® Internet Information Services (IIS) team for use on Windows® (see References). The main motivation behind it was to make PHP a first-class development platform on the Windows-IIS-PHP stack, since APC was known not to work well on that stack. It is very similar to APC in functionality, and it also supports a user cache component as well as a built-in session handler, so you can use WinCache directly as your session handler.
- eAccelerator is a derivative of one of the original PHP opcode caches, Turck MMCache (see References). Unlike APC and WinCache, it is purely an opcode cache and optimizer, so it does not contain a user cache component. It is fully compatible with both UNIX® and Windows stacks and is popular with sites that do not intend to use the other features APC or WinCache provide. This is common if you want to use a solution such as memcache to provide a separate user cache server for a multiple web server environment.
Without a doubt, an opcode cache is the first step in speeding up PHP, because it eliminates the need to parse and compile a script on every request. After this first step you should already see improvements in response time and server load. But there is more you can do to optimize PHP, as we discuss next.
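If you settle on APC, for example, turning it on is usually just a matter of loading the extension and setting a few directives in php.ini. The fragment below is a minimal sketch: the extension file name and the shared memory size are assumptions that vary by platform and by how much code your site has.

; php.ini - enable the APC opcode cache (assumes the extension was installed, for example via PECL)
extension = apc.so
apc.enabled = 1
apc.shm_size = 64M   ; shared memory for cached opcodes; the size shown is an assumption, tune it for your codebase
apc.stat = 1         ; keep checking scripts for changes so edits are picked up automatically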
Optimize Your PHP settings
While implementing an opcode cache gives a big performance win, there are a number of other optimizations you can make by tuning the settings in your php.ini file. These settings are more appropriate for production instances; on development or test instances you may not want to make these changes, because they can make debugging application problems harder.
Let's look at a few items that are important for improving performance.
Options to disable
There are several php.ini settings you should disable, since they exist mainly for backward compatibility:
- register_globals: Before PHP V4.2 this was on by default; it automatically assigns incoming request variables to ordinary PHP variables. Besides being a major security problem (it mixes unfiltered incoming request data with the contents of ordinary PHP variables), it adds overhead to every request. Disabling it makes your application safer and faster.
- magic_quotes_*: Another PHP V4 legacy, this automatically escapes incoming form data. It was meant as a security feature to sanitize incoming data before it reaches the database, but it is not very effective because it does not protect against common SQL injection attacks. Since most database layers support prepared statements, which handle this risk far better, disabling this setting removes another annoying performance hit.
- always_populate_raw_post_data: This is only needed if, for some reason, you have to look at the entire unfiltered payload of incoming POST data. Otherwise it just keeps an extra copy of the POST data in memory, which is unnecessary.
Disabling these options on legacy code can be risky, however, because the code may depend on them being set to run correctly. You should not build any new code that depends on these options, and, where possible, you should look for ways to refactor existing code away from them.
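As a rough illustration, the corresponding php.ini lines might look like the fragment below. On newer PHP versions some of these directives have been removed entirely (register_globals and magic_quotes_* disappeared in PHP V5.4), in which case you simply leave them out.

; php.ini - legacy options disabled for security and performance
register_globals = Off
magic_quotes_gpc = Off
magic_quotes_runtime = Off
always_populate_raw_post_data = Off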
Options to enable or adjust
There are also a few good performance options in the php.ini file that you can enable or adjust to speed up your scripts (a sample configuration fragment follows this list):
- output_buffering: Make sure this is enabled, so that output is flushed back to the browser in blocks rather than on every echo or print statement; the latter slows down your request response times considerably.
- variables_order: This directive controls the order in which the EGPCS (Environment, Get, Post, Cookie, and Server) variables are parsed for an incoming request. If you do not use certain superglobals (environment variables, for example), you can safely remove them from the order for a small speedup, since PHP then skips parsing them on every request.
- date.timezone: This directive, added in PHP V5.1, sets the default time zone used by the DateTime functions. If you do not set it in the php.ini file, PHP performs a number of system calls to figure out the time zone, and as of PHP V5.3 a warning is issued on every request.
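Taken together, a production-oriented fragment of php.ini might look like the sketch below; the values shown are reasonable illustrations rather than one-size-fits-all recommendations.

; php.ini - production-oriented settings (values are illustrative)
output_buffering = 4096      ; flush output to the browser in 4KB blocks instead of per echo/print
variables_order = "GPCS"     ; drop E so environment variables are not parsed on every request
date.timezone = "UTC"        ; set this to your server's actual time zone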
These settings are the "low-hanging fruit" of what should be configured on your production instance. There is one more PHP-level item to look at: your application's use of require() and include(), as well as their siblings require_once() and include_once(). Optimizing your PHP configuration and your code around these calls prevents unnecessary file status checks on every request, which lowers response times.
Manage your require() and include() calls
File status calls (that is, calls down to the underlying file system to check whether a file exists) are fairly expensive from a performance standpoint. One of the biggest culprits for file status calls is the pair of statements used to pull code into a script: require() and include(). Their sibling calls, require_once() and include_once(), are even more problematic, because they must verify not only that the file exists but also that it has not already been included.
So what is the best way to address this? There are a couple of things you can do to speed this up.
- Use absolute paths for all require() and include() calls (see the sketch after this list). This tells PHP the exact file you want to include, so it does not have to search your entire include_path for it.
- Keep the number of entries in include_path low. This helps in situations where supplying an absolute path for every require() and include() call is impractical (typically in large legacy applications), by minimizing the number of locations PHP has to check for your files.
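Here is a minimal sketch of the first pointer; the lib/db.php helper file is hypothetical and used only for illustration.

<?php
// Build the include path from this script's own directory so it is absolute
// and PHP never has to walk include_path looking for the file.
// (On PHP V5.3 and later you can write __DIR__ instead of dirname(__FILE__).)
require_once dirname(__FILE__) . '/lib/db.php';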
APC and WinCache also provide a mechanism for caching the results of PHP's file status checks, so repeated file system checks are not needed. They are most effective when you keep your include file names static rather than variable-driven, so it is worth striving for that where you can.
Optimize Your Database
Database optimization can quickly become an advanced topic, and I have nowhere near the space to do it full justice here. But if you are looking to squeeze speed out of your database, there are a few first steps that should help with the most common problems.
Put the database on its own machine
Database queries can become quite intensive; it is not unusual for a simple SELECT statement over a reasonably sized data set to peg a CPU at 100 percent. If your web server and database server are both competing for CPU time on a single machine, your requests will undoubtedly slow down. So I consider it a good first step to put the web server and the database server on separate machines, and to make sure the database server is the beefier of the two (database servers love lots of memory and multiple CPUs).
Design and index your tables properly
The biggest database performance problems tend to come from poor database design and missing indexes. SELECT statements are usually the most common type of query in a typical web application, and they are also the most time-consuming queries a database server runs. On top of that, they are the kind of SQL statement most sensitive to proper indexing and database design, so keep the following pointers in mind for the best performance.
- Make sure every table has a primary key. It gives the table a default ordering and a fast way to join to other tables.
- Make sure any foreign keys in a table (that is, keys that link a record to a record in another table) are properly indexed (see the sketch after this list). Many databases automatically impose constraints on such keys so that the value really does match a record in the other table, which helps take the guesswork out of this.
- Try to limit the number of columns in a table. Scanning a table with many columns takes longer than scanning one with only a few. In addition, if a table has several columns that are rarely used, it wastes disk space on NULL-valued fields; the same goes for variable-size fields such as text or blob, where the table can grow far larger than it needs to be. In that case, consider splitting the extra columns into a separate table joined on the record's primary key.
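To make the first two pointers concrete, here is a minimal sketch using a pair of invented tables (customers and orders): each table gets a primary key, and the foreign key used for joins is explicitly indexed.

CREATE TABLE customers (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name VARCHAR(150) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE orders (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
    customer_id INT UNSIGNED NOT NULL,
    total       DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (id),
    KEY idx_orders_customer_id (customer_id),   -- index the foreign key used in joins
    CONSTRAINT fk_orders_customer FOREIGN KEY (customer_id) REFERENCES customers (id)
) ENGINE=InnoDB;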
Analyze queries running on the server
The best way to improve database performance is to analyze what queries are running on your database server and how long they take to run. Almost every database has tools for this. For MySQL, the slow query log is a handy way to find problematic queries. To use it, set slow_query_log to 1 in the MySQL configuration file, set log_output to FILE, and the queries are recorded in the file hostname-slow.log. You can set the long_query_time threshold to control how many seconds a query must run before it is considered a "slow query." I recommend starting with a threshold of 5 seconds and lowering it toward 1 second over time, depending on your data set. If you look inside that file, you will see detailed query entries similar to Listing 1.
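Exact option names vary slightly between MySQL versions, but a sketch of the relevant my.cnf section might look like the following; the log file path is an assumption.

[mysqld]
slow_query_log      = 1
log_output          = FILE
slow_query_log_file = /var/log/mysql/hostname-slow.log   # path is an assumption
long_query_time     = 5   # start at 5 seconds and tighten toward 1 over time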
Listing 1. MySQL slow query log
/usr/local/mysql/bin/mysqld, Version: 5.1.49-log, started with:
Tcp port: 3306  Unix socket: /tmp/mysql.sock
Time                 Id Command    Argument
# Time: 030207 15:03:33
# User@Host: user[user] @ localhost.localdomain [127.0.0.1]
# Query_time: 13  Lock_time: 0  Rows_sent: 117  Rows_examined: 234
use sugarcrm;
select * from accounts inner join leads on accounts.id = leads.account_id;
The key item to look at is Query_time, which shows how long the query took. Another thing to consider is the Rows_sent and Rows_examined values, because they can reveal cases where a query is written incorrectly and examines or returns far too many rows. You can dig further into how a query is written by prefixing it with EXPLAIN, which returns the query plan instead of the result set, as shown in Listing 2.
Listing 2. MySQL EXPLAIN results
mysql> explain select * from accounts inner join leads on accounts.id = leads.account_id;
+----+-------------+----------+--------+--------------------------+---------+---------+---------------------------+------+-------+
| id | select_type | table    | type   | possible_keys            | key     | key_len | ref                       | rows | Extra |
+----+-------------+----------+--------+--------------------------+---------+---------+---------------------------+------+-------+
|  1 | SIMPLE      | leads    | ALL    | idx_leads_acct_del       | NULL    | NULL    | NULL                      |  200 |       |
|  1 | SIMPLE      | accounts | eq_ref | PRIMARY,idx_accnt_id_del | PRIMARY | 108     | sugarcrm.leads.account_id |    1 |       |
+----+-------------+----------+--------+--------------------------+---------+---------+---------------------------+------+-------+
2 rows in set (0.00 sec)
The MySQL Manual explores the topic of EXPLAIN output in much more depth (see References), but one important thing to look for is a 'type' column showing 'ALL', because it means MySQL is doing a full table scan and not using any key to perform the query. Adding an index greatly improves query speed in such cases.
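In the example above, the leads table is scanned in full because no usable index covers the join column. Assuming the schema matches the listings, a fix might be as simple as the statement below (the index name is invented):

-- give MySQL an index on the join column so it no longer scans every row in leads
ALTER TABLE leads ADD INDEX idx_leads_account_id (account_id);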
Effective data caching
As we saw in the previous section, the database is often the biggest pain point for web application performance. But what if the data you are querying does not change very often? In that case, a good option is to store those results locally instead of hitting the database on every request.
Two of the opcode caches we looked at earlier, APC and WinCache, provide facilities for exactly this: you can store PHP data in a shared memory segment for fast retrieval. Listing 3 provides an example.
Listing 3. Example of using APC to cache database results
<?php
function getListOfUsers()
{
    $list = apc_fetch('getListOfUsers');
    if ( empty($list) ) {
        $conn = new PDO('mysql:dbname=testdb;host=127.0.0.1', 'dbuser', 'dbpass');
        $sql = 'SELECT id, name FROM users ORDER BY name';
        foreach ($conn->query($sql) as $row) {
            $list[] = $row;
        }
        apc_store('getListOfUsers', $list);
    }
    return $list;
}
Note that we only have to run the query once. After that, we push the results into the APC cache under the key getListOfUsers. From then on, until the cache entry expires, you can fetch the result array straight from the cache and skip the SQL query entirely.
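Listing 3 stores the entry without a time-to-live, so it lives until the cache is cleared or the server restarts. If the underlying data can change, one option is to pass a TTL as the third argument to apc_store(), or to delete the entry whenever you modify the users table; the snippet below is a sketch of both.

<?php
// cache the list for 10 minutes; after that the next call re-runs the SQL query
apc_store('getListOfUsers', $list, 600);

// ...or invalidate the entry explicitly right after the underlying data changes
apc_delete('getListOfUsers');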
APC and WinCache are not the only options for a user cache: memcache and Redis are other popular choices, and they do not require the user cache to run on the same server as the web server. That extra flexibility helps performance, especially once your web application scales out across multiple web servers.
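As a rough illustration of the same pattern against a standalone cache server, the sketch below assumes the PECL Memcached extension and a memcached daemon on 127.0.0.1:11211; everything else mirrors Listing 3.

<?php
function getListOfUsers()
{
    // connect to a memcached server that can be shared by all of your web servers
    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);

    $list = $cache->get('getListOfUsers');
    if ($list === false) {   // cache miss: run the query and store the result
        $conn = new PDO('mysql:dbname=testdb;host=127.0.0.1', 'dbuser', 'dbpass');
        $sql  = 'SELECT id, name FROM users ORDER BY name';
        $list = $conn->query($sql)->fetchAll(PDO::FETCH_ASSOC);
        $cache->set('getListOfUsers', $list, 600);   // cache for 10 minutes
    }
    return $list;
}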
Conclusion
In this article, we explored five simple things you can do to optimize the performance of your LAMP application. We looked at PHP-level techniques, such as using an opcode cache and tuning your PHP configuration, as well as database-level techniques, such as designing your tables well and indexing them properly. Finally, we discussed how a user cache (using APC as the example) lets you avoid repeated database calls when the data does not change very often.