Making an Apache website faster


Apache 1.3 and 2.0 have both been optimized to improve processing capability and scalability, and most of the improvements take effect by default. However, 2.0 also offers many compile-time and run-time options that can significantly improve performance.
The MPM (Multi-Processing Module) is the core feature affecting performance in Apache 2.0.
It is no exaggeration to say that the introduction of MPMs is the most important change in Apache 2.0. As is well known, Apache is based on a modular design, and Apache 2.0 extends that modular design to the most basic functions of the web server: the server loads a multi-processing module, which is responsible for binding the local machine's network ports, accepting requests, and dispatching child processes to handle them. This extended modular design has two important benefits:
◆ Apache supports multiple operating systems in a more concise and effective manner;
◆ Servers can be customized according to the special needs of the site.
At the user level, an MPM looks very much like any other Apache module. The main difference is that only one MPM can be loaded into the server at any given time.
The following uses Red Hat Enterprise Linux AS 3 as the platform to demonstrate how to specify the MPM in Apache 2.0.
# wget http://archive.apache.org/dist/httpd/httpd-2.0.52.tar.bz2
# tar jxvf httpd-2.0.52.tar.bz2
# cd httpd-2.0.52
# ./configure --help | grep mpm
The output is as follows:
  --with-mpm=MPM    Choose the process model for Apache to use.
                    MPM={beos|worker|prefork|mpmt_os2|perchild|leader|threadpool}
This option selects the process model to be used, that is, the MPM. beos and mpmt_os2 are the default MPMs on BeOS and OS/2 respectively. perchild is designed to run child processes under different user and group identities, which is especially useful when running multiple virtual hosts that need CGI; its mechanism is better than suexec in version 1.3. leader and threadpool are both variants of worker and are still experimental; in some cases they do not work as expected, so Apache does not officially recommend them. We will therefore focus on the two production-quality MPMs that matter most here: prefork and worker.
How prefork works
If "-- with-MPM" is not used to explicitly specify a certain mpm, prefork is the default MPM on the UNIX platform. The pre-Dispatch mode adopted by it is also
Apache
The pattern used in 1.3. prefork itself does not use threads. Version 2.0 uses it to maintain compatibility with version 1.3. On the other hand, prefork uses a separate sub-process to process different requests, processes are independent of each other, which makes them one of the most stable MPM.
The working principle of prefork is that after the control process initially creates a "startservers" sub-process, it creates a process to meet the needs of the minspareservers settings, waits for one second, creates two more, and then waits for one second, create four more ...... In this way, the number of Created processes is increased exponentially, up to 32 processes per second until
Minspareservers value. This is the origin of prefork. This mode does not have to generate new processes when requests arrive, thus reducing system overhead and increasing performance.
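For reference, a minimal prefork section of httpd.conf looks like the following (the values shown are close to the stock 2.0 defaults and are only illustrative, not settings taken from this article):
<IfModule prefork.c>
StartServers           5
MinSpareServers        5
MaxSpareServers       10
MaxClients           150
MaxRequestsPerChild    0
</IfModule>
Because prefork serves one request per process, MaxClients here is also the upper bound on the number of simultaneous child processes.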
Working principle of worker
Compared with prefork, worker is a new MPM in 2.0 that supports a hybrid multi-process, multi-threaded model. Because threads are used to serve requests, it can handle a relatively large number of requests with less system resource overhead than a purely process-based server. At the same time, worker still uses multiple processes, each of which spawns multiple threads, in order to retain the stability of a process-based server. This MPM model represents the development direction of Apache 2.0.
worker works as follows: the main control process spawns StartServers child processes, and each child process contains a fixed number of threads set by ThreadsPerChild. Each thread handles requests independently. Likewise, so that threads do not have to be created when a request arrives, MinSpareThreads and MaxSpareThreads set the minimum and maximum number of idle threads, while MaxClients sets the total number of threads across all child processes. If the total number of threads in the existing child processes cannot meet the load, the control process spawns a new child process.
# Compile and install with the worker MPM
# ./configure --prefix=/usr/local/apache --with-mpm=worker --enable-so
#   (--enable-so enables DSO support so that modules can be loaded dynamically later)
# make
# make install
# cd /usr/local/apache/conf
# vi httpd.conf
StartServers          2
MaxClients          150
ServerLimit          25
MinSpareThreads      25
MaxSpareThreads      75
ThreadLimit          25
ThreadsPerChild      25
MaxRequestsPerChild   0
In worker mode, the total number of requests that can be processed simultaneously is determined by the total number of child processes multiplied by the ThreadsPerChild value, and this product must be greater than or equal to MaxClients. If the load is heavy and the existing child processes cannot handle it, the control process spawns new child processes. The default maximum number of child processes is 16; to raise it you must explicitly declare ServerLimit (its maximum value is 20000).
Note that if ServerLimit is declared explicitly, its value multiplied by ThreadsPerChild must be greater than or equal to MaxClients, and MaxClients must be an integer multiple of ThreadsPerChild; otherwise Apache automatically adjusts them to values that may not be what you expect. The following is the author's worker configuration section:
StartServers          3
MaxClients         2000
ServerLimit          25
MinSpareThreads      50
MaxSpareThreads     200
ThreadLimit         200
ThreadsPerChild     100
MaxRequestsPerChild   0
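A quick sanity check of these values against the rules above: ServerLimit × ThreadsPerChild = 25 × 100 = 2500, which is greater than MaxClients (2000), and 2000 / 100 = 20, so MaxClients is an exact multiple of ThreadsPerChild. Apache will therefore accept these figures without silently adjusting them, and the server can run up to 2000 worker threads at once.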
# Save and exit.
# /usr/local/apache/bin/apachectl start
# You can then tune Apache's core parameters according to actual conditions to achieve the best performance and stability.
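To confirm which MPM was compiled into the server, httpd -l lists the statically compiled modules; for the build above, worker.c should appear in the output:
# /usr/local/apache/bin/httpd -l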
Limit the number of concurrent Apache connections
We know that when a website offers software downloads over HTTP, if every user opens multiple download threads and there is no bandwidth limit, the maximum number of HTTP connections is soon exhausted or the network becomes congested, making many of the site's normal services unavailable. Next we add the mod_limitipconn module to control the number of concurrent HTTP connections per client.
# wget http://dominia.org/djao/limit/mod_limitipconn-0.22.tar.gz
# tar zxvf mod_limitipconn-0.22.tar.gz
# cd mod_limitipconn-0.22
# /usr/local/apache/bin/apxs -c -i -a mod_limitipconn.c
# After compilation, mod_limitipconn.so is automatically copied to /usr/local/apache/modules and your httpd.conf file is updated to load it.
# vi /usr/local/apache/conf/httpd.conf
# Add at the end of the file:
<Location />
    MaxConnPerIP 2
</Location>
# <Location /> is the restricted location, here the root of the host
# MaxConnPerIP 2 limits each IP address to at most 2 concurrent connections
# Save and exit.
# /usr/local/apache/bin/apachectl restart
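One way to see the limit in action (an illustrative check; bigfile.iso stands for any large file actually present on the server): start three downloads of the same file from one client machine at the same time, for example with wget in three terminals:
# wget http://www.jb51.net/bigfile.iso
With MaxConnPerIP 2, the first two downloads proceed while the third should be refused with a 503 Service Temporarily Unavailable response until one of the others finishes.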
Prevent file leeching
We have already limited per-IP concurrency, but if another site hotlinks our files from its own pages, what we just did is meaningless, because the visitor can still download them with a download manager such as NetAnts or FlashGet. In this case we need to bring in the mod_rewrite.so module. With it, when a file is hotlinked, the request is redirected to an error page that we have prepared in advance, thus preventing leeching.
# /usr/local/apache/bin/apxs -c -i -a /opt/httpd-2.0.52/modules/mappers/mod_rewrite.c
# After compilation, mod_rewrite.so is automatically copied to /usr/local/apache/modules and your httpd.conf file is updated to load it.
# vi /usr/local/apache/conf/httpd.conf
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^http://www.jb51.net/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://www.jb51.net$ [NC]
RewriteCond %{HTTP_REFERER} !^http://jb51.net/.*$ [NC]
RewriteCond %{HTTP_REFERER} !^http://jb51.net$ [NC]
RewriteRule .*\.(jpg|gif|png|bmp|tar|gz|rar|zip|exe)$ http://www.jb51.net/error.htm [R,NC]
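After saving, restart Apache; the rules can then be checked with curl by sending different Referer headers (test.zip is a hypothetical file used only for illustration):
# /usr/local/apache/bin/apachectl restart
# curl -I -e "http://other-site.example/page.html" http://www.jb51.net/test.zip
# curl -I -e "http://www.jb51.net/download.html" http://www.jb51.net/test.zip
The first request, with a foreign referer, should come back as a 302 redirect to http://www.jb51.net/error.htm; the second, with a referer from the site itself, should return 200 OK.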
At this point we have made a fairly comprehensive optimization of Apache, and performance is noticeably improved. That concludes this walkthrough. I hope that after reading this article you have picked up some experience with Apache optimization and will be able to handle similar situations in your own work.
