Single-Machine Web Optimization

Source: Internet
Author: User
Tags: ack, sendfile

I finally have to start blogging (and no, this is not about Python or front-end work). Personally I like Notepad++; a pity it cannot embed pictures, and Word is just too annoying.

Why optimize a single machine? Simple: whether the future holds clusters, physical machines, or virtual machines, squeezing the most out of each node raises the whole system's floor, per the bucket (weakest-plank) principle. Besides, when budgets are tight, running Linux well is largely a matter of tuning the operating system for performance. Clusters evolve from single machines, so optimizing one machine is the starting condition for every next step.

Let's look at the points we can optimize along the overall flow of a request, from an operations angle.

In fact, speeding up any step of that flow improves our performance (performance here meaning the user's subjective feeling: does your web page open quickly or not). Every time the site gets sluggish and the boss says "hurry up and add some bandwidth", he is not wrong: with more bandwidth the same data takes less time to send, so adding bandwidth certainly helps. Now you know your boss is actually quite professional, right?

Well, enough clowning around. As professional operations people, we naturally want to spend time and money where they lift performance the most, the so-called "bottleneck point"; finding that bottleneck is the most critical part of optimization. Unfortunately, we can only optimize from our own servers, because locating the bottleneck is a very hands-on affair; doing it on paper is meaningless.

So, on to today's topic: single-machine Web performance optimization!

A truly single-machine Web setup is actually not easy to optimize, because every service sits on one box. The common case is the classic LNMP stack: one computer running Nginx, PHP, and MySQL, plus perhaps some other odds and ends. So we can only make some general optimizations.

I. Considerations from the system itself

First, on the server itself, we use the familiar ulimit command to raise the number of files each process may open to 65535 (the default is 1024), and we also raise the total number of files the whole system may open. Why change these? Because on Linux everything is a file, so we must! Still not convinced? Every connection that is created opens at least one file: even the smallest socket occupies a file descriptor. If there are too many connections and a single task is limited to 1024 open files, later connections simply cannot get in. Likewise, the system-wide ceiling must not be exceeded either, so fs.file-max matters just as much; both are indispensable.

```shell
[root@localhost ~]# ulimit -n
1024
[root@localhost ~]# ulimit -n 65535
[root@localhost ~]# ulimit -n
65535
[root@localhost ~]# cat /proc/sys/fs/file-max
181795
[root@localhost ~]# echo "fs.file-max = 6553560" >> /etc/sysctl.conf
```
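Note that `ulimit -n` only affects the current shell session. A sketch for making both limits survive reboots and new logins, assuming the usual default paths (`/etc/security/limits.conf` for the PAM per-process limit, `/etc/sysctl.conf` for the system-wide one):

```shell
# Persist the per-process open-file limit for all users
# (assumes the standard PAM limits file location):
echo '*  soft  nofile  65535' >> /etc/security/limits.conf
echo '*  hard  nofile  65535' >> /etc/security/limits.conf

# Apply the fs.file-max line we appended to /etc/sysctl.conf
# without waiting for a reboot:
sysctl -p
```

The limits.conf change takes effect on the next login; existing sessions keep their old limit.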
II. Considerations from the type of service provided

We provide Web services externally, so the traffic coming in is HTTP/HTTPS; internally, my own application needs to connect to its database. Either way it is all TCP, so let's first review the classic three-way handshake and four-way teardown.

The connection setup and three-way handshake need little comment; let's go straight to the important part, the teardown waves.

TIME_WAIT appears on the side that actively closes the connection (typically the client).

CLOSE_WAIT appears on the side that is closed passively (typically the server).

Neither wait state is great: after the connection is torn down the socket lingers (TIME_WAIT for 2*MSL, about 60 seconds on Linux by default), still occupying one of those open-file slots we just raised! So this is the next point to optimize. First, though, ask why a closed connection is kept around in a wait state at all. The folks who wrote the TCP protocol were not stupid; you think they were stupid? They were simply solving the problems of their environment at the time:

    1. Prevent stray packets from the old connection, delayed in the network, from being mistaken for part of a new one.
    2. TCP, unlike UDP, must guarantee reliability. The final ACK sent by the active closer may well be lost, in which case the passive side resends its FIN; if the active side had already gone to CLOSED, it would answer that FIN with RST instead of ACK. So the active side must sit in TIME_WAIT, not CLOSED!
    3. Besides, TIME_WAIT sockets designed this way are recycled periodically and do not normally hog many resources, unless a flood of requests (or an attack) arrives in a short time.
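Before tuning anything, it is worth counting how many sockets are actually parked in each state; `ss -tan` (or `netstat -ant`) lists them, with the state in the first field. A small tallying pipeline, here fed a canned sample instead of live `ss` output so the counting logic itself is easy to verify:

```shell
# Count connections per TCP state; the printf stands in for real
# `ss -tan` output where the state is the first field.
printf 'ESTAB\nTIME-WAIT\nTIME-WAIT\nCLOSE-WAIT\n' |
    awk '{count[$1]++} END {for (s in count) print count[s], s}' |
    sort -rn
```

On a live machine, replace the printf with `ss -tan | awk 'NR>1'` to tally real connections.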

Of course, all this was designed for the network conditions of the day: in 1981 the links were so slow you could wait minutes for a few bytes to crawl across (how do I know? I travel through time and space; I just popped back to buy a few bottles of '82 Sprite). But that history is no reason not to optimize today, and we have three kernel parameters for the job:

    1. tcp_tw_reuse: reuse. Every new TCP connection costs resources, so being able to reuse TIME_WAIT sockets is genuinely useful; it is safe to enable almost anywhere.
    2. tcp_tw_recycle: fast recycling. Very strong in an intranet environment, quickly reclaiming TIME_WAIT sockets, but it must not be enabled behind NAT or on a load balancer (and note it was removed entirely in Linux 4.12).
    3. tcp_timestamps: the precondition for the two options above, already on by default. It stamps each packet so the kernel has an identity to decide whether a socket should be recycled or reused.

```shell
[root@localhost ~]# cat /proc/sys/net/ipv4/tcp_tw_reuse
0
[root@localhost ~]# cat /proc/sys/net/ipv4/tcp_timestamps
1
[root@localhost ~]# cat /proc/sys/net/ipv4/tcp_tw_recycle
0
[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
```
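Echoing into /proc lasts only until reboot. A persistent sketch for /etc/sysctl.conf (on kernels 4.12 and later tcp_tw_recycle no longer exists, so only the two surviving knobs are written here):

```shell
# Persist the TIME_WAIT tuning across reboots
# (assumes the classic /etc/sysctl.conf location):
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_reuse = 1
EOF

# Load the new values immediately:
sysctl -p
```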

With the file limits looking good, let's raise Web concurrency another way. We all know that an established connection is a pair of ports chewing on each other, so opening up more local ports helps. Some will object: ephemeral ports are the client's problem; your server listens on 80 and nobody needs the rest. But this is the charm of architecture: to your users you are the server, yet to your database and static-resource backends you are the client. Unless the Web service you provide is a lone HTML page that never touches a database and never fetches static resources from a separate server, outbound connections matter, and having more local ports to open is a good thing.

```shell
[root@localhost ~]# cat /proc/sys/net/ipv4/ip_local_port_range
32768    60999
[root@localhost ~]# echo "10000 65530" > /proc/sys/net/ipv4/ip_local_port_range
```
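A quick sanity check on what the widened range buys, as simple shell arithmetic (the range is inclusive on both ends):

```shell
# Ephemeral ports available after widening the range to 10000-65530:
low=10000
high=65530
echo $((high - low + 1))   # → 55531
```

Compare with the default 32768-60999 range, which gives only 28232 ports.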
III. Component separation

When the whole service lives on one machine, we will certainly split it apart as traffic grows; typically the database is separated first, then the static resources.

Suddenly everything is much tidier: we now have three servers, and when buying them we can pick cost-effective hardware for each role. The application server stores nothing, so its disks can be modest. The database absolutely deserves an SSD; they are cheap these days and the I/O improvement is dramatic. The file server also benefits from an SSD, while its CPU and memory need little attention.

Separating the machines also buys us concurrency: every service consumes file descriptors, and now each has a machine's worth to itself, with nobody competing for your descriptors and ports to spare.

Let's tune the database and file servers first. Both do heavy I/O against the disk, so we adjust the disk scheduling policy. The Linux I/O schedulers are basic knowledge:

cfq: completely fair queuing, the default option

noop: does almost nothing (a simple FIFO); SSDs use this

deadline: puts a deadline on requests; commonly used for databases

So, obviously, once our file server is on an SSD we switch that device's scheduler to noop, while the Web server, which may be busy writing logs, is fine on the default cfq, and the database server uses deadline. Then what about a database that is itself on an SSD: deadline or noop? The answer is either; the performance difference between the two is small, but given the nature of database workloads, deadline is the more reasonable setting.
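Inspecting and switching a disk's scheduler at runtime, as a sketch (the device name `sda` is an assumption, and the available scheduler names vary by kernel; newer multi-queue kernels offer `none`/`mq-deadline` instead of `noop`/`deadline`):

```shell
# Show the schedulers available for this disk; the active one is
# printed in [brackets], e.g. "noop deadline [cfq]":
cat /sys/block/sda/queue/scheduler

# Switch the SSD-backed file server's disk to noop (runtime only;
# persist via a udev rule or the kernel's elevator= boot parameter):
echo noop > /sys/block/sda/queue/scheduler
```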

Finally, we optimize our Web server itself, taking Nginx as the example.

Main (global) configuration:

```nginx
worker_processes auto;
```

events configuration:

```nginx
events {
    use epoll;
    worker_connections 65535;
}
```

http configuration:

```nginx
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
}
```

I won't belabor the first few directives. tcp_nopush and tcp_nodelay sound like opposites, but with sendfile enabled they do not conflict: the combined effect is that data accumulates until a full-size packet can be sent, while the final, partial packet is sent immediately instead of waiting.
