Linux/Nginx: sendfile and context switches

Source: Internet
Author: User
Tags: sendfile, context switches

While looking at the Nginx thread pool today I kept running into sendfile. I actually see sendfile all the time, but I usually just skim past it... So, sendfile first; tomorrow I'll have a proper chat about the Nginx thread pool. Recently the semi-official blog explained how to use the Nginx thread pool (AIO) to get roughly 9x the performance. As I understand it, the core idea is to throw any piece of logic that you expect to block into the thread pool, keep accepting new tasks, collect the results from the pool, and then return them. But for Nginx's strongest scenario, serving static files, I personally don't think the gain is that big: even if you can accept more client requests, the logic that finally returns the data still ends up blocking, and disk I/O capacity is limited after all. This is just my personal understanding; if anything here is wrong, feel free to tear it apart.
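For reference, wiring a location up to that thread pool looks roughly like the sketch below. The pool name default, the sizes, and the /download/ location are made-up values, and it assumes nginx 1.7.11+ built with --with-threads:

# main context: declare a named thread pool (name and sizes are illustrative)
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location /download/ {
            sendfile  on;
            aio       threads=default;  # hand blocking file reads to the pool
            directio  8m;               # assumed threshold; larger files bypass the page cache
        }
    }
}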

Enough about the nginx thread pool; back to sendfile...

Popular web servers all offer a sendfile option to improve server performance:

location /xiaorui.cc/ {
    sendfile   on;
    tcp_nopush on;
    aio        on;
}

So what is sendfile, and how does it affect performance? sendfile is a system call that the Linux kernel has provided since version 2.2, and a web server can decide whether to use it through its configuration. Let's first look at the traditional network transfer flow without sendfile:

read(file, tmp_buf, len);
write(socket, tmp_buf, len);

HDD >> kernel buffer >> user buffer >> kernel socket buffer >> protocol stack


A socket-based service first reads the data from disk and then writes it to the socket to complete the network transfer. The two lines above express this in code, but those two simple lines hide a lot of underlying work. Let's see what actually happens underneath when they run:

1. The read() system call causes a context switch from user mode to kernel mode, then a DMA copy reads the file data from the disk into a kernel buffer.
2. The data is copied from the kernel buffer to the user buffer, and read() returns, causing another context switch from kernel mode back to user mode.
3. The write() system call causes a context switch from user mode to kernel mode, then the data placed in the user buffer in step 2 is copied into a kernel buffer again (the second copy into a kernel buffer), but this time it is a different one: the buffer associated with the socket.
4. write() returns, causing another context switch from kernel mode back to user mode, and DMA then copies the data from the kernel socket buffer to the protocol stack.
These 4 steps add up to 4 context switches and 4 copies, and it is easy to see that reducing the number of switches and copies will improve performance. Since kernel 2.2 the sendfile() system call has been available to simplify the steps above; sendfile() reduces both the number of context switches and the number of copies.
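To make those four steps concrete, here is a minimal C sketch of the traditional path: copying an already-open file descriptor to an already-connected socket with read() and write(). The function name and buffer size are my own choices and error handling is trimmed; it illustrates the flow above, nothing more.

#include <unistd.h>

/* Traditional transfer: every chunk is copied disk -> kernel buffer -> user
   buffer -> kernel socket buffer, with a pair of context switches per call. */
static ssize_t copy_read_write(int file_fd, int socket_fd)
{
    char    tmp_buf[64 * 1024];
    ssize_t n, total = 0;

    while ((n = read(file_fd, tmp_buf, sizeof(tmp_buf))) > 0) {   /* kernel -> user copy */
        ssize_t off = 0;
        while (off < n) {
            ssize_t w = write(socket_fd, tmp_buf + off, n - off); /* user -> kernel socket buffer */
            if (w < 0)
                return -1;
            off += w;
        }
        total += n;
    }
    return (n < 0) ? -1 : total;
}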
Now take a look at the transfer process when sendfile() is used:
sendfile(socket, file, NULL, len);

HDD >> kernel buffer (in-kernel copy to kernel socket buffer) >> protocol stack

1. The sendfile() system call copies the disk data into a kernel buffer via DMA, and the kernel then copies that data directly into the other, socket-related kernel buffer. There is no switching between user mode and kernel mode; the buffer-to-buffer copy happens entirely inside the kernel.
2. DMA then copies the data directly from the kernel socket buffer to the protocol stack. Again there is no context switch, and no copy from user space to kernel space is needed, because the data never leaves the kernel.

Simply put, sendfile is a higher-performance system interface than the read/write combination, but note that sendfile sends the content of in_fd to out_fd, and in_fd cannot be a socket: it must be a file handle. So when Nginx serves static files, turning on the sendfile configuration item can greatly improve its performance. But when Nginx acts as a reverse proxy, sendfile is of no use, because in that case in_fd is not a file handle but a socket, which does not meet the requirements of sendfile()'s parameters.
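For comparison, here is a minimal sketch of the same transfer done with sendfile() on Linux. The function name is made up, the socket is assumed to be already connected, and error handling is trimmed:

#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Send a regular file over a connected socket without copying it through
   user space: in_fd must be a real file, out_fd is the socket. */
static ssize_t send_whole_file(int socket_fd, const char *path)
{
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0)
        return -1;

    struct stat st;
    if (fstat(file_fd, &st) < 0) {
        close(file_fd);
        return -1;
    }

    off_t   offset = 0;
    ssize_t sent   = 0;
    while (offset < st.st_size) {
        /* the kernel copies disk -> kernel buffer -> socket buffer; no user-space copy */
        ssize_t n = sendfile(socket_fd, file_fd, &offset, st.st_size - offset);
        if (n <= 0)
            break;
        sent += n;
    }
    close(file_fd);
    return sent;
}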
