[Repost] Web Server Nginx Explained in Detail (Theory)

Source: Internet
Author: User
Tags: epoll, http, authentication, imap, sendfile, nginx, server

Outline

I. Preface

II. How a Web Server Provides Service

III. Comparison of Multi-process, Multi-threaded, and Asynchronous Modes

IV. The Web Service Request Process

V. Linux I/O Models

VI. Linux I/O Models in Detail

VII. Implementations of the Linux I/O Models

VIII. Apache's Working Modes

IX. Requirements for a High-Concurrency Web Server

X. Nginx in Detail

I. Preface

Before we talk about Web servers, let's cover threads, processes, and the number of concurrent connections.

1. Processes and Threads

A process is a running instance of a program with some independent function, operating on a data set; it is the operating system's independent unit of resource allocation and scheduling. Logically, multithreading means that within one application (process) there are multiple parts of execution that can run concurrently. The operating system, however, does not schedule threads as separate applications; it still dispatches, manages, and allocates resources at the process level. This is the key difference between processes and threads: a process has its own address space, so in protected mode a crashed process does not affect other processes, whereas a thread is just a different execution path within a process. A thread has its own stack and local variables, but threads share the process's address space, and one dead thread kills the entire process. Multi-process programs are therefore more robust than multi-threaded ones, but switching between processes costs more resources and is less efficient. On the other hand, some concurrent operations that must run simultaneously and share variables can only use threads, not processes. With processes and threads explained, let's talk about the number of concurrent connections.

2. Number of concurrent connections

(1). What is the maximum number of concurrent connections?

The so-called maximum number of concurrent connections is the maximum number of sessions that the server can handle at the same time.

(2). What is a session?

Opening a Web site establishes a session between the client browser and the server side, and the pages we browse are transferred over the HTTP protocol.

(3). How does the HTTP protocol work?

HTTP supports two kinds of connection: non-persistent and persistent (HTTP/1.1 uses persistent connections by default).

(4). A request between the browser and the Web server completes in the following seven steps

    • Establish a TCP connection

    • The Web browser sends the request command to the Web server

    • The Web browser sends the request header information

    • The Web server answers

    • The Web server sends the answer header information

    • The Web server sends data to the browser

    • The Web server closes the TCP connection

In general, once the Web server has sent the response data to the browser, it closes the TCP connection. But if the browser adds the header line Connection: keep-alive to its request, the TCP connection remains open after the response is sent, so the browser can keep sending requests over the same connection. Keeping the connection open saves the time needed to establish a new connection for each request and also saves network bandwidth.
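The keep-alive exchange above can be sketched by composing, by hand, the request a browser would send in steps 2-3; a minimal Python sketch (the host name is a hypothetical example):

```python
# Build the request line and headers a browser sends (steps 2-3 above).
# With "Connection: keep-alive" the server leaves the TCP connection open
# after responding; HTTP/1.1 connections are persistent unless closed.

def build_request(host, path="/", keep_alive=True):
    """Compose an HTTP/1.1 request as raw bytes."""
    headers = [
        f"GET {path} HTTP/1.1",                            # request command
        f"Host: {host}",                                   # required in HTTP/1.1
        "Connection: " + ("keep-alive" if keep_alive else "close"),
    ]
    # a blank line terminates the header section
    return ("\r\n".join(headers) + "\r\n\r\n").encode("ascii")

req = build_request("www.example.com")
print(req.decode())
```

The same `build_request` call with `keep_alive=False` produces the non-persistent variant, after which the server closes the connection as in step 7.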

3. How to calculate the number of concurrent connections

    • Downloads: when a user downloads a file from the server, that is one connection; when the download finishes, the connection disappears. Sometimes users download with a multi-threaded download manager such as Thunder (Xunlei); if that one user opens 5 threads, it counts as 5 connections.

    • When a user opens one of your pages, even if the page makes no further request to the server, the user is still counted as online for the next 15 minutes.

    • If the user keeps opening other pages on the same site, the online count extends to 15 minutes after the user's last click (last request); within those 15 minutes, no matter how the user clicks (including opening new windows), they count as one person online.

    • When the user opens the page and then closes the browser normally, that user is removed from the online count immediately.

II. How a Web Server Provides Service

Because a Web server must serve multiple clients at the same time, it needs some way of multitasking. In general there are three options: multi-process mode, multi-threaded mode, and asynchronous mode. In multi-process mode the server devotes one process to each client; since spawning a process incurs overhead such as copying the process's memory, performance drops as clients multiply. To avoid that overhead, you can use multithreading or the asynchronous approach. In multi-threaded mode, multiple threads within one process provide the service; because threads are cheap, performance improves. The asynchronous mode needs no such extra overhead at all: the server communicates with every client in a non-blocking way, using a single process to poll the connections.

Although the asynchronous approach is the most efficient, it has its own drawbacks. Because scheduling among tasks is done by the server program itself, a problem in one place puts the entire server in trouble. So to add functionality to such a server you must both comply with the server's own task-scheduling scheme and make sure the code contains no errors, which limits the server's functionality. Asynchronous Web servers are therefore the most efficient but functionally simple, for example the Nginx server.

Because multithreading uses threads for task scheduling, server development is simpler and, thanks to standard interfaces, easier for multiple people to collaborate on. However, all the threads live in one process and can access the same memory space, so an error in one thread can affect the whole process, and memory must be allocated and released in a thread-safe way. Since a server system runs continuously for days, months, or even years, small errors accumulate and eventually disrupt the server's normal operation, making a highly stable multi-threaded server program hard to write; hard, but not impossible. Apache's worker module is a good example of multithreading support.

The advantage of a multi-process approach is stability, because when a process exits, the operating system recycles its resources so that it does not leave any garbage behind. Even if there are errors in the program, because the process is isolated from each other, the error does not accumulate, but is cleared as the process exits. Apache's Prefork module is a module that supports multiple processes.

III. Comparison of Multi-process, Multi-threaded, and Asynchronous Modes

In general, a Web server provides service in one of three ways:

    • Multi-process Mode

    • Multi-threaded Way

    • Async mode

Asynchronous mode is the most efficient, multi-process mode is the most stable, and multi-threaded mode uses fewer resources.

1. Multi-process

In this architecture, the Web server spawns multiple processes to handle user requests in parallel; processes can be created on demand or in advance. Some Web server applications respond by spawning a separate process for each user request, but when the number of concurrent requests reaches tens of thousands, that many concurrently running processes consume enormous system resources. (In other words, each process may respond to one request, or each process may respond to multiple requests.)

Advantages:

    • The biggest advantage is stability: an error in one process does not affect the others. For example, if the server serves 100 simultaneous requests with 100 processes and one of them fails, only that one process dies; the other 99 keep responding to user requests.

    • Each process responds to a request

Disadvantages:

    • Many processes mean frequent process switches, so CPU resources are used inefficiently

    • Each process has an independent address space holding much duplicated data, so memory is used inefficiently

    • Process switching is performed by the kernel and consumes CPU resources

2. Multithreading

In a multithreaded manner, each thread responds to a request, and because of the sharing of the process's data between threads, the threads are less expensive and performance increases.

Advantages:

    • Sharing process data between threads

    • Each thread responds to a request

    • Thread switching is still needed, but switching between threads is lightweight

    • Threads of the same process can share many resources of a process, such as open files

    • Memory requirements drop considerably compared with processes

    • Read can be shared, write cannot be shared

Disadvantages:

    • Rapid thread switching causes thread thrashing (jitter)

    • Multithreading can cause server instability

3. Async mode

One process or thread responds to multiple requests, with no extra process/thread overhead; this gives the highest performance and the smallest resource consumption. But it has a drawback: an error in that one process or thread can bring down the entire server.

IV. The Web Service Request Process

Above we explained how a Web server provides service in multi-process, multi-threaded, and asynchronous modes; questions no doubt remain, and we will address them step by step. For now, regardless of which mode the Web server uses, let's walk through what actually happens when a client requests a Web service. There are 11 steps:

    • First, the client sends a request to the Web server; the request arrives first at the NIC.

    • The NIC hands the request to the kernel in kernel space, which unpacks it and finds it is destined for port 80.

    • The kernel passes the request to the Web server in user space; the Web server accepts the request and finds the client is asking for, say, the index.html page.

    • The Web server makes a system call, passing the request to the kernel.

    • The kernel sees that a page file is being requested and calls the disk driver to access the disk.

    • Through the disk driver, the kernel fetches the page file.

    • The kernel stores the page file in its own buffer and notifies the Web server process or thread to fetch it.

    • Through a system call, the Web server copies the page file from the kernel buffer into its own process buffer.

    • The Web server prepares the page file as the response to the user and sends it back to the kernel, again via a system call.

    • The kernel encapsulates the page file into packets and hands them to the network card.

    • The network card sends the packets over the network as the response to the client.

Simply put: user request → user space → system call → kernel space → the kernel reads the page from disk → back to user space → response to the user. That is a simple sketch of a client's request to a Web service. Notice that this process involves two kinds of I/O: the network I/O of the client's request, and the disk I/O of the Web server reading the page. Next, let's talk about the Linux I/O models.

V. Linux I/O Models

1. I/O model classification

Description: as we all know, the Web server's process responds to user requests but cannot operate I/O devices directly; it must ask the kernel to carry out the I/O action on its behalf via system calls. The kernel maintains a buffer for each I/O device.

For data input, time is spent waiting for the data to arrive in the buffer (wait), and then more time copying the data from the buffer into the process (copy). Depending on how the waiting is done, I/O actions can be divided into five models:

    • Blocking I/O

    • Non-blocking I/O

    • I/O multiplexing (SELECT and poll)

    • Signal (event) driven I/O (SIGIO)

    • Asynchronous I/O (the POSIX.1 aio_ family of functions)

2. Terminology related to I/O models

First, it is necessary to explain the concepts of blocking, non-blocking, synchronous, asynchronous, and I/O.

(1). Blocking and non-blocking:

Blocking and non-blocking refer to whether an operation waits until it has completely finished, or returns immediately. For example, going to the station to pick up a friend is an operation. There are two ways to do it. In the first, you dutifully wait at the station until your friend's train has arrived. In the second, you get to the station, ask the attendant whether the train has come, and when told "not yet" you go off for a stroll and perhaps come back to ask again. The first way is blocking, the second non-blocking. Blocking versus non-blocking, then, describes how the operation is carried out; it is a property of the one doing the waiting, not of the event.

(2). Synchronous and Asynchronous:

Synchronous and asynchronous are a different concept: they are a property of the event itself. For example, suppose the boss tells you to move a pile of stones, and you must do it alone. You go and do it, and whether the outcome is that the stones get moved or that you hurt your foot, you only know once the moving is done. That is a synchronous event. If the boss also gives you a helper, you can let the helper do the moving and tell you when it is finished; that becomes asynchronous. In fact, asynchronous events come in two kinds: with notification and without. The one just described is with notification. Some helpers are not so proactive and will not tell you when they finish, so you have to check the status from time to time; that is asynchronous without notification.

For synchronous events, you can only proceed in a blocking way. For asynchronous events, both blocking and non-blocking work. Non-blocking comes in two flavors: actively querying, and passively receiving a message. Passive does not mean worse; here it is actually more efficient, because most active queries turn out to be wasted work. Both flavors are available for asynchronous events with notification; without notification, only active querying works.

(3). I/O

Back to I/O. Whether input or output, access to a peripheral (say, a disk) can be divided into two phases: request and execution. The request phase inspects the peripheral's status (such as whether it is ready); execution is the real I/O operation. Before Linux 2.6, only the "request" phase could be an asynchronous event; with 2.6, AIO was introduced to make the "execute" phase asynchronous as well. Although Linux/Unix dominates as a server platform, in this respect it lagged well behind Windows: IOCP (Windows's AIO) was already there in Windows 2000.

(4). Summary

On Linux, the "execution" phase of the first four I/O models is synchronous; only the last achieves true, full asynchrony. Blocking, the first kind, is the most primitive and the most tiring way. Tiring for whom, though? The application is dealing with the kernel: blocking is the most tiring approach for the application but the easiest for the kernel. Take the station pickup again: you are the application, the attendant is the kernel; if you just stand there waiting, life is easiest for the attendant. Of course, computers and operating systems are designed more and more with the end user in mind, so to satisfy users the kernel gradually takes on more and more of the work; the evolution of the I/O models reflects exactly that.

Non-blocking I/O, I/O multiplexing, and signal-driven I/O are all non-blocking, of course with respect to the "request" phase. Non-blocking I/O actively queries the peripheral's state. The select and poll calls in I/O multiplexing are also active queries; the difference is that select and poll can query the state of many fds (file descriptors) at once, and select has a limit on the number of fds. epoll is based on callback functions; signal-driven I/O is based on signal messages. Those two belong in the "passively receive a message" category. Finally came the great AIO, where the kernel does everything and the application layer achieves full asynchrony: the best performance, and of course the highest complexity. Now, let's walk through these models in detail.

VI. Linux I/O Models in Detail

First, let's look at the basic Linux I/O models arranged as a simple matrix.

The models are: synchronous blocking I/O (blocking I/O), synchronous non-blocking I/O (non-blocking I/O), asynchronous blocking I/O (I/O multiplexing), and asynchronous non-blocking I/O (two kinds: signal-driven I/O and asynchronous I/O). All right, now let's go through them in detail.

1. Blocking I/O

Description: the application invokes an I/O function that blocks the application while it waits for the data to be ready. If the data is not ready, the call keeps waiting; once it is, the data is copied from the kernel into user space and the I/O function returns a success indicator. This needs little explanation: it is the ordinary blocking socket. (Note that network I/O is generally blocking I/O: the client makes the request, the Web server process responds, and the request waits until the process returns the page.)
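As a minimal sketch of blocking I/O, here a local socket pair stands in for a real client connection (an assumption for the sake of a self-contained example):

```python
# Blocking I/O: recv() puts the caller to sleep until data is ready in the
# kernel, then the data is copied to user space and the call returns.
import socket

a, b = socket.socketpair()        # two connected sockets, blocking by default
b.sendall(b"hello")               # make data "ready" on the other end
data = a.recv(1024)               # blocks until the kernel has data to copy
print(data)                       # b'hello'
a.close(); b.close()
```

Had `b` not sent anything, `a.recv()` would simply sleep, which is exactly the waiting state described above.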

2. Non-blocking I/O

Setting a socket to non-blocking tells the kernel: when a requested I/O operation cannot complete, do not put the process to sleep, return an error instead. Our I/O code then tests repeatedly whether the data is ready, and keeps testing until it is. This continuous testing consumes a great deal of CPU time, so general-purpose Web servers do not use this I/O model.
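The busy-polling loop just described can be sketched as follows (again using a local socket pair as a stand-in connection, with data "arriving" after a few failed polls):

```python
# Non-blocking I/O: recv() returns an error immediately (BlockingIOError,
# i.e. EWOULDBLOCK) instead of sleeping, so the caller polls in a loop.
import socket

a, b = socket.socketpair()
a.setblocking(False)              # tell the kernel: never sleep this socket

attempts = 0
data = None
while data is None:
    try:
        data = a.recv(1024)       # returns at once if nothing is ready
    except BlockingIOError:       # not ready yet: keep testing (burns CPU)
        attempts += 1
        if attempts == 3:         # simulate the data arriving later
            b.sendall(b"ready")
print(data, attempts)
a.close(); b.close()
```

Every iteration that raises `BlockingIOError` is a wasted poll; with thousands of sockets this is exactly the CPU cost the text warns about.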

3.I/O multiplexing (SELECT and poll)

The I/O multiplexing model uses the select or poll function, or the epoll function (supported since the Linux 2.6 kernel). These calls also block the process, but unlike blocking I/O they can wait on many I/O operations at once: a single call can watch multiple descriptors for readability and writability, and the real I/O functions are invoked only when data is actually readable or writable.
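A minimal sketch of multiplexing with select(): one blocking call watches several descriptors at once and reports only the ready ones (local socket pairs stand in for client connections):

```python
# I/O multiplexing: a single select() call blocks on many descriptors and
# returns the subset that is ready, so real I/O is only done when it can
# succeed immediately.
import select
import socket

a1, b1 = socket.socketpair()      # connection 1: stays idle
a2, b2 = socket.socketpair()      # connection 2: will receive data
b2.sendall(b"ping")               # only connection 2 has data pending

# blocks until at least one watched socket is readable (1 s timeout)
readable, _, _ = select.select([a1, a2], [], [], 1.0)

msg = a2.recv(1024)               # safe: select reported a2 as ready
print([s is a2 for s in readable], msg)
for s in (a1, b1, a2, b2):
    s.close()
```

Note that select still scans its whole fd list internally on every call; removing that scan is what distinguishes epoll, discussed later.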

4. Signal-driven I/O (SIGIO)

First, we enable signal-driven I/O on the socket and install a signal handler; the process then continues to run without blocking. When the data is ready, the process receives a SIGIO signal, and the signal handler can call the I/O functions to process the data.
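A Linux-specific sketch of this model using a pipe (the O_ASYNC/F_SETOWN mechanics shown here are the classic Linux way to request SIGIO delivery; exact behavior varies by platform):

```python
# Signal-driven I/O: ask the kernel to deliver SIGIO when the descriptor
# becomes readable; the process keeps running, and the handler does the
# actual read once data is ready. Linux-specific.
import fcntl
import os
import signal
import time

ready = []

def on_sigio(signum, frame):
    ready.append(os.read(r, 1024))   # data is ready: read it in the handler

signal.signal(signal.SIGIO, on_sigio)

r, w = os.pipe()
fcntl.fcntl(r, fcntl.F_SETOWN, os.getpid())          # deliver signals to us
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_ASYNC)    # enable SIGIO on readiness

os.write(w, b"event")     # data arrives -> kernel raises SIGIO
time.sleep(0.1)           # the process is free to do other work meanwhile
print(ready)
os.close(r); os.close(w)
```

The `time.sleep` stands in for "other work": the point is that nothing blocks on the descriptor itself; the kernel interrupts us when there is something to read.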

5. Asynchronous I/O (Aio_ series functions for Posix.1)

With an asynchronous procedure call, the caller does not get the result immediately. The part that actually handles the call notifies the caller of the result of the input/output operation, via state, notification, or callback, after it has completed.

6. I/O model summary

As we can see, the further down the list, the less blocking there is, and the better the theoretical efficiency. Of the five I/O models, the first three are synchronous I/O and the latter two are asynchronous I/O.

Synchronous I/O:

    • Blocking I/O

    • Non-blocking I/O

    • I/O multiplexing (SELECT and poll)

asynchronous I/O:

    • Signal-driven I/O (SIGIO) (semi-asynchronous)

    • Asynchronous I/O (the POSIX.1 aio_ functions) (true async)

Differences between asynchronous I/O and signal-driven I/O:

    • In signal-driven I/O, the kernel notifies our application, by sending SIGIO, when the copy can begin; the copy itself is still done by the application.

    • In asynchronous I/O, the kernel notifies our application only after it has completed all of the operations itself.

With the comparison of the five models clear, let's look at their concrete implementations.

VII. Implementations of the Linux I/O Models

1. The main implementation mechanisms are:

    • Select

    • Poll

    • Epoll

    • Kqueue

    • /dev/poll

    • IOCP

Note: IOCP is implemented by Windows; select, poll, and epoll by Linux; kqueue by FreeBSD; and /dev/poll by Sun's Solaris. select and poll correspond to the 3rd model (I/O multiplexing), and IOCP corresponds to the 5th (asynchronous I/O). What about epoll, kqueue, and /dev/poll? They are essentially the same model as select, just more advanced; they can be seen as having some features of the 4th model (signal-driven I/O), such as the callback mechanism.

2. Why are epoll, kqueue, and /dev/poll more advanced than select?

The answer is that they do not poll; they use callbacks instead. Think about it: when there are many sockets, every select() call completes its dispatch by traversing up to FD_SETSIZE sockets, scanning all of them regardless of which are active, which wastes a lot of CPU time. If instead a callback can be registered on each socket, the relevant operation runs automatically when the socket becomes active, avoiding the polling; that is exactly what epoll, kqueue, and /dev/poll do. If that is hard to picture, here is a real-world example. Suppose you are at university, living in a dormitory building with many rooms, and a friend comes to visit you. The select-style dorm matron takes your friend from room to room until she finds you. The epoll-style matron first writes down each student's room number; when your friend arrives, she simply tells them which room you live in, instead of walking them through the whole building. If 10,000 people each come looking for their friend in the building, which matron is more efficient, the select version or the epoll version, is self-evident. Likewise, in a high-concurrency server, polling I/O is one of the most time-consuming operations, and the relative performance of select versus epoll and /dev/poll is just as clear.
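The "write down the room number once" idea can be seen directly in the epoll API (Linux-only): descriptors are registered once, and each poll returns only the ones that are actually active, instead of scanning every registered fd:

```python
# epoll readiness notification: register descriptors once, then poll()
# returns only the active ones; no per-call scan of all registered fds.
import os
import select

ep = select.epoll()
pipes = [os.pipe() for _ in range(5)]          # five (read_fd, write_fd) pairs
for r, _ in pipes:
    ep.register(r, select.EPOLLIN)             # "write down the room number"

os.write(pipes[3][1], b"x")                    # only pipe 3 becomes active

events = ep.poll(1.0)                          # list of (fd, event_mask)
active = [fd for fd, _ in events]
print(active == [pipes[3][0]])                 # only pipe 3's read end reported

ep.close()
for r, w in pipes:
    os.close(r); os.close(w)
```

With 5 pipes the difference from select is invisible; with tens of thousands of mostly idle connections, returning just the active list is what makes the approach scale.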

3. Windows or *nix (IOCP, or kqueue, epoll, /dev/poll)?

Admittedly, Windows's IOCP is very good, and few systems support asynchronous I/O; but because of the limitations of the platform itself, large servers still run on UNIX. Also, as mentioned above, kqueue, epoll, and /dev/poll have one more blocking step than IOCP: the copy of data from the kernel to the application layer, which is why they cannot be counted as asynchronous I/O. That small extra blocking step is insignificant, however; kqueue, epoll, and /dev/poll all perform very well.

4. A few points worth summarizing

    • Only IOCP (Windows implementation) is asynchronous I/O, and other mechanisms are more or less blocked.

    • select (Linux implementation) is inefficient because it must poll on every call. But inefficiency is relative, depends on the situation, and can be mitigated by good design

    • epoll (Linux implementation), kqueue (FreeBSD implementation), and /dev/poll (Solaris implementation) are the Reactor pattern; IOCP is the Proactor pattern

    • Apache supported only the select model before version 2.2.9; from 2.2.9 on it also supports the epoll model

    • Nginx Support Epoll Model

    • The Java NIO package uses the select model

VIII. Apache's Working Modes

1. Apache's three working modes

We all know that Apache has three working modules: prefork, worker, and event.

    • Prefork: multi-process; each request is answered by one process. This mode uses the select mechanism for notification.

    • Worker: multi-threaded; one process spawns multiple threads and each thread answers one request. The notification mechanism is still select, but it can accept more requests.

    • Event: based on the asynchronous I/O model; one process or thread answers multiple user requests, implemented with event-driven notification (i.e. the epoll mechanism).

2. How prefork works

If you do not explicitly specify an MPM with "--with-mpm", prefork is the default MPM on Unix platforms. The pre-forked child processes it uses are the same pattern used in Apache 1.3. Prefork itself does not use threads; version 2.0 kept it, on the one hand, to maintain compatibility with version 1.3, and on the other because prefork handles different requests with separate, mutually independent child processes, making it one of the most stable MPMs.

3. How worker works

Compared with prefork, worker is the new MPM in version 2.0 that supports a hybrid multi-threaded, multi-process model. Because it uses threads for processing, it can handle relatively more requests while consuming fewer system resources than a process-based server. Yet worker still uses multiple processes, each spawning multiple threads, to gain the stability of the process model. This MPM's way of working was to be the direction of Apache 2.0's development.

4. The event module, based on the event mechanism

One process responds to multiple user requests, using a callback mechanism to make sockets reusable. Upon receiving a request, the process does not handle it itself but hands it directly to other mechanisms, and is told via epoll notification when the request has completed; throughout, the process itself stays idle and can keep receiving user requests. Thus one process can respond to many user requests, supporting large numbers of concurrent connections while consuming fewer resources.

IX. Requirements for a High-Concurrency Web Server

There are several basic conditions:

1. Thread-based: one process spawns multiple threads, and each thread responds to one user request.

2. Event-based model: one process handles multiple requests, and the user is notified via the epoll mechanism when a request completes.

3. Disk-based AIO (asynchronous I/O)

4. Support for mmap memory mapping. In a traditional Web server, when a page is to be served, the disk page is first read into the kernel cache and then copied from the kernel cache into the Web server's buffer. The mmap mechanism instead maps the kernel cache and the disk file together, so the Web server can access the page content directly; there is no need to first load the page from disk into the kernel cache and then copy it out again.
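As a minimal sketch of the mmap idea, the file's pages are mapped into the process's address space, so the server reads the cached pages directly instead of copying them via read() into a separate user-space buffer (the file content here is a made-up stand-in for a static page):

```python
# mmap: map a file's pages into the process address space and access them
# directly, avoiding the extra kernel-cache -> user-buffer copy of read().
import mmap
import os
import tempfile

# create a small "static page" on disk (hypothetical content)
fd, path = tempfile.mkstemp()
os.write(fd, b"<html>hello</html>")
os.close(fd)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        body = m[:]               # direct access to the mapped pages
print(body)
os.unlink(path)
```

Slicing `m` reads straight out of the mapping; for a Web server, that mapped region can be handed to the network stack without an intermediate application buffer.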

Nginx happens to support all of the features above. So when the official Nginx site says Nginx supports 50,000 concurrent connections, there is a basis for it. That covers the fundamentals; now for today's main subject, Nginx.

X. Nginx in Detail

1. Introduction

Traditionally, a Web server based on the process or thread model handles each concurrent connection request with a separate process or thread, which inevitably blocks during network and I/O operations; another corollary is low utilization of memory and CPU. Spawning a new process/thread requires preparing its runtime environment in advance, including allocating heap and stack memory for it and creating a new execution context. All this consumes CPU, and too many processes/threads cause thread thrashing and frequent context switching, which degrades system performance further. Nginx (Engine X) is a different kind of high-performance Web server and reverse proxy. Its main focus is high performance and high-density utilization of physical computing resources, so it adopts a different architectural model. Inspired by the advanced event-based processing mechanisms in various operating system designs, Nginx uses a modular, event-driven, asynchronous, single-threaded, non-blocking architecture, with extensive use of multiplexing and event notification. In Nginx, connection requests are handled by a handful of worker processes, each containing only one thread, in an efficient run-loop; each worker can handle thousands of concurrent connections and requests in parallel.

2. How Nginx works

Nginx runs several processes simultaneously, on demand: one master process (master) and several worker processes (worker), plus, when caching is configured, a cache loader process and a cache manager process. All processes contain only one thread, and inter-process communication is achieved mainly through the "shared memory" mechanism. The master process runs as root; the workers, cache loader, and cache manager should run as a non-privileged user.

The master process performs the following tasks:

    • Reading and verifying the configuration;

    • Creating, binding, and closing sockets;

    • Starting, terminating, and maintaining the configured number of worker processes;

    • Reconfiguring operating features without interrupting service;

    • Controlling non-disruptive program upgrades, activating new binaries and rolling back to older versions when needed;

    • Re-opening log files;

    • Compiling embedded Perl scripts;

The main tasks of the worker processes include:

    • Receiving, passing on, and processing connections from clients;

    • Providing reverse proxying and filtering;

    • Any other task Nginx can perform;

Note: if the load is CPU-intensive, for example SSL or compression, the number of workers should equal the number of CPUs; if the load is mainly I/O-intensive, for example serving large amounts of content to clients, the number of workers should be 1.5 to 2 times the number of CPUs.
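The process model and worker-count guidance above map onto a few nginx.conf directives; a hedged sketch (the values are illustrative examples, not recommendations):

```nginx
# master/worker model: one master, N single-threaded workers
user  nginx;              # workers run as a non-privileged user
worker_processes  4;      # ~ number of CPUs for CPU-bound loads
                          # (1.5-2x the CPUs for I/O-bound loads)

events {
    use epoll;                    # event notification mechanism on Linux
    worker_connections  10240;    # concurrent connections per worker
}
```

The master reads this file, binds the sockets, and maintains the configured number of workers, exactly as listed in its task list above.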

3. Nginx architecture

Nginx's code consists of a core and a series of modules. The core mainly provides the basic functions of the Web server and the web and mail reverse-proxy functions; it also enables the network protocols, creates the necessary runtime environment, and ensures smooth interaction between the modules. Most protocol-related functions, and the functions specific to a given application, are implemented by Nginx's modules. These modules fall roughly into the categories of event modules, phase handlers, output filters, variable handlers, protocol modules, upstream, and load balancing, which together make up Nginx's HTTP functionality. The event module provides an OS-independent event notification mechanism (different operating systems differ in their event mechanisms), such as kqueue or epoll. The protocol modules are responsible for letting Nginx establish sessions with clients over HTTP, TLS/SSL, SMTP, POP3, and IMAP. Within Nginx, a request is processed through a pipeline, or chain, of modules; in other words, each function or operation is implemented by its own module, for example compression, communicating with an upstream server via the FastCGI or uwsgi protocol, or establishing a session with memcached.

4. Basic Nginx features

    • Processing static files, index files and automatic indexing;

    • Reverse proxy acceleration (no caching), simple load balancing and fault tolerance;

    • FastCGI, simple load balancing and fault tolerance;

    • Modular structure. Filters include gzip compression, byte ranges, chunked responses, and the SSI filter; in the SSI filter, multiple sub-requests to the same proxy or FastCGI backend are processed concurrently;

    • SSL and TLS SNI support;

5. Nginx IMAP/POP3 proxy features

    • Redirect users to the IMAP/POP3 backend using an external HTTP authentication server;

    • Use an external HTTP authentication server to authenticate the user after the connection is redirected to the internal SMTP backend;

    • Authentication methods:

    • POP3: USER/PASS, APOP, AUTH LOGIN/PLAIN/CRAM-MD5;

    • IMAP: LOGIN;

    • SMTP: AUTH LOGIN/PLAIN/CRAM-MD5;

    • SSL support;

    • STARTTLS and STLS support in IMAP and POP3 modes;
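The mail-proxy features above can be sketched with Nginx's `mail` context. The directives are from the standard mail modules; the authentication endpoint and listen ports are hypothetical:

```nginx
mail {
    # external HTTP authentication server that decides which backend to use
    auth_http 127.0.0.1:9000/auth;

    server {
        listen    110;
        protocol  pop3;
        pop3_auth plain apop cram-md5;   # POP3 authentication methods
    }

    server {
        listen    143;
        protocol  imap;
        starttls  on;                    # STARTTLS support in IMAP mode
    }

    server {
        listen    25;
        protocol  smtp;
        smtp_auth login plain cram-md5;  # SMTP authentication methods
    }
}
```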

6.Nginx Supported operating systems

    • FreeBSD 3.x, 4.x, 5.x, 6.x i386; FreeBSD 5.x, 6.x AMD64;

    • Linux 2.2, 2.4, 2.6 i386; Linux 2.6 amd64;

    • Solaris 8 i386; Solaris 9 i386 and sun4u; Solaris 10 i386;

    • MacOS X (10.4) PPC;

    • Windows-compiled versions support Windows-series operating systems;

7.Nginx Structure and expansion

    • One master process and multiple worker processes; worker processes run as a non-privileged user;

    • Kqueue (FreeBSD 4.1+), Epoll (Linux 2.6+), RT Signals (Linux 2.2.19+),/dev/poll (Solaris 7 11/99+), select, and poll support;

    • Supported kqueue features include EV_CLEAR, EV_DISABLE (to temporarily disable an event), NOTE_LOWAT, EV_EOF, the amount of available data, and error codes;

    • Sendfile (FreeBSD 3.1+), Sendfile (Linux 2.2+), Sendfile64 (Linux 2.4.21+), and Sendfilev (Solaris 8 7/01+) support;

    • Accept filter (FreeBSD 4.1+) and TCP_DEFER_ACCEPT (Linux 2.4+) support;

    • 10,000 inactive HTTP keep-alive connections require only about 2.5 MB of memory;

    • Minimal data copy operation;
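The master/worker structure and event-mechanism selection described above correspond directly to a few top-level directives. A minimal sketch (the worker count and user name are hypothetical; if `use` is omitted, Nginx picks the best available mechanism automatically):

```nginx
worker_processes 4;       # one master process forks four worker processes
user nginx;               # workers drop privileges to this non-privileged user

events {
    use epoll;            # Linux 2.6+; kqueue on FreeBSD, /dev/poll on Solaris, etc.
    worker_connections 10240;
}

http {
    sendfile  on;         # zero-copy file transmission where the OS supports it
    tcp_nopush on;        # coalesce headers and file data into fewer packets
}
```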

8.Nginx Other HTTP Features

    • Virtual Host service based on IP and name;

    • GET interface of Memcached;

    • Support for keep-alive and pipelined connections;

    • Flexible and simple configuration;

    • Reconfiguration and online upgrades without interrupting client request processing;

    • Customizable access logs, buffered log writes, and fast log rotation;

    • Redirection for 4xx and 5xx error codes;

    • URL rewriting via the PCRE-based rewrite module;

    • Access control based on client IP address and HTTP Basic authentication;

    • PUT, DELETE, and MKCOL methods;

    • Support for FLV (Flash video);

    • Bandwidth limit;
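Several of the HTTP features listed above (name-based virtual hosts, rewriting, access control, basic authentication, bandwidth limiting, and error-code redirection) can be sketched in one server block. The host name, paths, and network ranges below are hypothetical:

```nginx
server {
    listen 80;
    server_name www.example.com;              # name-based virtual host

    access_log /var/log/nginx/www.log combined buffer=32k;  # buffered log writes

    location /old/ {
        rewrite ^/old/(.*)$ /new/$1 permanent;  # PCRE-based rewrite
    }

    location /private/ {
        allow 192.168.1.0/24;                 # IP-based access control
        deny  all;
        auth_basic "Restricted";              # HTTP basic authentication
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location /downloads/ {
        limit_rate 100k;                      # per-connection bandwidth limit
    }

    error_page 404 /404.html;                 # 4xx/5xx error-code redirection
}
```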

9. Why Choose Nginx

    • Under high connection concurrency, Nginx is a good alternative to the Apache server: Nginx is one of the software platforms frequently chosen by virtual-hosting providers in the United States. It can support up to 50,000 concurrent connections, thanks to its use of epoll and kqueue as its event models.

    • Nginx as a load-balancing server: Nginx can serve internal Rails and PHP applications directly, and it can also act as an HTTP proxy server for external traffic. Nginx is written in C, so it performs much better than Perlbal in both system resource overhead and CPU efficiency.

    • As a mail proxy: Nginx is also a very good mail proxy server (indeed, one of the earliest purposes it was developed for was as a mail proxy server); Last.fm has described its successful experience using it this way.

    • Nginx is very simple to install, has a very concise configuration file (which can even embed Perl syntax), and has very few bugs: Nginx is particularly easy to start, and it can run almost without interruption, even for months at a time without a restart. You can also upgrade the software version without any interruption of service.

    • Nginx was created primarily to solve the C10K problem.
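To illustrate the load-balancing role mentioned above, a minimal `upstream` sketch (backend addresses are hypothetical; round-robin is the default balancing method):

```nginx
upstream app_backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080 backup;   # fault tolerance: used only if the others fail
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;   # requests distributed round-robin
    }
}
```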

Okay, that concludes the theory part of Nginx. In the next blog post we will explain Nginx installation and application in detail. I hope you have gained something from this...

This article is from the "Share your knowledge..." blog; please keep this source: http://freeloda.blog.51cto.com/2033581/1285332

