If a request is sent directly to a synchronous back end, a process is occupied (as in Apache's prefork mode) from the moment the request is received until the response has been fully sent. With slow client connections, that process spends most of its time in meaningless waiting rather than in actual processing. Nginx's advantage here is its asynchronous, non-blocking model: through an event-based approach it can maintain and process many requests simultaneously, so the back end only needs to do the logical computation, and the time saved on waiting can be used to handle more requests.
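A minimal sketch of this setup, assuming a synchronous back end listening on 127.0.0.1:8080 (the address and port are illustrative): nginx accepts and buffers slow client connections asynchronously, while the back-end process is only occupied for the duration of the actual computation.

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # Buffer the back end's response so the back-end worker is
        # released immediately, even if the client reads slowly.
        proxy_buffering on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```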
Note that if nginx and the back end are deployed on the same machine, this will not improve performance; under high concurrency it can even reduce it. Where the work was originally done over a single TCP connection, the proxy now adds an extra hop per request, so under high concurrency there is necessarily some performance loss.
A reverse proxy improves the performance of a site mainly in three ways:
1. A reverse proxy can be understood as layer-7 (application-layer) load balancing. With load balancing it is very convenient to scale out a server cluster horizontally, raising the cluster's overall concurrency and its capacity to withstand load.
2. A reverse proxy server usually provides a local cache; caching static resources effectively reduces the pressure on the back-end servers and thereby improves performance.
3. HTTP compression: after compression, network traffic decreases, so the same bandwidth can serve more users.
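The three techniques above can be combined in one configuration. This is a sketch, not a tuned setup: the upstream addresses, cache path, cache sizes, and compressed MIME types are all illustrative assumptions.

```nginx
http {
    # (1) Layer-7 load balancing across a horizontally scaled cluster.
    upstream app_cluster {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    # (2) Local cache for static resources.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                     max_size=1g inactive=60m;

    # (3) HTTP compression to reduce network traffic.
    gzip on;
    gzip_types text/css application/javascript application/json;

    server {
        listen 80;

        location /static/ {
            proxy_cache static_cache;
            proxy_cache_valid 200 10m;
            proxy_pass http://app_cluster;
        }

        location / {
            proxy_pass http://app_cluster;
        }
    }
}
```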
Nginx also supports simple load balancing and fault tolerance out of the box.
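These features are expressed through the `upstream` block's built-in directives. A brief sketch, with assumed server addresses:

```nginx
upstream backend {
    # Weighted round-robin: this server receives twice the traffic.
    server 10.0.0.11:8080 weight=2;
    # Fault tolerance: after 3 failed attempts within 30s, skip this
    # server for 30s before trying it again.
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    # Used only when the servers above are all unavailable.
    server 10.0.0.13:8080 backup;
}
```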
The above describes the Nginx reverse proxy and the main aspects of how it works; I hope it is helpful to readers interested in the PHP tutorial.