Research on the Configuration and Deployment of the High-Performance Web Server Nginx (11): Two Major Application Scenarios of the Memcached Module


Application Scenario 1

I recently used Nginx's memcached module in a project, so I am covering the memcached module ahead of schedule in this tutorial series. On another note, I've found that many sites have been scraping my blog posts with collectors lately. Thanks for helping spread the word; no need to thank me. I would still appreciate it, though, if reprints of my posts credited the source, to satisfy my small vanity.


Consider the following application scenario:

Clients access the server through an Nginx reverse proxy. Each time a client accesses the server it uploads a file; the URL at which the file can be accessed is then broadcast to all clients, and every client loads the file.


Analysis:

With so many clients loading the file at the same time, the pressure on the server is bound to be great. You may well say: with an Nginx reverse proxy in front, the resources clients request can be cached by Nginx, reducing the load on the server. Note, however, that with an ordinary reverse-proxy cache, content is only pulled from the server into Nginx's cache when a client request actually arrives at Nginx. After the broadcast, all clients load the file simultaneously; by the time the first request reaches the server and populates the cache, the other clients' requests have likely already started, if not nearly finished. The load still falls on the server. What can we do?


Solution Overview:

When a client uploads a file to the server, write the file content into the cache in front of Nginx as well. By the time the broadcast goes out and the clients' load requests arrive at Nginx, the cache hit rate is essentially 100%.
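Assuming the business service that receives the upload speaks the standard memcached text protocol, this pre-warming step can be sketched in Python. The helper names (`build_set_command`, `warm_cache`) are illustrative, not part of the original deployment; the host and port match the Cache-server address given below:

```python
import socket

def build_set_command(key: str, value: bytes, exptime: int = 0) -> bytes:
    """Build a memcached text-protocol 'set' command for the given key/value."""
    header = f"set {key} 0 {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

def warm_cache(key: str, value: bytes,
               host: str = "192.168.0.4", port: int = 14000) -> bool:
    """Write the uploaded file into memcached BEFORE the URL is broadcast,
    so that the clients' subsequent load requests all hit the cache."""
    with socket.create_connection((host, port), timeout=3) as conn:
        conn.sendall(build_set_command(key, value))
        return conn.recv(1024).startswith(b"STORED")
```

Because the Nginx configuration below uses `set $memcached_key "$uri";`, the key written here must be the exact URI the clients will later request (e.g. `/files/demo.png`).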


Deployment Solution:

(1) Reverse-server (192.168.0.1): reverse proxy server, implemented with Nginx. Exposes port 11000 externally; forwards each HTTP request to the cache service on Cache-server's port 14000 and returns the result on a hit, or passes the request on to port 12000 of Store-server on a miss.

(2) Store-server (192.168.0.2): file storage server, implemented with FastDFS. Port 12000 serves downloads, receiving HTTP requests from Reverse-server; port 12500 serves uploads.

(3) Process-server (192.168.0.3): business processing server. Exposes port 13000 externally, receives files sent by clients, and stores each file on Store-server via its port 12500.

(4) Cache-server (192.168.0.4): file cache server, implemented with memcached. Exposes port 14000 externally, receiving read operations from Reverse-server and write operations from Process-server.



Configuration Solution:

(1) For the configuration and deployment of FastDFS, see the FastDFS wiki on Google Code.

(2) Deploying memcached is simple: wget, tar, ./configure, make, make install, and you are done.

(3) Process-server was implemented by me; since it is not general-purpose, I won't go into it here.

(4) On Reverse-server, define a server block inside the http block of the Nginx configuration file to handle these requests, configured as follows:

  server {
      listen 11000;
      server_name localhost;
      default_type text/html;

      location / {
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header Host $http_host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          if ($request_method = POST) {
              proxy_pass http://192.168.0.2:12000;
              break;
          }
          set $memcached_key "$uri";
          memcached_pass 192.168.0.4:14000;
          error_page 501 404 502 = /fallback$uri;
      }

      location /fallback/ {
          internal;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header Host $http_host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_redirect off;
          proxy_pass http://192.168.0.2:12000;
      }
  }


Details

The Nginx memcached module only provides read access to memcached; it does not provide writes. If the corresponding key is not in memcached (a miss), 404 is returned; if the request cannot reach the memcached server at all, a 502 error is returned. We can therefore arrange for these errors to hand the request over to Store-server, so that Store-server is only accessed on a cache miss. In this way the cache server offloads a great deal of load from the storage server.
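For intuition, what the memcached module issues on each request is essentially a protocol-level `get`: a miss comes back as a bare `END`, which Nginx maps to the 404 that triggers the fallback. A rough Python sketch of that exchange (the function names are mine, not Nginx internals):

```python
def build_get_command(key: str) -> bytes:
    """The read that nginx's memcached module performs, in protocol terms."""
    return f"get {key}\r\n".encode()

def parse_get_response(raw: bytes):
    """Return the cached value on a hit, or None on a miss.
    A miss is what nginx turns into the 404 that drives error_page."""
    if raw.startswith(b"END"):
        return None  # miss -> nginx serves the /fallback location instead
    # hit: "VALUE <key> <flags> <bytes>\r\n<data>\r\nEND\r\n"
    header, rest = raw.split(b"\r\n", 1)
    nbytes = int(header.split()[-1])
    return rest[:nbytes]
```

This also makes clear why the module is read-only: the protocol exchange above is all it implements, which is exactly why writes have to come from somewhere else (Process-server in this deployment).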



Application Scenario 2


Scenario 2 takes Scenario 1 and replaces the miss-handling logic with write operations to memcached. The server deployment for this scenario is as follows:


(1) Reverse-server: clients access the server's port 11000; the request is forwarded to Cache-server's port 14000. On a cache hit the response is returned directly; otherwise the request is handed to Process-server's port 13000 for further processing.

(2) Data-server: exposes port 12000 for reading data.

(3) Process-server: exposes port 13000 to receive HTTP requests forwarded by Reverse-server. It queries Data-server's port 12000, combines the resulting value with the key from Reverse-server to form a key-value pair, and writes that pair to Cache-server via port 14000.

(4) Cache-server: exposes port 14000 externally for read and write operations.
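Step (3) is a classic read-through cache fill. A minimal sketch, with the Data-server lookup and the memcached write abstracted as callables (`fetch` and `store` are hypothetical stand-ins, not part of the original deployment):

```python
from typing import Callable, Optional

def cache_fill(key: str,
               fetch: Callable[[str], Optional[bytes]],
               store: Callable[[str, bytes], None]) -> Optional[bytes]:
    """Read-through fill: on a cache miss, query the backing store for `key`
    and, if found, write the key-value pair into the cache for later hits."""
    value = fetch(key)    # e.g. HTTP GET to Data-server's port 12000
    if value is None:
        return None       # not in the backing store either
    store(key, value)     # e.g. memcached 'set' on Cache-server's port 14000
    return value
```

With in-memory dicts standing in for the two servers, the flow looks like:

```python
data = {"/report": b"contents"}   # Data-server stand-in
cache = {}                        # Cache-server stand-in
cache_fill("/report", data.get, cache.__setitem__)  # fills cache["/report"]
```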


In the Nginx memcached module configuration for this scenario, POST requests and error_page fallbacks are no longer handed to Store-server as in Scenario 1; they all go to Process-server instead. The reason: in Scenario 1 the cache is written in advance, whereas in Scenario 2 the cache write happens only after a read of the cache has missed. Note which business requirements each approach suits.


