The memc-nginx-module is useful for pages with heavy traffic, especially on systems with large instantaneous spikes that Jetty alone cannot keep up with. In such cases, the memc-nginx-module and the srcache-nginx-module can be used to store the requested page data in memcached, which greatly improves the system's concurrency.
Memc-nginx-module installation
Since ngx_openresty, maintained by agentzh, bundles all of these modules, the most convenient option is to use openresty directly.
The installation process is as follows:
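A typical build looks like the following (the download URL and version are illustrative; ngx_openresty-1.2.3.8 is the release discussed later in this post, and the modules used here are bundled and enabled by default):

```shell
# Download and unpack ngx_openresty (adjust the version as needed)
wget http://openresty.org/download/ngx_openresty-1.2.3.8.tar.gz
tar -xzf ngx_openresty-1.2.3.8.tar.gz
cd ngx_openresty-1.2.3.8

# Configure, build, and install; memc-nginx-module and
# srcache-nginx-module ship with openresty
./configure --prefix=/usr/local/openresty
make
make install
```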
Ngx_openresty is installed into the /usr/local/openresty directory by default.
Modify the nginx.conf file: in the http block, configure the memc-nginx-module by adding a memc upstream and enabling keepalive:
upstream memc_server {
    server 127.0.0.1:11211;
    keepalive 512;
}
In the server block, configure the location for the pages to be cached:
location ~ ^/qiang/[0-9]*\.html$ {
    set $qiang_key $request_uri;
    srcache_fetch GET /memc $qiang_key;
    srcache_store PUT /memc $qiang_key;
    add_header X-Cached-From $srcache_fetch_status;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header Accept-Encoding "";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080;
}
location /memc {
    internal;
    memc_connect_timeout 100ms;
    memc_send_timeout 100ms;
    memc_read_timeout 100ms;
    set $memc_key $query_string;
    # cache expiration time, in seconds
    set $memc_exptime 30;
    memc_pass memc_server;
}
Description
When reading the Nginx cache configuration above, be aware that directives are not executed simply in the order they appear in the configuration file. Each directive is bound to a phase of the Nginx request-processing cycle. The directives configured above execute roughly in this order:
srcache_fetch --> proxy_pass --> srcache_store
The preceding configuration requires srcache-nginx-module 0.14 or later; ngx_openresty-1.2.3.8 ships with srcache-nginx-module 0.16.
Restart the application and verify
Restart nginx and access the specified URL to verify that the memc-nginx-module takes effect. The verification method is as follows:
In the configuration file above
add_header X-Cached-From $srcache_fetch_status;
This directive writes the cache-lookup status into a custom HTTP response header. You can therefore inspect the response headers of the request in Safari, Chrome, Firefox, or another browser to see whether the cache was hit.
$srcache_fetch_status will be either "HIT" or "MISS".
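The same check can be done from the command line with curl (the host and page URL below are examples; substitute any page matched by the cached location):

```shell
# First request: expect "X-Cached-From: MISS"
# (the page is fetched from Jetty and stored in memcached)
curl -s -D - -o /dev/null http://localhost/qiang/123.html | grep X-Cached-From

# A second request within the 30-second expiry window
# should show "X-Cached-From: HIT"
curl -s -D - -o /dev/null http://localhost/qiang/123.html | grep X-Cached-From
```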
Performance testing
A stress test against the cached pages reached up to 2300 req/s. At that point the system's CPU, memory, and other resources were still largely idle, while the gigabit NIC was saturated; the bottleneck was no longer CPU or memory, which achieved the expected goal.
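A test of this kind can be run with a standard HTTP benchmarking tool; for example, with ApacheBench (the URL, request count, and concurrency level below are illustrative, not the exact parameters used above):

```shell
# 100 concurrent keep-alive connections, 100,000 requests
# against a single cached page
ab -c 100 -n 100000 -k http://localhost/qiang/123.html
```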
Update (2012-10-29):
1. In this solution, memcached and Nginx should be placed on the same server; otherwise the volume of data exchanged between the two is too large and leads to a NIC bottleneck.