Use the memc-nginx-module to cache highly concurrent pages


Jetty cannot keep up with requests for pages that receive heavy traffic, especially on systems with large instantaneous spikes. In such cases, you can use the memc-nginx-module together with the srcache-nginx-module to store the rendered page data in Memcached, which greatly improves the system's concurrency.

Memc-nginx-module installation

Since ngx_openresty, maintained by agentzh, already bundles all of these modules, using OpenResty directly is the most convenient option.
The installation process is as follows:

  • 1. Download and decompress ngx_openresty
    wget "http://agentzh.org/misc/nginx/ngx_openresty-1.2.3.8.tar.gz"
    tar -xvzf ngx_openresty-1.2.3.8.tar.gz
    cd ngx_openresty-1.2.3.8
  • 2. Install the libraries that OpenResty depends on before running configure:
    yum install readline-devel pcre-devel openssl-devel
  • 3. Compile and install OpenResty. Note that the concat module must be added here.
    ./configure --add-module=/home/tar/nginx_concat_module/trunk/
    make
    make install

By default, ngx_openresty is installed in the /usr/local/openresty directory.

Modify the nginx.conf file: in the http block, configure the memc-nginx-module by adding the memc_server upstream and enabling keepalive:
    upstream memc_server {
        server 127.0.0.1:11211;
        keepalive 512;
    }
In the server block, configure the pages to be cached:
    location ~ ^/qiang/[0-9]*\.html$ {
        set $qiang_key $request_uri;
        srcache_fetch GET /memc $qiang_key;
        srcache_store PUT /memc $qiang_key;
        add_header X-Cached-From $srcache_fetch_status;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header Accept-Encoding "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8080;
    }
    location /memc {
        internal;
        memc_connect_timeout 100ms;
        memc_send_timeout 100ms;
        memc_read_timeout 100ms;
        set $memc_key $query_string;
        # cache every entry for 30 seconds
        set $memc_exptime 30;
        memc_pass memc_server;
    }
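Not every request should touch the cache. If, for example, logged-in users must always see fresh pages, the srcache module's skip directives can bypass both the fetch and the store. A minimal sketch, assuming a hypothetical session cookie named "session" (the cookie name and rule are illustrative, not part of the original setup):

```nginx
location ~ ^/qiang/[0-9]*\.html$ {
    set $qiang_key $request_uri;

    # Hypothetical rule: skip the cache entirely when a "session" cookie is present.
    set $skip_cache 0;
    if ($http_cookie ~* "session=") {
        set $skip_cache 1;
    }
    srcache_fetch_skip $skip_cache;   # do not read from Memcached
    srcache_store_skip $skip_cache;   # do not write to Memcached

    srcache_fetch GET /memc $qiang_key;
    srcache_store PUT /memc $qiang_key;
    proxy_pass http://127.0.0.1:8080;
}
```

srcache_fetch_skip and srcache_store_skip are documented srcache-nginx-module directives; verify their behavior against the version bundled with your OpenResty.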

Description
When reading the cache configuration above, keep in mind that Nginx does not execute directives simply in the order they appear in the configuration file: each directive is bound to a phase of the Nginx request-processing cycle. The directives configured above execute roughly in this order:

srcache_fetch --> proxy_pass --> srcache_store
The preceding configuration requires srcache-nginx-module 0.14 or later.
The srcache-nginx-module version bundled with ngx_openresty-1.2.3.8 is 0.16.
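One behavior worth knowing when wiring srcache in front of a backend: per the srcache-nginx-module documentation, srcache honors the upstream's Cache-Control headers and only stores certain response statuses by default. If the Jetty backend emits Cache-Control: no-cache on pages you still want cached, you may need to override this. A hedged sketch, assuming these directives behave as documented in the bundled srcache version:

```nginx
location ~ ^/qiang/[0-9]*\.html$ {
    # Ignore the backend's Cache-Control/Expires headers when deciding to store.
    srcache_response_cache_control off;
    # Restrict storing to successful responses.
    srcache_store_statuses 200;

    set $qiang_key $request_uri;
    srcache_fetch GET /memc $qiang_key;
    srcache_store PUT /memc $qiang_key;
    proxy_pass http://127.0.0.1:8080;
}
```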
Restart the application and verify

Restart nginx and access the specified URL to verify that the memc-nginx-module takes effect. The verification method is as follows:
The configuration file above contains the line:

add_header X-Cached-From $srcache_fetch_status;

This directive writes the cache-lookup status into a custom HTTP response header. You can therefore inspect the response headers in Safari, Chrome, Firefox, or another browser's developer tools (or with curl -I) to see whether the cache was hit.
$srcache_fetch_status will be either "HIT" or "MISS".
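Checking headers by hand does not scale. To measure the hit ratio over time, the same $srcache_fetch_status variable can be written to the access log instead; a sketch (the log format name and log path are illustrative):

```nginx
http {
    # Record the cache status (e.g. HIT or MISS) with every request.
    log_format cache_log '$remote_addr [$time_local] "$request" '
                         '$status srcache=$srcache_fetch_status';
    access_log /usr/local/openresty/nginx/logs/access.log cache_log;
}
```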

Performance testing

A stress test against the cached pages reached about 2300 req/s. At that point, CPU and memory usage were still low while the Gigabit NIC was saturated, so the bottleneck was no longer CPU or memory, which met the expected goal.
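As a back-of-the-envelope check on that claim (assuming roughly 1 Gbps ≈ 125 MB/s of usable bandwidth, a simplification that ignores protocol overhead), one can compute how large each response must be for 2300 req/s to fill the link:

```shell
# 1 Gbps is about 125,000,000 bytes/s; divide by the observed 2300 req/s
# to get the average response size that would saturate the NIC.
echo $((125000000 / 2300))   # bytes per response, roughly 53 KB
```

So pages averaging around 50 KB are enough to saturate Gigabit at this request rate, consistent with the observation above.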

Update (2012-10-29):

1. In this solution, memcached and Nginx can be placed on the same server; otherwise, the volume of data exchanged between the two may be large enough to cause a NIC bottleneck.
