Windows + Nginx + IIS + Redis + Task.MainForm: Building a Distributed Architecture (Nginx + IIS Service Cluster)

Tags: nginx server, nginx reverse proxy, redis cluster

What I mainly want to share is how to set up a distributed architecture using Windows + Nginx + IIS + Redis + Task.MainForm. As the title suggests, this is not something one article can cover, so I intend to split it across several posts and build the distributed architecture step by step. Below is a brief introduction to the core nodes of the whole architecture; I hope you will give it plenty of likes:

. Architecture design diagram

. Building a service cluster with Nginx + IIS

. Redis storage of distributed shared sessions and the shared-session workflow

. Redis master-slave configuration and Sentinel management of multiple Redis clusters

. The Task.MainForm scheduled-task framework feeding data into the Redis cluster storage

These are the parts of the architecture that I consider the core; the database design is not included (please ignore that for now). First, here is the architecture design diagram:

The diagram above reflects my personal view. Now let's formally get into today's article (building a service cluster with Nginx + IIS), which covers:

. Summary of common basic Nginx configuration

. Using Nginx to build a static file caching service

. Building a service cluster with Nginx + IIS

Let's go through these step by step.

. Summary of common basic Nginx configuration

First of all, we need to download Nginx; which version suits which Windows system you can look up online. The version I use is nginx-1.10.1. After downloading, the directory structure looks like this:

The configuration file we need to understand and edit is nginx.conf under the conf folder; the other files in that directory can generally be left at their defaults. Open the file and ignore the lines starting with # (comments).

Events node:

worker_connections: default value 1024; the maximum number of simultaneous connections each Nginx worker process will handle.

http node:

include mime.types: refers to the mime.types file that sits next to nginx.conf; it lists the MIME types that can be served

default_type application/octet-stream: the default MIME type

keepalive_timeout: keep-alive connection timeout, in seconds

Server node:

listen: the port Nginx listens on

server_name: the server (service) name

location: routing settings (regular expressions are supported); the directives commonly used inside it are

proxy_connect_timeout: timeout for Nginx to connect to the back-end server (proxy connect timeout)

proxy_pass: the proxied address (the upstream name)

proxy_set_header: headers set so the back end receives the real host, client IP, and so on (Host, X-Real-IP, X-Forwarded-For)

Upstream node:

Set the server list for load balancing

Set the proxy address name (corresponding to the proxy_pass above)

Sets the load-balancing rule; common options are:

Round-robin: requests are distributed to the servers in turn (the default)

ip_hash: requests from the same client IP always go to the same back-end server, which can work around the session problem

fair: distributes requests by back-end response time, with shorter response times served first (provided by a third-party module)

weight: the larger the value, the more traffic that server receives

proxy_temp_path node: path of the proxy temp folder

proxy_cache_path node: path of the proxy cache folder (the cached files live here)

The information above covers most of what is needed to build a common load-balancing setup; for more detailed directives please refer to the official documentation.
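
To make the layout of these nodes concrete, here is a minimal skeleton of an nginx.conf assembled from the directives described above. It is only a sketch: the upstream name backend_pool, the ports 8081/8082, localhost, and the timeout values are placeholders of my own, not the configuration used later in this article.

events {
    worker_connections 1024;                 # max simultaneous connections per worker process
}

http {
    include       mime.types;                # MIME types that may be served
    default_type  application/octet-stream;  # default MIME type
    keepalive_timeout 65;                    # keep-alive timeout, in seconds

    upstream backend_pool {                  # load-balanced server list; the name is referenced by proxy_pass
        # ip_hash;                           # uncomment to pin each client IP to one back end
        server 127.0.0.1:8081 weight=1;
        server 127.0.0.1:8082 weight=2;      # larger weight = more traffic
    }

    server {
        listen      80;                      # port Nginx listens on
        server_name localhost;               # service name

        location / {                         # routing (regular expressions supported)
            proxy_connect_timeout 5;                             # back-end connection timeout
            proxy_pass http://backend_pool;                      # must match the upstream name
            proxy_set_header Host            $host;              # pass the real host
            proxy_set_header X-Real-IP       $remote_addr;       # pass the real client IP
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

The sections below follow the same structure, with the real paths, ports, and names used in this article.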

. Using Nginx to build a static file caching service

In a distributed architecture, css, js, and image files are usually cached to provide fast load times. From the architecture diagram published at the start of the article you can see that user A's requests reach the real service cluster only after being forwarded by the Nginx server. Making that extra hop just to fetch a css static file is noticeably slower than returning the file directly from the Nginx server, hence the need to cache static resources on the Nginx service. Let's look at the configuration required in the Nginx configuration file:

# list of load-balanced back-end servers
upstream shenniu.test.com {
    server 127.0.0.1:4041;
}

## cache ##
proxy_connect_timeout 5;
proxy_read_timeout 60;      # original value unreadable in the source; 60s is the Nginx default
proxy_send_timeout 5;
proxy_buffer_size 16k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 128k;
proxy_temp_file_write_size 128k;
proxy_temp_path D:/e/nginx-1.10.1/home/temp_dir;
proxy_cache_path D:/e/nginx-1.10.1/home/cache levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;
## end ##

Note that the domain name shenniu.test.com after the upstream node will be used again later. The server entries inside the node are IP:port pairs, e.g. server 127.0.0.1:4041 (the IP and port of the real site project). You also need to set the paths where cache files are saved: proxy_cache_path and proxy_temp_path.

Then, inside the server node, set listen to port 3031 and server_name to shenniu.test.com, and add the routing configuration for static resources:

location ~ .*\.(gif|jpg|png|css|js|flv|ico|swf)(.*) {
    #proxy_pass http://shenniu.file.com;
    proxy_pass http://shenniu.test.com;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_cache cache_one;
    proxy_cache_valid 200 302 1h;
    proxy_cache_valid 301 1d;
    proxy_cache_valid any 1m;
    expires 30d;    # cache duration: 30 days here
}

Note that the value of proxy_pass is the reverse-proxy address: http:// plus the upstream name above, i.e. http://shenniu.test.com. To resolve the shenniu.test.com domain name directly on this machine, find the hosts file under C:\Windows\System32\drivers\etc and add a line such as 127.0.0.1 shenniu.test.com; the domain name can then be used in a local browser. Next, add the page routing configuration:

location ~ .*(\/|\.(html|htm))(.*) {
    proxy_connect_timeout 5;    # nginx-to-back-end connection timeout (proxy connect timeout)
    proxy_pass http://shenniu.test.com;
    proxy_redirect default;
    # so the back end receives the real client IP, port, etc.
    proxy_set_header Host            $host;
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

Then we also need to publish a site in IIS named shenniu.stage01, whose IP and port match the entry in the upstream node above:

Then view the effect in the browser, accessing by IP and by domain name respectively:

OK, that was domain-name access configured through the local hosts file, but Nginx has not come into play yet: the port configured for the Nginx reverse proxy is 3031, the one listen specifies in the server node, so the address to visit is http://shenniu.test.com:3031/user/login. At this point that port cannot be reached in the browser, because we still need to start the Nginx service:
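
For reference, starting Nginx on Windows is done from its installation directory; a minimal command-line sketch (the path assumes the nginx-1.10.1 folder used in this article):

rem start Nginx from its installation directory (adjust the path to your install)
cd /d D:\e\nginx-1.10.1
start nginx

rem verify that the nginx.exe processes are running
tasklist /fi "imagename eq nginx.exe"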

OK, now Nginx is configured as the reverse proxy. On the first visit to the site through the proxy, cache files are generated under the directory we configured, D:/e/nginx-1.10.1/home/cache, so let's look at that folder:

This is where the cache files live; the folders shown here were generated by Nginx. Then visit the site in the browser a second time and press F12 to inspect the corresponding js, css, and other files:

At this point the source of these files is the Nginx service; yes, they are now coming from the Nginx cache.
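
If you want the F12 network panel to show explicitly whether a file came from the Nginx cache, one optional tweak (not part of the original configuration) is to expose the cache status as a response header inside the static-file location block above:

# add inside the static-file location; the response header will carry HIT / MISS / EXPIRED
add_header X-Cache-Status $upstream_cache_status;

After reloading and refreshing a couple of times, cached js and css responses should show X-Cache-Status: HIT.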

. Building a service cluster with Nginx + IIS

The static-file caching service above already uses Nginx for request distribution; now, building on that, we just need to add a bit more node information to set up a site service cluster. First, modify the upstream node to add another entry, like this:

# list of load-balanced back-end servers
upstream shenniu.test.com {
    server 127.0.0.1:4041;
    server 127.0.0.1:4040;
}

That is the only code change needed, because the static-file service above already added the page routing settings (scroll up to check). To demonstrate the distributed setup, we also need to configure in IIS, alongside shenniu.stage01 (IP + port: 127.0.0.1:4041), a similar site shenniu.stage02 (IP + port: 127.0.0.1:4040), with the login page titles labelled "System 01" and "System 02" so we can tell which site handled the request. After configuring it, let's reload the Nginx configuration:
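
One way to reload from the command line on Windows (a sketch assuming the same nginx-1.10.1 directory as before):

rem test the edited configuration, then reload it without stopping the service
cd /d D:\e\nginx-1.10.1
nginx -t
nginx -s reload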

Then access the reverse-proxy address http://shenniu.test.com:3031/user/login and refresh the page to see effects like these:

Accessing the same domain name, the first page's title is "System 01" and the second is "System 02"; you can see that the requests correspond to 127.0.0.1:4041 and 127.0.0.1:4040, i.e. the shenniu.stage01 and shenniu.stage02 sites I configured in IIS. Nginx is distributing requests across the sites, and the site service cluster has been built successfully.
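
If you later want one of the two sites to take more of the traffic, or need requests from the same client to stick to one site, the upstream node accepts the weight and ip_hash options described earlier; a sketch (not part of the original setup):

# give 127.0.0.1:4041 twice the traffic of 127.0.0.1:4040
upstream shenniu.test.com {
    server 127.0.0.1:4041 weight=2;
    server 127.0.0.1:4040 weight=1;
}

# or pin each client IP to one back end (session stickiness)
#upstream shenniu.test.com {
#    ip_hash;
#    server 127.0.0.1:4041;
#    server 127.0.0.1:4040;
#}

With the Redis shared session covered in the next article, ip_hash stickiness is usually no longer needed.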

What this post shared is only a simple Nginx + IIS cluster; the next article will explain Redis storage of distributed shared sessions and the shared-session workflow. Please look forward to it, and thank you for your support and likes.
