Build a distributed architecture with Windows + nginx + IIS + Redis + Task.MainForm (building a service cluster with nginx + IIS)

Source: Internet
Author: User
Tags: nginx server, nginx reverse proxy, redis cluster


This article is about building a distributed architecture with Windows + nginx + IIS + Redis + Task.MainForm. As the title suggests, this topic is too large for a single article, so I plan to cover it across several articles and implement the distributed architecture step by step. The core parts of the overall architecture are listed below; I hope you find them useful:

 

1. Architecture design diagram

2. Nginx + IIS service cluster

3. Redis for distributed, shared session storage

4. Redis master-slave configuration and Sentinel management of multiple Redis clusters

5. The Task.MainForm scheduling framework providing data for the Redis cluster storage

 

These are, in my view, the core pieces of the whole architecture; database design is not covered here. The architecture design diagram is shown below:

The above is my personal take. Now let's get to today's topic, building a service cluster with nginx + IIS:

1. Summary of common nginx basic configurations

2. Using nginx to build a static file cache service

3. Building a service cluster with nginx + IIS

 

Let's go through these one step at a time:

1. Summary of common nginx basic configurations

First, we need to download the nginx package for Windows (you can find the specific Windows build online); the version I use here is nginx-1.10.1. After extracting it, the directory structure looks like this:

The configuration file we need to understand and edit is nginx.conf in the conf folder; the other files in that directory can generally be left at their defaults. Open the file and skip the lines starting with # (comments).

Events node:

Worker_connections: the default is 1024, the maximum number of simultaneous connections each worker process can handle;
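In nginx.conf the events node looks like this (1024 is the default value):

```nginx
events {
    worker_connections 1024;  # max simultaneous connections per worker process
}
```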

Http Node:

Include: mime.types refers to the mime.types file in the same directory as nginx.conf; it maps file extensions to the MIME types nginx can serve.

Default_type: application/octet-stream, the fallback MIME type used when no mime.types entry matches

Keepalive_timeout: keep-alive connection timeout, in seconds
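Together, these directives sit at the top of the http node, roughly like this:

```nginx
http {
    include       mime.types;                # file-extension to MIME-type map, in the same directory as nginx.conf
    default_type  application/octet-stream;  # fallback MIME type
    keepalive_timeout 65;                    # keep-alive timeout in seconds
    # server and upstream nodes go here
}
```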

Server node:

Listen: the port that nginx listens on

Server_name: Service name

Location: route matching settings (regular expressions are supported). Commonly used directives inside it include:

Proxy_connect_timeout: timeout for nginx to establish a connection with the backend server (proxy connect timeout)

Proxy_pass: the proxy target address (an upstream name or URL)

Proxy_set_header: headers set so the backend server can obtain the real client IP, port, and so on; the usual values are Host, X-Real-IP, and X-Forwarded-For

Upstream node:

Defines the server list for load balancing

Its name is the proxy address referenced by proxy_pass above

Sets the load-balancing allocation rules. Common rules include:

Round robin: requests are distributed to each server in turn (the default)

Ip_hash: requests from the same client IP always go to the same backend server, which helps solve the session problem.

Fair: requests are distributed based on the response time of the backend server (provided by a third-party module).

Weight: weight. The larger the value, the more traffic the server receives.
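As an illustration, here are two upstream variants using these rules; the upstream names and addresses are placeholders of my own, not part of the setup built later in this article:

```nginx
# weighted round robin: the first server receives roughly three times the traffic of the second
upstream backend_weighted {
    server 127.0.0.1:4041 weight=3;
    server 127.0.0.1:4040 weight=1;
}

# ip_hash: requests from one client IP stick to one backend server
upstream backend_sticky {
    ip_hash;
    server 127.0.0.1:4041;
    server 127.0.0.1:4040;
}
```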

Proxy_temp_path node: path of the temporary folder used by the proxy

Proxy_cache_path node: path of the proxy cache folder (all cache files are stored here)

With the directives above you can build a typical load-balancing setup. For details about other directives, refer to the official documentation.
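Putting those directives together, here is a minimal load-balancing nginx.conf sketch; the ports, upstream name, and backend address are placeholders for illustration:

```nginx
events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    keepalive_timeout 65;

    # server list for load balancing; the name is referenced by proxy_pass
    upstream myapp {
        server 127.0.0.1:4041;
    }

    server {
        listen      3031;        # port nginx listens on
        server_name localhost;

        location / {
            proxy_connect_timeout 90;
            proxy_pass http://myapp;
            # let the backend see the real client address
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```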

 

2. Use nginx to build a static file cache service

In a distributed architecture, css, js, and image files are normally cached to speed up loading. As the architecture diagram at the beginning of the article shows, a request from user A must be forwarded by the nginx server before it reaches the service cluster, so fetching css and other static files costs an extra hop; that is obviously slower than returning those files directly from the nginx server. In this case you should cache static resources on the nginx service itself. The following configuration needs to be added to the nginx configuration file:

# load-balancing server list
upstream shenniu.test.com {
    server 127.0.0.1:4041;
}

## cache settings ##
proxy_connect_timeout 5;
proxy_read_timeout 60;
proxy_send_timeout 5;
proxy_buffer_size 16k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 128k;
proxy_temp_file_write_size 128k;
proxy_temp_path D:/E/nginx-1.10.1/home/temp_dir;
proxy_cache_path D:/E/nginx-1.10.1/home/cache levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;
## end ##

Note that the name after the upstream keyword, shenniu.test.com, will be used again later. The server entry inside the node, server 127.0.0.1:4041, is the IP and port of the real site project. You also need to set the paths where cache files are saved: proxy_cache_path and proxy_temp_path.

Then, in the server node, listen is set to port 3031 and server_name to shenniu.test.com, and the static resource routing configuration is added:

location ~ .*\.(gif|jpg|png|css|js|flv|ico|swf)(.*) {
    # proxy_pass http://shenniu.file.com;
    proxy_pass http://shenniu.test.com;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_cache cache_one;
    proxy_cache_valid 200 302 1h;
    proxy_cache_valid 301 1d;
    proxy_cache_valid any 1m;
    expires 30d;    # browser cache duration: 30 days
}

Note that the proxy_pass value in this reverse proxy is http:// plus the shenniu.test.com upstream name defined above, so proxy_pass http://shenniu.test.com points at the proxied servers. To resolve the shenniu.test.com domain locally, find the hosts file under C:\Windows\System32\drivers\etc and add the line 127.0.0.1 shenniu.test.com; after that, the domain name can be accessed in a browser on the local machine. Next, add the routing configuration for pages:

location ~ .*(\/|\.(html|htm))(.*) {
    proxy_connect_timeout 90;    # timeout for nginx to connect to the backend server (proxy connect timeout)
    proxy_pass http://shenniu.test.com;
    proxy_redirect default;
    # let the backend server obtain the real client IP, port, etc.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

Next we publish a project named ShenNiu.Stage01 in IIS; its IP and port match the entry in the upstream node above:

Then access the site in a browser, both by IP address and by domain name:

This works only because of the hosts-file domain mapping; nginx is not involved yet, since the reverse proxy listens on port 3031 (the listen port configured in the server node). To reach http://shenniu.test.com:3031/user/login, which the browser cannot access at this point, we still need to start the nginx service:
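On Windows, nginx is started from its own directory; a minimal command sketch, assuming nginx is extracted to D:/E/nginx-1.10.1 as in the paths above:

```shell
cd D:/E/nginx-1.10.1
nginx -t                                 # test the configuration file for syntax errors
start nginx                              # start nginx in the background on Windows
tasklist /fi "imagename eq nginx.exe"    # verify the master/worker processes are running
```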

Now nginx is running as the configured reverse proxy. Access the proxied site once; since the cache file path is configured as D:/E/nginx-1.10.1/home/cache, take a look at that folder:

This is where the cache files are stored; you can see the generated cache subfolders. Now visit the site a second time in the browser and press F12 to inspect the corresponding js, css, and other files:

This time the source server of those files is the nginx service; the files are now being served from the nginx cache.
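To make cache hits easier to verify than eyeballing the F12 panel, nginx can expose the cache status in a response header. A sketch of this, added to the static-file location block; the header name X-Cache is my own choice, while $upstream_cache_status is a standard nginx variable:

```nginx
location ~ .*\.(gif|jpg|png|css|js|flv|ico|swf)(.*) {
    proxy_pass http://shenniu.test.com;
    proxy_cache cache_one;
    proxy_cache_valid 200 302 1h;
    # reports MISS/HIT/EXPIRED for each response, visible in the browser's network panel
    add_header X-Cache $upstream_cache_status;
    expires 30d;
}
```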

 

3. Build a service cluster with nginx + IIS

The static cache service above already exercises nginx's request distribution. Next, building on the previous step, we only need to add a little node information to build a site service cluster. First, modify the upstream node so its content becomes:

# load-balancing server list
upstream shenniu.test.com {
    server 127.0.0.1:4041;
    server 127.0.0.1:4040;
}

This is the only code change needed, because the page routing was already configured when we set up the static file service above (see the earlier section). To demonstrate the distributed setup, we also need two sites published in IIS: ShenNiu.Stage01 (IP + port: 127.0.0.1:4041) and ShenNiu.Stage02 (IP + port: 127.0.0.1:4040), with the login page titles set to "System 01" and "System 02" so we can tell which site served the request. After the configuration is complete, reload the nginx configuration:
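The reload can be done without stopping the service; a command sketch, run from the nginx directory:

```shell
nginx -t            # check the new configuration first
nginx -s reload     # tell the master process to re-read nginx.conf without dropping connections
```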

Then access the reverse proxy address http://shenniu.test.com:3031/user/login and refresh the page to see the effect:

Accessing the same domain name, the page title is "System 01" on the first request and "System 02" on the next. The requests are being served by 127.0.0.1:4041 and 127.0.0.1:4040 respectively, that is, by ShenNiu.Stage01 and ShenNiu.Stage02 in IIS. nginx is now distributing requests across the sites, and the site service cluster has been created successfully.

This article covers only a simple nginx + IIS cluster; a later article will explain using Redis to store distributed, shared sessions. Thank you for your support.
