Automating Nginx Reverse Proxy Configuration

Tags: ssh, nginx, server

This article introduces a method for automatically configuring reverse proxying on an Nginx server. Reverse proxying is one of Nginx's signature features, and readers who need it may find this useful.

It is very useful to be able to insulate the externally published API from the deployment details of the internal components.

In previous articles, I explained some of the benefits of using reverse proxies. In my current project, we have built a distributed, service-oriented architecture that also exposes an HTTP API, and we use a reverse proxy to route requests made against the API through to the individual components. We chose the Nginx web server as our reverse proxy; it is fast, reliable, and easy to configure. Through it, we aggregate multiple HTTP API services into a single URL space. For example, when you type:

http://api.example.com/product/pinstripe_suit

It will be routed to:

http://10.0.1.101:8001/product/pinstripe_suit

But when you visit:

http://api.example.com/customer/103474783

It will be routed to:

http://10.0.1.104:8003/customer/103474783

To users, it all looks like a single URL space (http://api.example.com/blah/blah), but behind the scenes the different top-level segments are routed to different back-end servers: /product/... is routed to 10.0.1.101:8001, while /customer/... is routed to 10.0.1.104:8003. We also want this configuration to be automatic. Say, for example, I want to create a new component that records stock levels. Rather than extending an existing component, I would prefer to write another standalone executable or service that exposes an HTTP endpoint, have it deployed automatically to one of the hosts in our cloud infrastructure, and have Nginx automatically route http://api.example.com/stock/whatever to my new component. We also want to load-balance the back-end services: we want Nginx to automatically round-robin between the multiple deployed instances of our new stock API.

We call each top-level segment (/stock, /product, /customer) a claim. When a component comes online it publishes an 'AddApiClaim' message over RabbitMQ. This message has three fields: 'claim', 'IP address', and 'port'. We have a special component, 'ProxyAutomation', that receives these messages and rewrites the Nginx configuration as required. It logs on to the Nginx server using SSH and SCP, transfers the various configuration files, and instructs Nginx to reload its configuration. We use the excellent SSH.NET library for this automation.
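To make this concrete, here is a minimal sketch (not the project's actual code) of what such a handler might look like in C# with the SSH.NET library. The message shape, host name, credentials, and directory are illustrative assumptions, and the RabbitMQ subscription plumbing is omitted:

using System.IO;
using System.Text;
using Renci.SshNet;

// Illustrative message contract: one claim per top-level URL segment.
public class AddApiClaim
{
    public string Claim { get; set; }      // e.g. "stock"
    public string IpAddress { get; set; }  // e.g. "10.0.0.23"
    public int Port { get; set; }          // e.g. 8001
}

public class ProxyAutomation
{
    // Host, credentials and directory are assumptions for this sketch.
    const string NginxHost = "nginx.example.com";
    const string User = "deploy";
    const string Password = "secret";
    const string ConfDir = "/etc/nginx/conf.d/api.example.com.conf.d";

    public void Handle(AddApiClaim claim)
    {
        // Build the two fragments that Nginx picks up via its wildcard includes.
        var location = $"location /{claim.Claim}/ {{\n    proxy_pass http://{claim.Claim};\n}}\n";
        var upstream = $"upstream {claim.Claim} {{\n    server {claim.IpAddress}:{claim.Port};\n}}\n";

        // Copy the fragments to the proxy server over SCP.
        using (var scp = new ScpClient(NginxHost, User, Password))
        {
            scp.Connect();
            scp.Upload(new MemoryStream(Encoding.UTF8.GetBytes(location)),
                       $"{ConfDir}/location.{claim.Claim}.conf");
            scp.Upload(new MemoryStream(Encoding.UTF8.GetBytes(upstream)),
                       $"{ConfDir}/upstream.{claim.Claim}.conf");
        }

        // Tell Nginx to reload its configuration without dropping connections.
        using (var ssh = new SshClient(NginxHost, User, Password))
        {
            ssh.Connect();
            ssh.RunCommand("sudo nginx -s reload");
        }
    }
}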

Nginx configuration has excellent support for wildcards. Take a look at our top-level configuration file:

The code is as follows:

...

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile           on;
    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
}

The final include line pulls in every .conf file in the conf.d directory.

Within the conf.d folder there is a file that contains all the configuration for api.example.com requests:

The code is as follows:

include /etc/nginx/conf.d/api.example.com.conf.d/upstream.*.conf;

server {
    listen       80;
    server_name  api.example.com;

    include /etc/nginx/conf.d/api.example.com.conf.d/location.*.conf;

    location / {
        root   /usr/share/nginx/api.example.com;
        index  index.html index.htm;
    }
}

This configuration tells Nginx to listen on port 80 for requests to api.example.com.

There are two includes here. The first is the upstream include on the first line, which I will discuss later. The second, inside the server block, pulls any location.*.conf files from the subdirectory api.example.com.conf.d into the configuration. Our proxy automation component adds a new component (a.k.a. API claim) by dropping in a new location.*.conf file. For example, our stock component might create a location.stock.conf configuration file like this:

The code is as follows:

location /stock/ {
    proxy_pass http://stock;
}

This simply tells Nginx to proxy all requests to api.example.com/stock/... to the back-end servers defined in the 'stock' upstream, which lives in an upstream.*.conf file. The proxy automation component also drops in a file called upstream.stock.conf, which looks like this:

The code is as follows:

upstream stock {
    server 10.0.0.23:8001;
    server 10.0.0.23:8002;
}

This tells Nginx to round-robin all requests to api.example.com/stock/... across the listed addresses; in this case the two instances are on the same machine (10.0.0.23), one listening on port 8001 and the other on port 8002.

As more instances of the stock component are deployed, new entries are added to upstream.stock.conf. Likewise, when an instance is taken down its entry is deleted, and when the last instance is removed the whole file is deleted. A minimal sketch of how that bookkeeping might work follows.
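Under the same assumptions as the earlier example, the automation component could keep the set of live endpoints per claim and regenerate the upstream file whenever an instance is added or removed, returning nothing when the last instance disappears (at which point the file itself would be deleted):

using System.Collections.Generic;
using System.Linq;
using System.Text;

// Illustrative bookkeeping of live back-end instances per claim (e.g. "stock").
public class UpstreamRegistry
{
    readonly Dictionary<string, HashSet<string>> endpoints =
        new Dictionary<string, HashSet<string>>();

    public void Add(string claim, string ipAddress, int port)
    {
        if (!endpoints.ContainsKey(claim))
            endpoints[claim] = new HashSet<string>();
        endpoints[claim].Add($"{ipAddress}:{port}");
    }

    public void Remove(string claim, string ipAddress, int port)
    {
        if (!endpoints.ContainsKey(claim)) return;
        endpoints[claim].Remove($"{ipAddress}:{port}");
        if (endpoints[claim].Count == 0)
            endpoints.Remove(claim);  // last instance gone: delete upstream.<claim>.conf
    }

    // Render the contents of upstream.<claim>.conf, or null if no instances remain.
    public string Render(string claim)
    {
        if (!endpoints.ContainsKey(claim)) return null;
        var sb = new StringBuilder();
        sb.AppendLine($"upstream {claim} {{");
        foreach (var server in endpoints[claim].OrderBy(s => s))
            sb.AppendLine($"    server {server};");
        sb.AppendLine("}");
        return sb.ToString();
    }
}

After each change, the rendered file would be uploaded (or deleted) and Nginx reloaded, exactly as in the earlier SSH.NET sketch.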

This infrastructure makes it very easy to compose an application from distributed components. We can scale the application simply by adding new component instances. As a component developer, you do not need to do any proxy configuration; you just need to make sure your component sends the messages to add or remove its API claims.
