Wouldn't it be great if you could aggregate a plethora of separately exposed APIs into a single one, and simplify your deployment details at the same time?
In a previous article I explained some of the benefits of using a reverse proxy. On my current project we have built a distributed service-oriented architecture that also exposes an HTTP API, and we use a reverse proxy to route requests addressed to that API to the individual components. We chose the Nginx web server as our reverse proxy; it is fast, reliable, and easy to configure. Through it we aggregate multiple HTTP API services into a single URL space. For example, when you type:
http://api.example.com/product/pinstripe_suit
It will be routed to:
http://10.0.1.101:8001/product/pinstripe_suit
But when you visit:
http://api.example.com/customer/103474783
It will be routed to:
http://10.0.1.104:8003/customer/103474783
To the user it looks like a single URL space (http://api.example.com/blah/blah), but on the back end the different top-level segments are routed to different servers: /product/... goes to 10.0.1.101:8001, while /customer/... goes to 10.0.1.104:8003. We also want this to be configured automatically. Say, for example, I want to create a new component that records stock levels. Rather than extending an existing component, I would prefer to write a standalone executable or service that exposes an HTTP endpoint, have it automatically deployed to one of the hosts in our cloud infrastructure, and have Nginx automatically route http://api.example.com/stock/whatever to my new component. We also want to load balance these back-end services: we want to deploy multiple instances of the new stock API and have Nginx round-robin between them automatically.
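At its core, the routing described above is a lookup from the first URL segment to a back-end address. A minimal Python sketch of the idea (the table values are the example addresses above; the function and table names are mine, purely for illustration):

```python
# Map a top-level URL segment to a back-end server, as the reverse
# proxy does. The table mirrors the example addresses in the text.
ROUTES = {
    "product": "10.0.1.101:8001",
    "customer": "10.0.1.104:8003",
}

def route(path):
    """Rewrite an incoming API path to the back-end URL that serves it."""
    segment = path.lstrip("/").split("/", 1)[0]
    backend = ROUTES[segment]  # KeyError means no component claims this segment
    return "http://" + backend + path
```

Nginx does the equivalent of this lookup with its location blocks, as shown later in the article.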
We call each top-level segment (/stock, /product, /customer) a claim. An 'AddApiClaim' message is published over RabbitMQ when a component starts up. This message has three fields: 'claim', 'IP address', and 'port address'. We have a special component, 'ProxyAutomation', that receives these messages and rewrites the Nginx configuration as required. It uses SSH and SCP to log into the Nginx server, transfer the various configuration files, and instruct Nginx to reload its configuration. We use the excellent SSH.NET library to automate this.
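As an illustration, such a claim message could be serialized like this (the exact field names and the JSON encoding are assumptions on my part; the article only fixes the three fields, and the real component publishes over RabbitMQ from .NET):

```python
import json

def add_api_claim(claim, ip, port):
    # Build an 'AddApiClaim' message with the three fields described
    # above. Field names and JSON wire format are illustrative guesses.
    return json.dumps({"claim": claim, "ip": ip, "port": port})
```

ProxyAutomation would subscribe to these messages and rewrite the Nginx configuration accordingly.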
A very nice thing about Nginx configuration is its support for wildcard includes. Take a look at our top-level configuration file:
...

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;
}
As the final include line shows, every .conf file in the conf.d directory is pulled in here.
Within the conf.d folder there is a file that holds all of the configuration for api.example.com requests:
include /etc/nginx/conf.d/api.example.com.conf.d/upstream.*.conf;

server {
    listen 80;
    server_name api.example.com;

    include /etc/nginx/conf.d/api.example.com.conf.d/location.*.conf;

    location / {
        root /usr/share/nginx/api.example.com;
        index index.html index.htm;
    }
}
This configures Nginx to listen on port 80 for requests to api.example.com.
There are two interesting parts here. The first is the include on the very first line, which I'll come back to shortly. The second is the include inside the server block, which pulls every location.*.conf file in the subdirectory 'api.example.com.conf.d' into the configuration. Our proxy automation component adds a new component (AKA API claim) by dropping in a new location.*.conf file. For example, our stock component might create a location.stock.conf file like this:
location /stock/ {
    proxy_pass http://stock;
}
This simply tells Nginx to proxy any request to api.example.com/stock/... to the back-end servers defined in the 'stock' upstream, which lives in an upstream.*.conf file. The proxy automation component also drops in a file named upstream.stock.conf, which looks like this:
upstream stock {
    server 10.0.0.23:8001;
    server 10.0.0.23:8002;
}
Together, these files tell Nginx to round-robin all requests to api.example.com/stock/ across the given addresses; in this case both instances are on the same machine (10.0.0.23), one listening on port 8001 and the other on port 8002.
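Round-robin is Nginx's default balancing strategy for an upstream. Its effect can be simulated in a few lines of Python (a sketch of the behaviour only, not of Nginx itself):

```python
import itertools

class RoundRobin:
    """Simulates Nginx's default round-robin selection over an upstream."""

    def __init__(self, servers):
        # itertools.cycle yields the servers in order, forever.
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

# The two stock instances from upstream.stock.conf above.
stock = RoundRobin(["10.0.0.23:8001", "10.0.0.23:8002"])
```

Each successive request is handed to the next instance in turn, wrapping around at the end of the list.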
As more stock components are deployed, new entries are added to upstream.stock.conf. Conversely, when a component is taken down its entry is removed, and when the last entry goes, the whole file is deleted as well.
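One way to model that add/remove logic is to regenerate the whole upstream file from the current set of live instances, treating an empty set as "delete the file". A hypothetical sketch (the real ProxyAutomation component is .NET and pushes the files over SCP):

```python
def render_upstream_conf(claim, servers):
    # Render the body of upstream.<claim>.conf from the live instances.
    # Returns None when no instances remain, signalling the caller to
    # delete the file entirely.
    if not servers:
        return None
    lines = "\n".join("    server %s;" % s for s in sorted(servers))
    return "upstream %s {\n%s\n}\n" % (claim, lines)
```

Regenerating the file wholesale on every change keeps the automation stateless: the set of live instances is the single source of truth, and there is no per-entry editing to get wrong.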
This infrastructure allows us to compose components behind a single aggregated API. We can scale the application simply by standing up new instances of a component. As a component developer, you don't need to touch the proxy configuration at all; you just need to make sure your component sends the messages to add or remove its API claim.