Using Caddy as an API Gateway for Micro-services


Background

As we all know, Docker has profoundly changed the IT world over the past few years: from development to testing to operations, you can find it everywhere. It has also made micro-service architectures easier and pushed them forward.

With the latest version of Docker (CE 17.03) and the maturing swarm mode, simpler scenarios no longer need dedicated infrastructure for service orchestration, service discovery, health checks, load balancing and so on.

But I still need an API gateway. Add log collection, and your micro-service architecture is basically complete.
We know that Nginx Plus can do a good job as an API gateway, but it is commercial software.
With plain Nginx, never mind features such as authentication, rate limiting and statistics:
even basic request forwarding is a problem.

We know that Docker balances requests to a service name across different containers via DNS,
but Nginx, for speed, keeps a DNS cache in its reverse proxy that cannot be turned off.
So although Docker dynamically updates DNS as containers are scaled up or down, Nginx does not budge:
it insists on sending requests to a fixed IP. Forget about balancing; that IP may not even be valid any more.

There is a small hack in the nginx configuration that makes it query DNS on every request, and I was originally going to write an article about it.
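For reference, one common form of that hack (a sketch, not necessarily the exact configuration I had in mind) points nginx at Docker's embedded DNS server and uses a variable in proxy_pass, which forces a fresh lookup at request time instead of resolving the name only once at startup; the listen port here is arbitrary:

server {
    listen 8080;

    # Docker's embedded DNS server; re-resolve at most every 10 seconds
    resolver 127.0.0.11 valid=10s;

    location / {
        # putting the upstream in a variable forces nginx to resolve "app" per request
        set $upstream http://app:12345;
        proxy_pass $upstream;
    }
}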
But now we have found a more elegant API gateway: Caddy. My previous article gave a brief introduction to it.

All the code that follows is in this demo; you can clone it and play with it, or use it as a base for your own experiments.

Application

Let's write the simplest possible HTTP API in Golang (you can write it in any language you like).
For a GET request it returns "Hello, world" plus its own hostname.

package main

import (
    "io"
    "log"
    "net/http"
    "os"
)

// HelloServer the web server
func HelloServer(w http.ResponseWriter, req *http.Request) {
    hostname, _ := os.Hostname()
    log.Println(hostname)
    io.WriteString(w, "Hello, world! I am "+hostname+" :)\n")
}

func main() {
    http.HandleFunc("/", HelloServer)
    log.Fatal(http.ListenAndServe(":12345", nil))
}
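If you run it locally (assuming the file is saved as main.go), a quick curl shows the expected response:

$ go run main.go &
$ curl http://localhost:12345/
Hello, world! I am <your-hostname> :)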

Docker

We need to build the application above into a Docker image that exposes port 12345.
Then Docker Swarm can start it as a cluster.
Building the image is easy, but to keep the image everyone pulls small I used a two-step build:
the application is compiled first and then added to a smaller alpine image. You don't have to worry about these details.
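The exact Dockerfiles are in the repo; purely as an illustration (and assuming a Docker version new enough for multi-stage builds, which is one way to express such a two-step build), the app image could be built roughly like this:

# stage 1: compile a static Go binary
FROM golang:alpine AS build
WORKDIR /go/src/app
COPY main.go .
RUN CGO_ENABLED=0 go build -o /hello main.go

# stage 2: copy only the binary into a small alpine image
FROM alpine
COPY --from=build /hello /hello
EXPOSE 12345
CMD ["/hello"]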
Let's take a look at the final docker-compose.yml orchestration first.

version: '3'

services:
    app:
        image: muninn/caddy-microservice:app
        deploy:
            replicas: 3

    gateway:
        image: muninn/caddy-microservice:gateway
        ports:
            - 2015:2015
        depends_on:
            - app
        deploy:
            replicas: 1
            placement:
                constraints: [node.role == manager]

This is the latest version of the docker-compose file format; it is no longer started with the docker-compose command but with docker stack deploy.
In short, orchestration in this version is not yet fully integrated and can be a bit confusing, but it works. We can see that the orchestration contains two images:

    • muninn/caddy-microservice:app is the app image from the previous section; we launch 3 replicas to test load balancing.
    • muninn/caddy-microservice:gateway is the gateway described next; it listens on port 2015 and forwards requests to the app.

Use Caddy as Gateway

To make Caddy the gateway, let's look at the Caddyfile:

:2015 {
    proxy / app:12345
}

Well, it's that easy. It listens on port 2015 of the machine and forwards all requests to app:12345.
app here is in fact a domain name: inside the Docker swarm network it resolves to a random instance of the service with that name.

If there are many apps later, it is easy to forward different request prefixes to different apps,
so remember to give each app's endpoints a distinct prefix and write that into your specification, as sketched below.
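For example (with hypothetical service names users and orders), a Caddyfile that routes by prefix could look like this:

:2015 {
    proxy /users users:12345
    proxy /orders orders:12345
}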

Caddy itself also needs to be containerized; if you are interested, see Dockerfile.gateway.

Running the services

If you have understood everything above, you can start running the services. Just use the images I have already pushed; the three images used in this article add up to roughly 26M to download, which is not much.
Clone the repository mentioned in the Background section into your project directory, or simply copy the docker-compose.yml shown above, and then execute the following commands.

docker-compose pull
docker stack deploy -c docker-compose.yml caddy

Ah, by the way, the second command (stack deploy) requires that Docker has already been switched into swarm mode; if it has not, it will prompt you, and you can switch as prompted (docker swarm init).
If it succeeds, we check the status:

docker stack ps caddy

If everything is fine, we can see that 3 apps and one gateway have been launched. Next we'll test whether the gateway distributes requests across the three backends.

Test

We can test whether the service is up by accessing http://{your-host-ip}:2015 with a browser or curl.
You will then notice that the content never changes; it does not hit a random backend as you might expect.

Don't worry: this is not because Caddy caches DNS like Nginx and breaks the balancing; the reason is different.
To speed up the reverse proxy, Caddy maintains a connection pool to the backend, and when there is only one client, the first connection is always reused.
To prove this, we need to access the service concurrently and see whether it then behaves as expected.

As before, I have prepared an image that can be run directly with Docker:

docker run --rm -it muninn/caddy-microservice:client

If you are interested, look at the code in the client folder: it issues 30 requests and prints how many times each of the 3 backends was hit.
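If you just want the idea without pulling the image, a minimal concurrent client along these lines would do the same thing (a sketch, not the repo's actual code, assuming the gateway is reachable at localhost:2015):

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "sync"
)

func main() {
    const requests = 30

    var (
        mu   sync.Mutex
        wg   sync.WaitGroup
        hits = map[string]int{} // response body -> count; each backend reports its own hostname
    )

    for i := 0; i < requests; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            resp, err := http.Get("http://localhost:2015/")
            if err != nil {
                fmt.Println("request failed:", err)
                return
            }
            defer resp.Body.Close()
            body, _ := ioutil.ReadAll(resp.Body)
            mu.Lock()
            hits[string(body)]++
            mu.Unlock()
        }()
    }
    wg.Wait()

    // print how often each backend answered
    for backend, n := range hits {
        fmt.Printf("%2d hits -> %s", n, backend)
    }
}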

I have also made a shell version; just run sh test.sh. It only prints the output, though, without automatically checking the results.

Well, now we know that Caddy works nicely as the API gateway in a micro-service architecture.

API Gateway

What? You say you still don't see the API gateway? We have just solved the problem of making an API gateway cooperate with DNS-based service discovery in a containerized project.
The rest is simple: write n apps, each one a micro-service, and in the gateway route different URLs to different apps.

Advanced

Caddy can also easily act as the authentication gateway. For micro-services I recommend JWT authentication, carrying the claims in the token; a little Caddy configuration is all it takes, and I will follow up with tutorials and demos.
OAuth 2.0, I think, is not well suited to a micro-service architecture, although there are solutions for more complex setups; that is a topic for another day.
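As a rough idea only — the exact directives depend on which JWT plugin you compile into Caddy, and this is not a configuration from the demo repo — protecting the proxied paths could look something like this, with the signing secret supplied through an environment variable such as JWT_SECRET:

:2015 {
    jwt {
        path /api
    }
    proxy /api app:12345
}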

Caddy can also provide other API gateway functions such as API status monitoring, caching and rate limiting, but these require some development of your own.
Do you have more ideas? Leave a comment.
