The API gateway Kong under ASP.NET Core microservices (1)

Source: Internet
Author: User
Tags: cassandra, docker run

Kong is Mashape's open source, high-performance, highly available API gateway and API service management layer. It is built on OpenResty, implements API management, and provides AOP-style behavior for APIs through plug-ins. At Mashape, Kong manages more than 15,000 APIs and supports 200,000 developers with billions of requests per month. This article introduces Kong at three levels: architecture, API management, and plug-ins.

Architecture

In accordance with Conway's Law, our system architecture is split up so that the system consists of a set of services, as shown below:

The stock service, coupon service, and price service all need some common processing before handling a request, such as rate limiting, black/white lists, logging, and request statistics. Almost every service needs this processing; isn't this exactly what we usually call AOP? As the number of services grows, we should concentrate this general processing in one place, as shown below:

It is somewhat similar to this:
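To make this concrete: once Kong (installed later in this article) sits in front of these services, a cross-cutting concern such as rate limiting can be switched on for every service with a single Admin API call instead of being re-implemented in each one. A minimal sketch, assuming Kong's Admin API is listening on localhost:8001 as in the installation steps below; the limit of 100 requests per minute is just an illustrative number:

# Enable the rate-limiting plug-in globally (no service or route specified)
curl -i -X POST http://localhost:8001/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=100"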

1. Why use Kong as the API gateway under .NET Core?

1. Open source, cloud-native, supports Service Mesh, fast, resilient, RESTful, and provides an abstraction layer over distributed microservices
2. Built on Nginx, so performance is high; open sourced in 2015
3. Active community: 111 contributors on GitHub, bugs are fixed quickly, and a new version ships roughly every 3 months
4. Supports plug-ins; currently 32 plug-ins are available, covering authentication, security, rate limiting, serverless, analytics and monitoring, transformations, and logging
5. Available in both Enterprise and Community editions

Architecture overview: based on OpenResty (Nginx & Lua scripting)

The structure diagram of Kong clearly shows that Nginx is the foundation and OpenResty builds the RESTful layer on top of it; Kong supports clustering, stores its data in a database, supports plug-ins, and can be managed through a RESTful Admin API.

Cluster architecture preview


Here is the theory behind Kong clustering. Before 0.11.0, Kong used Serf to form a cluster. Why did they drop Serf? The developers give the following reasons:
1. It adds a dependency on Serf, which is not part of the Nginx/OpenResty stack
2. The mechanism relies on nodes communicating with each other to synchronize, which is inconvenient for deployment and containerization
3. Invoking Serf on a running Kong node requires extra I/O
Since 0.11.0 the implementation is database-centric: a cluster events table was added, any Kong node can write a change event to the database, and the other nodes poll the database for changes and then update their caches. If a node restarts, it only needs to connect to the database to start working again.
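To see the database-centric design in action, you can run a second Kong node against the same database; any object created through one node's Admin API ends up in the database and is picked up by the other node. A minimal sketch, assuming the kong-net network and kong-database container created in the installation section below, and a Kong version (0.13+) that exposes the /services Admin endpoint; the container name kong-2 and the host ports 8100/8101 are arbitrary choices:

# Second Kong node sharing the same PostgreSQL database
docker run -d --name kong-2 \
  --network=kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
  -p 8100:8000 \
  -p 8101:8001 \
  kong:latest

# A service created through node 1 (port 8001) shows up on node 2 (port 8101)
# once node 2 has polled the cluster events table and refreshed its cache
curl -s http://localhost:8001/services
curl -s http://localhost:8101/services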

Installation of Kong

Kong can be installed on many mainstream platforms (Windows is currently not supported); the supported installation methods are shown below:

To keep things simple, I install Kong with Docker here.
1. Create a dedicated Docker network for Kong (a Docker best practice, since the old --link option is deprecated):
docker network create kong-net
2. Choose the database you want to use; the default is PostgreSQL.
If you are using the Cassandra database (note: Cassandra >= 3.0 is required):
docker run -d --name kong-database \
  --network=kong-net \
  -p 9042:9042 \
  cassandra:3
If you are using PostgreSQL:
docker run -d --name kong-database \
  --network=kong-net \
  -p 5432:5432 \
  -e "POSTGRES_USER=kong" \
  -e "POSTGRES_DB=kong" \
  postgres:9.6
3. Run the database migrations to initialize the table structure:

docker run --rm \
  --network=kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
  kong:latest kong migrations up
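Note that kong migrations up is the command for the Kong versions this article targets; if you pull a newer image (Kong 0.15 / 1.0 or later), the command for initializing a fresh database changed. A sketch under that assumption:

# On Kong 0.15 / 1.0 and later, initialize a fresh database with:
docker run --rm \
  --network=kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  kong:latest kong migrations bootstrap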

4. Start Kong

docker run -d --name kong \
  --network=kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
  -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
  -p 8000:8000 \
  -p 8443:8443 \
  -p 8001:8001 \
  -p 8444:8444 \
  kong:latest

5. Check whether the gateway has started
On the host, run curl -i http://localhost:8001/, or open port 8001 in a browser. If a large JSON document comes back, the gateway is up.
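Besides the Admin API root, Kong exposes a /status endpoint on the same port that reports node statistics and whether the database is reachable; a quick check, assuming the port mappings from the step above:

# Admin API root: node and configuration information
curl -i http://localhost:8001/
# Node status, including database reachability
curl -i http://localhost:8001/status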

Using ASP.NET Core as the example backend
mkdir AspNetCore
cd AspNetCore
dotnet new webapi
dotnet run

We use this .NET Core Web API, reachable at localhost:5001/api/values, as the example backend. The gateway in front of it is now set up and can be managed through its RESTful Admin API. There are also open source dashboards for Kong; here we use kong-dashboard to demonstrate how to install and access it.

# Install kong-dashboard globally
npm install -g kong-dashboard
# Start kong-dashboard
kong-dashboard start --kong-url http://localhost:8001
# Start kong-dashboard on a custom port
kong-dashboard start --kong-url http://kong:8001 --port [port]
# Start kong-dashboard with basic authentication enabled
kong-dashboard start --kong-url http://kong:8001 --basic-auth user1=password1 user2=password2
# Show the kong-dashboard start options
kong-dashboard start --help

After it starts successfully, open localhost:8080 in the browser, as shown:

Then we add the .NET Core API in the dashboard:
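If you prefer not to use the dashboard, the same registration can be done directly against the Kong Admin API. A minimal sketch, assuming a Kong version (0.13+) with Service and Route objects (older versions use the /apis endpoint instead); netcore-api is just an illustrative name, and 192.168.1.100 stands for whatever address the ASP.NET Core app is reachable at from inside the Kong container:

# Create a service pointing at the ASP.NET Core backend
curl -i -X POST http://localhost:8001/services \
  --data "name=netcore-api" \
  --data "url=http://192.168.1.100:5001"

# Create a route so that /api/* on the proxy (port 8000) reaches the service;
# strip_path=false keeps the /api prefix when forwarding to the backend
curl -i -X POST http://localhost:8001/services/netcore-api/routes \
  --data "paths[]=/api" \
  --data "strip_path=false"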

Because it is a GET request, we can test the whole chain: browser -> gateway -> .NET Core application.
Open http://localhost:8000/api/values directly in the browser; if it returns ["value1", "value2"], everything is working.
As shown in the following:
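The same check can be made from the command line; a response header such as Via: kong/<version> in the output indicates that the request really went through the gateway:

# Call the backend through the Kong proxy (port 8000) and inspect the headers
curl -i http://localhost:8000/api/values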

Finally, this is the first article in the ASP.NET Core microservices gateway (Kong) series. Follow-up articles will cover the use of Kong's plug-ins, plug-in development, common pitfalls, gateway performance analysis and log visualization, source code analysis, and more. You are welcome to visit my GitHub: https://github.com/withlin.
