Analyzing the Architectural Concepts and Architecture of Surging


1. Preface

 

The previous article described a simple example of communication between a Surging server and client built on the .NET Core microservice architecture, along with a brief introduction to the Surging service framework. In this article, we analyze the architecture of Surging.

Download surging source code

2. Communication Mechanism

2.1 Overview

In a monolithic application, calls between modules are made by referencing an assembly and invoking its methods or functions. But a monolith keeps growing as the team grows, and problems such as module expansion, deployment, and maintenance become hard to manage. With business requirements developing and changing rapidly, the demand for agility, flexibility, and scalability keeps rising, and a fast, efficient software delivery approach is urgently needed. Microservices make up for the shortcomings of monolithic applications and are a fast, efficient software architecture style: a monolith is divided into several smaller services, each with its own independent modules and its own deployment, each limited in scope to a single independent business module, and distributed across servers. Generally, each microservice is a process, so interaction between services must be achieved through inter-process communication (IPC).

2.2 Interaction Mode

The interaction modes can be classified as follows:

• Request/response: the client sends a request to the server and waits synchronously for the response; the wait may block a thread.

• Notification (also known as one-way request): the client sends a request to the server, and the server does not return a response.

• Request/asynchronous response: the client sends a request to the server, and the server responds asynchronously. The client is not blocked, and it is assumed the response may not arrive immediately.

• Publish/subscribe: the client publishes a notification message, which is consumed by zero or more subscribers.

• Publish/asynchronous response: the client publishes a request message and then receives responses asynchronously or through a callback service.

Services can communicate with each other using synchronous request/response or request/asynchronous response. The Surging framework adopts Netty-based RPC in request/asynchronous-response mode together with RabbitMQ-based message communication. Let's first look at the asynchronous message communication mode.

2.2.1 Asynchronous Message Communication Mode

Surging exchanges asynchronous messages over inter-process communication (IPC) based on RabbitMQ publish/subscribe. The client publishes a request, and the server consumes it. Communication between the two sides is asynchronous, so the client is not blocked while waiting.

A message consists of a header (metadata) and a message body. The producer sends a message to a channel, and the consumer receives data through the channel. Channels come in two kinds: point-to-point and publish/subscribe. A point-to-point channel delivers a message to exactly one consumer in a one-to-one interaction, while publish/subscribe is a one-to-many interaction in which a published message reaches all consumers subscribed to the channel.
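
For illustration, here is a minimal publish/subscribe round trip written directly against the RabbitMQ.Client library. The exchange name and payload are assumptions, and this is the raw client API rather than Surging's EventBusRabbitMQ wrapper:

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class PubSubSketch
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // A fanout exchange broadcasts every message to all bound queues (one-to-many).
        channel.ExchangeDeclare(exchange: "surging_demo", type: ExchangeType.Fanout);

        // Subscriber side: each consumer gets its own queue bound to the exchange.
        var queueName = channel.QueueDeclare().QueueName;
        channel.QueueBind(queue: queueName, exchange: "surging_demo", routingKey: "");
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
            Console.WriteLine($"Consumed: {Encoding.UTF8.GetString(ea.Body.ToArray())}");
        channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);

        // Publisher side: the header (metadata) travels in IBasicProperties, the payload in the body.
        var props = channel.CreateBasicProperties();
        props.ContentType = "application/json";
        var body = Encoding.UTF8.GetBytes("{\"orderId\": 42}");
        channel.BasicPublish(exchange: "surging_demo", routingKey: "", basicProperties: props, body: body);

        Console.ReadLine(); // keep the process alive long enough for the consumer to fire
    }
}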

2.2.2 Request/Asynchronous Response Communication Mode

Surging uses Netty-based inter-process communication (IPC) with a request/asynchronous-response mechanism. The client sends a request to the server; the server processes it and responds asynchronously, so the client is not blocked from issuing other requests while it waits for the result.

In request/asynchronous-response mode, the server's asynchronous responses can execute in parallel on a multi-processor system or within a single process, which lets other threads keep running while one request is in flight. However, when accessing shared resources you must ensure thread safety, which can be handled with locks, first-in-first-out queues, or other mechanisms.
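
A minimal sketch of the correlation pattern behind request/asynchronous response: each outgoing request is paired with a TaskCompletionSource keyed by a request id, so the caller awaits a task instead of blocking a thread. The transport method is a placeholder, not Surging's actual DotNetty code:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class AsyncRpcClient
{
    // Pending requests, keyed by correlation id; ConcurrentDictionary keeps this thread-safe.
    private readonly ConcurrentDictionary<Guid, TaskCompletionSource<string>> _pending = new();

    public Task<string> SendAsync(string request)
    {
        var id = Guid.NewGuid();
        var tcs = new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously);
        _pending[id] = tcs;
        TransportSend(id, request); // placeholder: write the frame onto the channel
        return tcs.Task;            // the caller awaits; no thread is parked while waiting
    }

    // Called by the receive loop when a response frame arrives from the server.
    public void OnResponse(Guid id, string response)
    {
        if (_pending.TryRemove(id, out var tcs))
            tcs.SetResult(response);
    }

    private void TransportSend(Guid id, string request) { /* write to the socket */ }
}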

3. Deployment and Invocation

1. Monolithic application architecture

When website traffic is low, all functions can be deployed together to reduce the number of deployment nodes and the cost.

In a monolithic architecture, business processes are usually handled within a single process, without distributed coordination. The working principle is as follows:

 


Figure 1-1 Local method call in a monolithic architecture

 

2. Vertical application architecture

As access traffic grows, the pressure on the monolithic architecture increases, so it is split into independent, unrelated applications to improve efficiency, with MVC and Web API used for the calls between them.

 

3. Distributed microservice architecture

As vertical applications multiply, interaction between applications becomes unavoidable. Independent business modules can be deployed as independent microservices, gradually forming a stable service center.

Surging microservices adopt a distributed cluster deployment model. Service consumers and providers usually run in different processes, and communication between processes is carried out through RPC. The working principle is as follows:

Figure 1-2 Surging distributed RPC call

Surging uses Netty-based communication, and the data exchanged between processes is serialized and deserialized as JSON. Compared with local method calls, the following problems arise:

1. Data serialization problems: messages between microservice processes must be serialized and deserialized. Inconsistent data structures, unsupported data types, or incorrect encoding can make the conversion fail, and with it the call.

2. Network problems: common issues include network timeouts, transient disconnections, and congestion, any of which can cause a remote microservice call to fail. A defensive-call sketch covering both problems follows this list.
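
A hedged sketch of how a caller might guard against both failure modes: catching deserialization errors and bounding the remote call with a timeout. It uses Newtonsoft.Json; the OrderDto type and the remoteCallAsync delegate are illustrative assumptions:

using System;
using System.Threading;
using System.Threading.Tasks;
using Newtonsoft.Json;

class OrderDto { public int OrderId { get; set; } }

class ResilientCaller
{
    public static async Task<OrderDto> CallAsync(Func<CancellationToken, Task<string>> remoteCallAsync)
    {
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(3)); // bound network waits
        try
        {
            var json = await remoteCallAsync(cts.Token);
            // Problem 1: inconsistent structures or bad encoding surface here as JsonException.
            return JsonConvert.DeserializeObject<OrderDto>(json);
        }
        catch (JsonException ex)
        {
            Console.WriteLine($"Deserialization failed: {ex.Message}");
            return null;
        }
        catch (OperationCanceledException)
        {
            // Problem 2: timeout or transient disconnection.
            Console.WriteLine("Remote call timed out.");
            return null;
        }
    }
}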

Each microservice is independently packaged and deployed so that services are isolated at the process level. In large Internet projects with hundreds of microservices, however, not everything is deployed separately: microservices that belong to the same business, or that are latency-sensitive, are packaged and deployed together and invoked locally within the same process.

When different microservices are combined in the same process, fault isolation becomes the main concern; Section 5.2 discusses the resulting problems in detail.

4. Overall Architecture

The code of the Surging framework is divided into eight layers. The design points of each layer are as follows:

  • Business module service interface layer (IModuleServices): this layer is tied to the actual business logic; the business module interfaces are designed from the perspective of the service provider and the service consumer (a minimal interface sketch follows this list).
  • Business module service layer (ModuleServices): this layer implements the actual business logic through Domain and Repository types.
  • Basic communication platform (CPlatform): provides the interfaces and base implementations for data communication, such as logging, remote service invocation, the event bus, load-balancing algorithms, and data serialization.
  • DotNetty service layer (DotNetty): implements message sending and receiving on top of DotNetty.
  • RabbitMQ service layer (EventBusRabbitMQ): encapsulates the RabbitMQ-based event bus for publishing and subscribing.
  • Proxy service layer (ProxyGenerator): encapsulates the generation and creation of service proxies.
  • Service registration layer (Zookeeper): encapsulates service address registration and discovery, uses Zookeeper as the service registry, implements the ServiceRouteManagerBase abstraction, and updates routes through heartbeat detection.
  • System service layer (System): encapsulates the underlying system interfaces.
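
To make the top two layers concrete, here is a minimal sketch of what a contract in the service interface layer and its implementation might look like. The interface and types are hypothetical; Surging's real contracts carry additional framework markers and attributes:

using System.Threading.Tasks;

// Service interface layer: the contract shared by provider and consumer.
public interface IUserService
{
    Task<string> GetUserNameAsync(int userId);
    Task<bool> ExistsAsync(int userId);
}

// Service layer: implements the contract (Domain and Repository calls elided).
public class UserService : IUserService
{
    public Task<string> GetUserNameAsync(int userId) => Task.FromResult($"user-{userId}");
    public Task<bool> ExistsAsync(int userId) => Task.FromResult(userId > 0);
}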
5. Distributed Reliability

The operating quality of Surging microservices is affected by several external factors, including the network, database access, and other dependent microservices. These factors need to be considered; they are summarized as follows:

5.1 Asynchronous I/O Operations

5.1.1 Network I/O

1. Synchronous blocking I/O communication:

This is the typical request/response mode. Its biggest problem is the lack of elastic scaling capability: as the number of concurrent client requests grows, the number of server threads grows in proportion. Once the thread count expands rapidly, system performance drops sharply, and as the load continues to increase, the system eventually collapses.

Surging's Netty-based non-blocking I/O communication is a typical request/asynchronous-response mode, which avoids the thread-per-request expansion described above: a small number of I/O threads can serve many concurrent connections.
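
For contrast with the blocking model, here is a minimal non-blocking server sketch using plain .NET sockets rather than DotNetty itself; the port and the one-shot read/reply protocol are illustrative:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class NonBlockingServer
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 5000);
        listener.Start();
        while (true)
        {
            var client = await listener.AcceptTcpClientAsync(); // no thread parked per connection
            _ = HandleAsync(client); // each connection becomes an async state machine
        }
    }

    static async Task HandleAsync(TcpClient client)
    {
        using (client)
        {
            var stream = client.GetStream();
            var buffer = new byte[1024];
            await stream.ReadAsync(buffer, 0, buffer.Length);  // thread is released while waiting
            var reply = Encoding.UTF8.GetBytes("ack");
            await stream.WriteAsync(reply, 0, reply.Length);
        }
    }
}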

5.1.2 Disk I/O

Microservice operations on disk I/O fall into two categories: synchronous file operations and asynchronous file operations.

In the Surging project, route information is obtained from the registration center and cached on the local machine. When a proxy is created, the load-balancing algorithm selects a router from this cache, and the cached route information is updated through heartbeat detection.
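
A hypothetical sketch of that pattern: an in-memory route cache refreshed on a heartbeat timer. The IRouteRegistry interface is an assumption standing in for the Zookeeper-backed registry, not Surging's actual ServiceRouteManagerBase:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public interface IRouteRegistry // stand-in for the registration center
{
    Task<IReadOnlyDictionary<string, string[]>> GetRoutesAsync();
}

public class RouteCache
{
    private readonly IRouteRegistry _registry;
    private readonly ConcurrentDictionary<string, string[]> _routes = new();
    private readonly Timer _heartbeat;

    public RouteCache(IRouteRegistry registry)
    {
        _registry = registry;
        // Refresh the local cache periodically, mimicking heartbeat-driven route updates.
        _heartbeat = new Timer(async _ => await RefreshAsync(), null,
                               TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    // The load balancer picks among these cached addresses when creating a proxy call.
    public string[] GetAddresses(string serviceId) =>
        _routes.TryGetValue(serviceId, out var addresses) ? addresses : Array.Empty<string>();

    private async Task RefreshAsync()
    {
        foreach (var route in await _registry.GetRoutesAsync())
            _routes[route.Key] = route.Value;
    }
}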

5.1.3 Database Operations

Like file I/O, network access, and even synchronous inter-process communication, the database access discussed in this section is time-consuming. ADO.NET, Entity Framework, and other ORM frameworks therefore provide asynchronous execution methods.
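
For example, ADO.NET exposes asynchronous counterparts to its blocking calls, so the calling thread is free while the database works. A minimal sketch; the connection string and query are placeholders:

using System.Data.SqlClient;
using System.Threading.Tasks;

class UserRepository
{
    private const string ConnectionString = "Server=...;Database=...;"; // placeholder

    public async Task<int> CountUsersAsync()
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Users", conn))
        {
            await conn.OpenAsync();                     // does not block the calling thread
            return (int)await cmd.ExecuteScalarAsync(); // thread is free while the query runs
        }
    }
}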

5.2 Fault Isolation

Most microservices use synchronous interface calls, and microservices from several business domains may be deployed in the same process. This makes the system prone to the avalanche effect: when one microservice provider fails, its consumers, and any microservices sharing a process with the faulty one, can fail in a cascade, bringing the whole system down. To avoid the avalanche effect, dependencies and faults must be isolated along several dimensions.
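
One widely used containment technique is a circuit breaker placed in front of each dependency, so a failing provider is cut off instead of dragging its callers down. A deliberately minimal sketch for illustration, not Surging's built-in fault-tolerance code (and not hardened for heavy concurrency):

using System;
using System.Threading.Tasks;

class CircuitBreaker
{
    private int _failures;
    private DateTime _openUntil = DateTime.MinValue;
    private const int Threshold = 5; // consecutive failures before the circuit opens
    private static readonly TimeSpan Cooldown = TimeSpan.FromSeconds(30);

    public async Task<T> ExecuteAsync<T>(Func<Task<T>> action, Func<T> fallback)
    {
        if (DateTime.UtcNow < _openUntil)
            return fallback(); // circuit open: fail fast instead of piling up blocked callers

        try
        {
            var result = await action();
            _failures = 0;     // a success closes the circuit again
            return result;
        }
        catch (Exception)
        {
            if (++_failures >= Threshold)
                _openUntil = DateTime.UtcNow + Cooldown;
            return fallback();
        }
    }
}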

5.2.1 Communication Link Isolation

Network communication is usually not the system bottleneck, so most service frameworks use multiple threads over a single communication link. The principle is as follows:

5.2.2 Scheduling Resource Isolation

5.2.2.1 Isolation Between Microservices

When multiple microservices run in the same process, threads can be used to isolate the microservices from one another.
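
A sketch of one way to do this: give each in-process microservice its own concurrency budget (a bulkhead), so a slow service cannot exhaust the shared thread pool. The ServiceBulkhead type is illustrative, not part of Surging:

using System;
using System.Threading;
using System.Threading.Tasks;

class ServiceBulkhead
{
    // Each hosted microservice gets its own semaphore, capping its concurrent calls.
    private readonly SemaphoreSlim _slots;

    public ServiceBulkhead(int maxConcurrency) =>
        _slots = new SemaphoreSlim(maxConcurrency, maxConcurrency);

    public async Task<T> RunAsync<T>(Func<Task<T>> work)
    {
        await _slots.WaitAsync(); // waits asynchronously; no thread is blocked
        try { return await work(); }
        finally { _slots.Release(); }
    }
}

// Usage: var orders = new ServiceBulkhead(16);
//        var result = await orders.RunAsync(() => orderService.PlaceAsync(dto));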

5.3 Process-Level Isolation

Core microservices, such as user registration, billing, and orders, can be deployed independently to achieve high availability.

5.3.1 Container Isolation

Microservices decouple a project into independent business modules, each deployed as an independent service. Deploying microservices in Docker containers makes upgrading and scaling straightforward and brings the following advantages:

High efficiency: with Docker, a microservice can be started and destroyed very quickly; under high load, microservices can be scaled elastically in seconds.

High performance: Docker container performance is close to that of the physical machine, roughly 20% higher than that of a VM.

Isolation: high-density microservice deployment is possible, and fine-grained resource isolation between containers helps guarantee the reliability of each microservice.

Portability: the runtime environment is packaged together with the application, solving deployment's environment-dependency problem. It is cross-platform: write once, run anywhere, so to speak.

5.3.2 VM Isolation

Besides Docker containers, VMs can also be used to isolate microservices. Compared with Docker containers, isolating microservices with VMs has the following advantages:

1. Better resource isolation for microservices: CPU, memory, network, and so on can be fully isolated.

2. For legacy systems whose hardware is already virtualized, the existing VMs can be used directly without deploying Docker containers inside them.

5.4 Cluster Fault Tolerance

Omitted

6. Design of the Cache and EventBus Middleware

6.1 Cache Middleware

Design goals:

The design is as follows:

The cache middleware uses a consistent hashing algorithm for distributed processing; configuring virtual nodes allows keys to be distributed evenly across cache servers.
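
A compact sketch of consistent hashing with virtual nodes as described above; the hash function (MD5) and the virtual-node count are illustrative choices, not necessarily what Surging's cache middleware uses:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class ConsistentHashRing
{
    private readonly SortedDictionary<uint, string> _ring = new();
    private const int VirtualNodes = 160; // more virtual nodes gives a more even key spread

    public void AddNode(string node)
    {
        // Each physical node appears many times on the ring under derived names.
        for (int i = 0; i < VirtualNodes; i++)
            _ring[Hash($"{node}#{i}")] = node;
    }

    public string GetNode(string key)
    {
        if (_ring.Count == 0) throw new InvalidOperationException("ring is empty");
        uint h = Hash(key);
        // Walk clockwise to the first virtual node at or after the key's hash.
        foreach (var entry in _ring)
            if (entry.Key >= h) return entry.Value;
        return _ring.First().Value; // wrap around to the start of the ring
    }

    private static uint Hash(string s)
    {
        using var md5 = MD5.Create();
        return BitConverter.ToUInt32(md5.ComputeHash(Encoding.UTF8.GetBytes(s)), 0);
    }
}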

Currently the cache middleware implements only Redis and MemoryCache as cache services; further implementations of CacheBase should follow later.

In the future, the cache middleware will provide configuration services to make it easier to manage cache service configuration and to display status information.

6.2 EventBus Middleware Design

Design goals:

The design is as follows:

1. Publisher: publishes a message event to a Topic through the event bus.

2. Event bus: decouples publishers from subscribers; it locates the registered event subscribers and delivers the message event to the topic.

3. Topic: the message route; subscribers that are online receive message events from it by broadcast.

4. Subscriber: receives messages from the event bus, that is, its Handle method is executed. Note that the handler's parameter type must match the type published by the publisher.
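
To make the four roles concrete, here is a minimal in-process event bus sketch in which the event type itself plays the role of the Topic. It illustrates the pattern only and is not Surging's RabbitMQ-backed EventBusRabbitMQ (nor hardened for concurrent subscription):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

class MiniEventBus
{
    // Event type acts as the Topic; the registered handlers are the Subscribers.
    private readonly ConcurrentDictionary<Type, List<Delegate>> _handlers = new();

    public void Subscribe<TEvent>(Action<TEvent> handle) =>
        _handlers.GetOrAdd(typeof(TEvent), _ => new List<Delegate>()).Add(handle);

    public void Publish<TEvent>(TEvent evt)
    {
        if (_handlers.TryGetValue(typeof(TEvent), out var subscribers))
            foreach (var subscriber in subscribers)
                ((Action<TEvent>)subscriber)(evt); // broadcast to every subscriber of this type
    }
}

class OrderCreated { public int OrderId; }

class Demo
{
    static void Main()
    {
        var bus = new MiniEventBus();
        // The handler's parameter type (OrderCreated) must match the published event's type.
        bus.Subscribe<OrderCreated>(e => Console.WriteLine($"Handle: order {e.OrderId}"));
        bus.Publish(new OrderCreated { OrderId = 7 });
    }
}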

7. Summary

Surging version 0.0.0.1 still has much room for improvement, for example routing fault tolerance, service degradation, overall deployment, a monitoring platform and a configuration-service platform, and the subsequent integration of third-party middleware. All of this work has been scheduled, and a stable version 1.0 will be released in the near future. If you are interested, join QQ group 615562965.
