Chris Richardson Microservices Series (Translation): Inter-Process Communication in a Microservices Architecture


All 7 parts of the Chris Richardson microservices series translation:

    • Introduction to Microservices
    • Building Microservices: Using an API Gateway
    • Building Microservices: Inter-Process Communication in a Microservices Architecture (this article)
    • Service Discovery in a Microservices Architecture
    • Event-Driven Data Management for Microservices
    • Choosing a Microservices Deployment Strategy
    • Refactoring a Monolith into Microservices

Original article: Building Microservices: Inter-Process Communication in a Microservices Architecture

Introduction

In a monolithic application, modules invoke one another through language-level method or function calls. A microservices-based application, by contrast, is a distributed system running on multiple machines, where each service instance is typically a process. Consequently, services must interact using an inter-process communication (IPC) mechanism.

We will look at specific IPC technologies later; first, let's consider the design issues.

Interaction styles

When selecting an IPC mechanism for a service, first consider how the services will interact. There are a variety of client-service interaction styles, which can be categorized along two dimensions:

The first dimension is one-to-one or one-to-many:

    • One-to-one: Each client request is processed by exactly one service instance
    • One-to-many: Each client request is processed by multiple service instances

The second dimension is whether the interaction is synchronous or asynchronous:

    • Synchronous: The client expects a timely response from the service and may block while it waits
    • Asynchronous: The client does not block while waiting, and the response, if there is one, does not need to arrive immediately

The following table shows how the two dimensions combine:

                 One-to-one                One-to-many
Synchronous      Request/response
Asynchronous     Notification              Publish/subscribe
                 Request/async response    Publish/async responses

There are the following kinds of one-to-one interactions (a Java sketch of all three follows the list):

    • Request/response: The client sends a request to the service and waits for a response, which it expects to arrive promptly. In a thread-based application, the thread that makes the request may block while waiting.
    • Notification (a one-way request): The client sends a request to the service, and no reply is expected or sent.
    • Request/async response: The client sends a request to the service, which replies asynchronously. The client does not block, and is written with the assumption that the response might not arrive for a while.
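
To make the three styles concrete, here is a minimal, hypothetical Java sketch of what a client-side interface for the Trip Management service might look like; the interface and method names are invented for illustration.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical client-side view of the Trip Management service, showing the
// shape of each one-to-one interaction style.
public interface TripManagementClient {

    // Request/response: the caller blocks until a reply arrives (or a timeout fires).
    String requestTrip(String passengerId, String pickupLocation);

    // Notification (one-way request): fire-and-forget, no reply is expected.
    void notifyDriverLocation(String driverId, String location);

    // Request/async response: the reply is delivered later through a future,
    // so the caller is not blocked.
    CompletableFuture<String> requestTripAsync(String passengerId, String pickupLocation);
}
```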

There are several one-to-many interaction modes:

    • Publish/subscribe: A client publishes a notification message, which is consumed by zero or more interested services.
    • Publish/async responses: A client publishes a request message and then waits a certain amount of time for responses from interested services.

Each service typically uses a combination of these interaction styles. For some services a single IPC mechanism is sufficient, while others need to combine several. The following describes how the services interact when a user requests a trip in a taxi-hailing application:

The services use a combination of notifications, request/response, and publish/subscribe. For example, the passenger's smartphone sends a notification to the Trip Management service to request a pickup. The Trip Management service uses request/response to invoke the Passenger service and verify that the passenger's account is valid. The Trip Management service then creates the trip and uses publish/subscribe to notify other services, including the Dispatcher, which locates an available driver.

Having looked at interaction styles, let's now look at how to define an API.

Defining the API

A service's API is a contract between the service and its clients. Regardless of which IPC mechanism you choose, it is important to precisely define the API using some kind of interface definition language (IDL). Define the service interface up front, review it with the client developers, and only then implement the service, iterating on the definition as needed. This API-first approach helps you build services that better meet the needs of their clients.

As you will see later in this article, the nature of the API definition depends on the IPC mechanism you select. If you use messaging, the API consists of the message channels and the message types. If you use HTTP, the API consists of the URLs and the request and response formats. We describe some IDLs in more detail below.

API Evolution

A service's API inevitably changes over time. In a monolithic application you can simply change the API and update all of its callers. In a microservices-based application this is much harder: even if all the consumers of your API are other services in the same application, you usually cannot force every client to upgrade in lockstep with the service. In addition, you will probably roll out new versions of a service incrementally, so that old and new versions run side by side. It is important to have a strategy for dealing with these issues.

How you handle an API change depends on how big the change is. Some changes are minor and can be made backward compatible with the previous version, such as adding an attribute to a request or response. In that case it pays to design clients and services around the robustness principle: clients using the old API keep working against the new one, the service supplies default values for missing request attributes, and clients ignore any extra attributes in responses (see the sketch below).
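
For example, here is a minimal sketch, using the Jackson library as one possible choice, of a client-side DTO written according to the robustness principle: unknown attributes added by a newer service version are ignored, and attributes missing from an older version fall back to defaults. The class and field names are hypothetical.

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// Tolerant reader: silently ignore attributes this (older) client does not know about.
@JsonIgnoreProperties(ignoreUnknown = true)
public class TripResponse {

    private String tripId;

    // Attribute added in a newer API version; the default is used when it is absent.
    private String surgeMultiplier = "1.0";

    public String getTripId() { return tripId; }
    public void setTripId(String tripId) { this.tripId = tripId; }

    public String getSurgeMultiplier() { return surgeMultiplier; }
    public void setSurgeMultiplier(String surgeMultiplier) { this.surgeMultiplier = surgeMultiplier; }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // A response from a newer server version with an extra, unknown attribute
        // still deserializes without errors.
        TripResponse r = mapper.readValue(
                "{\"tripId\":\"t-1\",\"estimatedFare\":\"12.50\"}", TripResponse.class);
        System.out.println(r.getTripId() + " x" + r.getSurgeMultiplier());
    }
}
```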

Sometimes you must make major, incompatible changes to an API. Since you cannot force all clients to upgrade immediately, the old version of the API has to keep running for a while. If you are using an HTTP-based IPC mechanism, one approach is to embed the version number in the URL, so that each service instance can handle multiple versions at once (a sketch follows). Alternatively, you can deploy each version as a separate set of instances.
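
As an illustration of embedding the version in the URL, here is a minimal Spring MVC sketch; Spring is used only as an example framework, and the paths and payloads are hypothetical. A single service instance serves both the old and the new API version.

```java
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TripQueryController {

    // Old, still-supported API version.
    @GetMapping("/v1/trips/{tripId}")
    public Map<String, Object> getTripV1(@PathVariable String tripId) {
        return Map.of("tripId", tripId, "status", "CREATED");
    }

    // New, incompatible API version served by the same instance.
    @GetMapping("/v2/trips/{tripId}")
    public Map<String, Object> getTripV2(@PathVariable String tripId) {
        return Map.of("tripId", tripId,
                      "state", Map.of("code", "CREATED", "since", "2015-07-24T10:00:00Z"));
    }
}
```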

Handling partial failure

Partial failure is a problem in any distributed system. Because clients and services are separate processes, a service may not be able to respond to a client's request in a timely way: it might be down because it crashed or is under maintenance, or it might simply be overloaded and responding very slowly.

Consider the product details page scenario described in the previous article in this series, and suppose the Recommendation service is unresponsive. A naive client might block indefinitely waiting for a response, which not only gives the user a bad experience but also ties up valuable resources such as threads; eventually the runtime runs out of threads and cannot respond to any request at all.

To prevent this, you must design your services to handle partial failure. A good set of approaches to follow is the one described by Netflix:

    • Network timeouts: Never block indefinitely while waiting for a response; always use timeouts, so that resources are never tied up forever.
    • Limiting the number of outstanding requests: Impose an upper bound on the number of outstanding requests a client may have to a particular service. Once the limit is reached, further requests fail fast instead of piling up.
    • Circuit breaker pattern: Track the number of successful and failed requests. If the failure rate exceeds a threshold, trip the circuit breaker so that subsequent requests fail immediately; when a large number of requests are failing, the service is probably unavailable and sending more requests is pointless. After a timeout, the client should try again and, if successful, close the circuit breaker.
    • Provide fallbacks: Perform fallback logic when a request fails, for example returning cached data or a default value.

Netflix Hystrix is an open source library that implements these patterns. If you are running on the JVM, Hystrix is well worth considering. If you are in a non-JVM environment, you should use an equivalent library. The sketch below illustrates the idea.
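
Here is a minimal sketch of the circuit breaker and fallback ideas using Hystrix; the command name is hypothetical and the remote call is stubbed out rather than implemented.

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Wraps the call to the Recommendation service with a timeout, a circuit breaker,
// and a fallback, all provided by Hystrix.
public class GetRecommendationsCommand extends HystrixCommand<String> {

    private final String customerId;

    public GetRecommendationsCommand(String customerId) {
        super(HystrixCommandGroupKey.Factory.asKey("RecommendationService"));
        this.customerId = customerId;
    }

    @Override
    protected String run() throws Exception {
        // Stand-in for the real HTTP call to the Recommendation service.
        return callRecommendationService(customerId);
    }

    @Override
    protected String getFallback() {
        // Fallback when the call fails, times out, or the circuit is open:
        // return an empty recommendation list instead of an error.
        return "[]";
    }

    private String callRecommendationService(String customerId) {
        // Hypothetical remote call; replace with a real HTTP client invocation.
        return "[\"product-1\",\"product-2\"]";
    }

    public static void main(String[] args) {
        String recommendations = new GetRecommendationsCommand("customer-42").execute();
        System.out.println(recommendations);
    }
}
```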

IPC Technology

There are many different IPC technologies to choose from. Services can use synchronous, request/response-based communication, such as HTTP-based REST or Thrift, or asynchronous, message-based communication, such as AMQP or STOMP. There is also a choice of message formats: services can use text-based, human-readable formats such as JSON or XML, or more efficient binary formats such as Avro or Protocol Buffers.

Asynchronous, message-based communication

When using messaging, processes communicate by exchanging messages asynchronously. A client makes a request by sending a message to the service; if the service is expected to reply, it does so by sending a separate message back to the client. Because the communication is asynchronous, the client does not block waiting for a reply, and it is written with the assumption that the reply will not necessarily arrive immediately.

A message consists of headers (metadata such as the sender) and a message body. Messages are exchanged over channels: any number of producers can send messages to a channel, and any number of consumers can receive messages from it. There are two kinds of channels: point-to-point and publish/subscribe.

    • Point-to-point channels: Each message on the channel is delivered to exactly one consumer; these channels are used for the one-to-one interaction styles described earlier
    • Publish/subscribe channels: Each message on the channel is delivered to all interested consumers; these channels are used for the one-to-many interaction styles

The following shows how the taxi-hailing application might use publish/subscribe channels:

The Trip Management service notifies interested services, such as the Dispatcher, about a new trip by writing a Trip Created message to a publish/subscribe channel. The Dispatcher finds an available driver and notifies other services by writing a Driver Proposed message to a publish/subscribe channel (a sketch of publishing such an event follows).
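
As one possible illustration, here is a minimal sketch of publishing a Trip Created event to a publish/subscribe channel with the RabbitMQ Java client; the exchange name, payload, and broker host are assumptions made for the example.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class TripCreatedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a broker running locally

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // A fanout exchange behaves like a publish/subscribe channel: every
            // queue bound to it (Dispatcher, notifications, ...) gets a copy.
            channel.exchangeDeclare("trip-events", "fanout");

            String event = "{\"type\":\"TripCreated\",\"tripId\":\"t-1\",\"passengerId\":\"p-42\"}";
            channel.basicPublish("trip-events", "", null,
                    event.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```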

There are many messaging systems to choose from; prefer one that supports a variety of programming languages. Some messaging systems support standard protocols such as AMQP and STOMP, while others have proprietary protocols. Open source options include RabbitMQ, Apache Kafka, Apache ActiveMQ, and NSQ. At a high level they all support some form of messages and channels, and they all strive for reliability, high performance, and scalability. A matching subscriber sketch follows.
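
Continuing the RabbitMQ sketch above (RabbitMQ is used only as an example; any of the brokers listed here would work), a subscriber such as the Dispatcher binds its own queue to the same fanout exchange, so every interested service receives each event.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.nio.charset.StandardCharsets;

public class DispatcherSubscriber {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a broker running locally

        // The connection and channel stay open so the subscriber keeps consuming.
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.exchangeDeclare("trip-events", "fanout");
        // Each subscriber gets its own (server-named) queue bound to the exchange.
        String queue = channel.queueDeclare().getQueue();
        channel.queueBind(queue, "trip-events", "");

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String event = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println("Dispatcher received: " + event);
        };
        channel.basicConsume(queue, true, onMessage, consumerTag -> { });
    }
}
```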

There are many advantages to using messaging:

    • Decouples the client from the service: A client simply sends a message to the appropriate channel; it does not need to be aware of the service instances at all, and so does not need a discovery mechanism to locate them.
    • Message buffering: With a synchronous request/response protocol such as HTTP, both the client and the service must be available for the duration of the exchange. With messaging, the broker queues messages until the consumer can process them. An online store, for example, can keep accepting customer orders even when the order fulfillment system is slow or unavailable; the order messages simply queue up.
    • Flexible client-service interaction: Messaging supports all of the interaction styles described earlier.
    • Explicit inter-process communication: RPC-style mechanisms try to make invoking a remote service look just like calling a local one. However, because of partial failure, the two are quite different. Messaging makes these differences explicit, so developers are not lulled into a false sense of security.

Of course, the messaging system also has drawbacks:

    • Additional operational complexity: The messaging system is yet another component that must be installed, configured, and operated, and it must be highly available; otherwise it undermines the availability of the whole system.
    • Complexity of implementing request/response: Each request message must contain a reply channel identifier and a correlation ID. The service writes a response message containing the correlation ID to the reply channel, and the client uses the correlation ID to match the response with the original request (see the sketch below). It is often easier to use an IPC mechanism that supports request/response directly.
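
To illustrate the extra bookkeeping, here is a minimal sketch of the request side of request/response over messaging with the RabbitMQ Java client: the request carries a reply queue name and a correlation ID that the server is expected to copy into its response. The queue names and payload are hypothetical.

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class PassengerVerificationRequester {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a broker running locally

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Exclusive, server-named queue on which this client expects the reply.
            String replyQueue = channel.queueDeclare().getQueue();
            String correlationId = UUID.randomUUID().toString();

            // The reply channel and correlation ID travel in the message properties;
            // the server must echo the correlation ID so the client can match the
            // response to this request.
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(correlationId)
                    .replyTo(replyQueue)
                    .build();

            channel.basicPublish("", "verify-passenger-requests", props,
                    "{\"passengerId\":\"p-42\"}".getBytes(StandardCharsets.UTF_8));
            // A real client would now consume from replyQueue and filter on correlationId.
        }
    }
}
```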

Synchronous, request/response IPC

When using a synchronous, request/response-based IPC mechanism, a client sends a request to a service, the service processes it and sends back a response, and the client may block while waiting. Alternatively, the client code can be asynchronous and event-driven, for example wrapping the call in a Future or an Rx Observable. The two most popular protocols in this category are REST and Thrift.

REST

Today it is fashionable to develop APIs in the RESTful style. REST is an HTTP-based IPC mechanism whose key concept is the resource, identified by a URL, which typically represents a business object (such as a customer or product) or a collection of business objects. REST uses the HTTP verbs to manipulate resources: for example, a GET request returns a representation of a resource, perhaps as an XML document or JSON object, a POST request creates a new resource, and a PUT request updates one. Roy Fielding, the creator of REST, once said:

REST provides a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems.


The following shows one way the taxi-hailing application might use REST:

The passenger's smartphone requests a trip by making a POST request to the /trips resource of the Trip Management service. The Trip Management service handles the request by sending a GET request for the passenger's information to the Passenger Management service. After verifying that the passenger is authorized, it creates the trip and returns a 201 response.
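
A minimal sketch of the Trip Management side of this exchange, written with Spring MVC and RestTemplate purely as an illustration; the passenger-management URL, paths, and payload fields are assumptions.

```java
import java.net.URI;
import java.util.Map;
import java.util.UUID;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class TripController {

    private final RestTemplate restTemplate = new RestTemplate();

    @PostMapping("/trips")
    public ResponseEntity<Map<String, String>> createTrip(@RequestBody Map<String, String> request) {
        String passengerId = request.get("passengerId");

        // Request/response call to the (hypothetical) Passenger Management service;
        // an exception propagates if the passenger cannot be retrieved.
        restTemplate.getForObject(
                "http://passenger-management/passengers/" + passengerId, Map.class);

        // Create the trip and answer with 201 Created plus the new resource's location.
        String tripId = UUID.randomUUID().toString();
        return ResponseEntity
                .created(URI.create("/trips/" + tripId))
                .body(Map.of("tripId", tripId, "passengerId", passengerId));
    }
}
```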

Leonard Richardson defines a maturity model for REST, divided into the following four levels:

    • Level 0: Clients of a level 0 web service invoke it by making HTTP POST requests to its single URL endpoint, specifying in each request the action to perform, its target, and any parameters
    • Level 1: A level 1 service supports the idea of resources; to perform an action on a resource, a client makes a POST request that specifies the action and any parameters
    • Level 2: A level 2 service uses the HTTP verbs to perform actions, for example GET to retrieve, POST to create, and PUT to update
    • Level 3: The API of a level 3 service follows the HATEOAS (Hypertext As The Engine Of Application State) principle: the representation of a resource returned by a GET request contains links for the actions that are allowed on it. For example, a client cancels an order using a link contained in the order representation returned by a GET request. Among the benefits of HATEOAS are that clients no longer need hard-coded URLs and no longer have to guess what actions can currently be performed on a resource (see the sketch after this list)
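
As an illustration of level 3 only, here is a minimal, hypothetical Spring MVC sketch that returns an order representation together with the links describing what the client may do next; the URL scheme and field names are invented for this example.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    @GetMapping("/orders/{orderId}")
    public Map<String, Object> getOrder(@PathVariable String orderId) {
        Map<String, Object> order = new LinkedHashMap<>();
        order.put("orderId", orderId);
        order.put("status", "OPEN");

        // HATEOAS: the representation carries the links for the actions that are
        // currently allowed, so the client never hard-codes these URLs.
        Map<String, String> links = new LinkedHashMap<>();
        links.put("self", "/orders/" + orderId);
        links.put("cancel", "/orders/" + orderId + "/cancellation");
        order.put("_links", links);
        return order;
    }
}
```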

Advantages of HTTP-based protocols:

    • Simple and familiar to everyone
    • You can test an HTTP API from a browser or with command-line tools such as curl or Postman
    • It directly supports request/response-style communication
    • It does not require an intermediate broker, which simplifies the system's architecture

Drawbacks of HTTP:

    • It directly supports only the request/response style of interaction
    • There is no intermediary buffering messages, so both the client and the service must be running for the duration of the exchange
    • The client must know the URLs of the service instances, which in practice requires a service discovery mechanism

Thrift

Apache Thrift is an interesting alternative to REST. It is a framework for writing cross-language RPC clients and servers. Thrift provides a C-style interface definition language for defining your APIs, and you use the Thrift compiler to generate client-side stubs and server-side skeletons. The compiler can generate code for a variety of languages, including C++, Java, Python, PHP, Ruby, Erlang, and Node.js.

A Thrift interface consists of one or more services. A service definition is analogous to a Java interface: it is a set of strongly typed methods. A Thrift method can either return a value or be defined as one-way. Methods that return a value implement the request/response style of interaction: the client waits for a response and might receive an exception instead. One-way methods correspond to the notification style of interaction: the server does not send a response.

Thrift supports several message formats: JSON, binary, and compact binary. Binary is more efficient than JSON because it is faster to decode, and compact binary is even more space-efficient, while JSON is the most human- and browser-friendly. Thrift also gives you a choice of transport protocols: raw TCP or HTTP. Raw TCP is likely to be more efficient than HTTP, but HTTP is friendlier to firewalls, browsers, and humans. A hedged client sketch follows.
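
Purely as a sketch: assuming a Thrift IDL defining a TripService has been compiled to Java (TripService, its Client stub, and the createTrip method are hypothetical), a client using the binary protocol over raw TCP might look like this.

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ThriftTripClient {
    public static void main(String[] args) throws Exception {
        // Raw TCP transport to the (hypothetical) Trip Management Thrift server.
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();
        try {
            // Binary protocol: more compact and faster to decode than JSON.
            TProtocol protocol = new TBinaryProtocol(transport);

            // TripService.Client is the Thrift-generated client stub (hypothetical IDL).
            TripService.Client client = new TripService.Client(protocol);

            // Request/response style: blocks until the server replies or throws.
            String tripId = client.createTrip("p-42", "1 Main St");
            System.out.println("Created trip " + tripId);
        } finally {
            transport.close();
        }
    }
}
```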

Message format

It is important to pick a message format that works across languages, even if you are writing all of your microservices in a single language today; who can guarantee that you will not use other languages in the future?

There are two main kinds of message formats: text and binary. Text formats include JSON and XML. Their advantage is that they are not only human-readable but also self-describing. In JSON, the attributes of an object are represented as a collection of name-value pairs; in XML, they are represented by named elements and values. A consumer can pick out the values it is interested in and ignore the rest, so many format changes are easily backward compatible.

The structure of an XML document is specified by an XML schema. Over time, developers have realized that JSON needs a similar mechanism; one option is JSON Schema, used either standalone or as part of an IDL such as Swagger.

A big downside of text formats is that messages tend to be verbose, especially with XML: because the messages are self-describing, every message carries the attribute names in addition to the values. Another drawback is the overhead of parsing text. If efficiency matters, consider a binary format.

There are several binary formats to choose from. If you are using Thrift, you can use binary Thrift. Otherwise, popular choices include Protocol Buffers and Apache Avro, both of which provide an IDL for defining the structure of your messages. One difference is that Protocol Buffers uses tagged fields, whereas an Avro consumer needs to know the schema in order to interpret messages; as a result, API evolution is easier with Protocol Buffers than with Avro. Martin Kleppmann wrote a blog post with a detailed comparison of Thrift, Protocol Buffers, and Avro.

Summary

Microservices must communicate using an inter-process communication mechanism. When designing how your services will communicate, you need to consider several issues: how the services interact, how to define each service's API, how to evolve the API, and how to handle partial failure. There are two kinds of IPC mechanisms that microservices can use: asynchronous messaging and synchronous request/response. In the next article in the series, we will look at the problem of service discovery in a microservices architecture.

