Simplified Client for message bus refactoring


The message bus was recently refactored once again. This round focused on the components of the message bus and on simplifying the pub/sub client, and it also prompted some broader thoughts about the message bus.

Reducing the complexity of the client

The previous client had to connect to two distributed components at the same time. Accessing the message bus required the user to supply the pubsuberHost and pubsuberPort parameters, so the client first connected to Pubsuber. Since the message bus is built on RabbitMQ, it also had to connect to RabbitMQ. The user program did not need to provide the RabbitMQ server address, because that information was obtained indirectly through Pubsuber.

The idea at the time was to keep the user from facing the MQ server directly, for security, and because the choice of MQ could in theory vary among several options while remaining transparent to the user. But for a back-end component, security is not the most important goal, and the cost of replacing the MQ would amount to rewriting the message bus, which is unlikely to happen. This design also brought extra complexity and a higher failure rate: when either Pubsuber or RabbitMQ failed, the message bus fell into chaos. Combined with the long-lived connection scenarios in the message bus (such as push-mode consumption), a single component failure could force a restart of the client program just to reinitialize its connections.

Having the message bus client connect to only a single component, RabbitMQ, greatly reduces the probability of failure, and the failure-retry mechanism provided by the official RabbitMQ client handles the remaining cases well.
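As a rough sketch of what this single-connection setup can look like with the official RabbitMQ Java client (the host and interval values here are placeholders, not the project's actual configuration):

    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class BusConnection {
        public static Connection open() throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("mq.example.com"); // placeholder broker address
            factory.setPort(5672);
            // Let the official client transparently recover the connection,
            // channels and consumers after a network failure, instead of
            // forcing the client program to restart.
            factory.setAutomaticRecoveryEnabled(true);
            factory.setNetworkRecoveryInterval(5000); // retry every 5 seconds
            return factory.newConnection();
        }
    }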

Obtaining authorization information with RPC

Because the previous Pubsuber component served as the data source for authorization information, removing it meant redesigning how remote authorization information is obtained. Since RabbitMQ supports a lightweight RPC pattern, we can fetch authorization information from the backend via JSON-based RPC and let the backend interact with Pubsuber. This idea had come up before; later, while using HBase, I noticed that its Java client also interacts with the master node through RPC, and that settled the decision. In fact, the RPC form greatly simplifies the client's logic and also greatly reduces the cost of upgrades.
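A minimal sketch of this request-reply pattern over RabbitMQ follows; the "auth.rpc" queue name and the string payload are assumptions for illustration, not the message bus's actual protocol:

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import java.nio.charset.StandardCharsets;
    import java.util.UUID;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class AuthRpcClient {
        // Ask the backend for authorization info and block until it replies.
        public static String fetchAuthInfo(Channel channel, String appKey) throws Exception {
            // Server-named, exclusive, auto-delete reply queue for this call.
            String replyQueue = channel.queueDeclare().getQueue();
            String corrId = UUID.randomUUID().toString();
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(corrId)
                    .replyTo(replyQueue)
                    .build();
            channel.basicPublish("", "auth.rpc", props, appKey.getBytes(StandardCharsets.UTF_8));

            BlockingQueue<String> response = new ArrayBlockingQueue<>(1);
            channel.basicConsume(replyQueue, true, (tag, delivery) -> {
                if (corrId.equals(delivery.getProperties().getCorrelationId())) {
                    response.offer(new String(delivery.getBody(), StandardCharsets.UTF_8));
                }
            }, tag -> { });
            return response.take(); // e.g. a JSON string describing permissions
        }
    }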

Modifying the implementation of broadcast and pub/sub

Previously, Pubsuber also played two roles in the client: implementing the broadcast mechanism and implementing real-time control. Therefore, removing Pubsuber from the client meant re-implementing these two functions, that is, finding another mechanism to support real-time push. Considering that RabbitMQ itself can deliver instant consumption over long-lived connections, we chose to implement both directly on RabbitMQ.

We created a new inner exchange for routing these messages. Unlike our other exchanges, which are of the topic type, it uses the less common headers exchange type.

Headers type: routing is based on key-value pairs carried in the message headers.

Since we need to re-implement the two functions above, we divide messages into two categories: event and notice.

    • event: internal control messages
    • notice: broadcast messages

When sending either kind of message, the sender only needs to set the corresponding key-value pair in the message headers to get automatic routing. The two message types correspond to the two kinds of queues bound to the inner exchange.
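A sketch of how such bindings might be declared with the Java client; the exchange and queue names here are illustrative, not necessarily the ones used by the bus:

    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;
    import java.util.HashMap;
    import java.util.Map;

    public class InnerExchangeSetup {
        public static void bind(Channel channel) throws Exception {
            channel.exchangeDeclare("inner.exchange", BuiltinExchangeType.HEADERS, true);

            // A headers exchange delivers a message to a queue only when the
            // message headers match the queue's binding arguments.
            Map<String, Object> eventArgs = new HashMap<>();
            eventArgs.put("x-match", "all"); // every listed pair must match
            eventArgs.put("type", "event");
            channel.queueBind("event.queue", "inner.exchange", "", eventArgs);

            Map<String, Object> noticeArgs = new HashMap<>();
            noticeArgs.put("x-match", "all");
            noticeArgs.put("type", "notice");
            channel.queueBind("notice.queue", "inner.exchange", "", noticeArgs);
        }
    }

Note that the routing key is left empty: with a headers exchange, only the binding arguments decide where a message goes.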

How to save resources on the RabbitMQ server

Given that almost every client needs to receive both types of messages, creating two permanent queues under that exchange for every client would be rather wasteful. The better practice is to create two temporary queues that live only for the client's session, so that once the client is done with them the queues can be destroyed immediately and their resources reclaimed.

Thanks to RabbitMQ's rich feature set, we can do this easily. When the client is instantiated, it internally creates two temporary, exclusive queues. A temporary, exclusive queue is visible only to the connection that created it; when that connection is broken, or when its consumer count drops from above zero to zero, the queue is deleted. A queue with these attributes lives for almost exactly one session.

The temporary and exclusive properties are obtained by setting the auto-delete and exclusive flags to true when the queue is declared.

After these two queues are created, the current client is immediately attached as a consumer waiting on the event and notice queues.
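The following sketch shows one way a session-scoped queue could be declared and consumed (the inner exchange name is again an assumption):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.DeliverCallback;
    import java.util.HashMap;
    import java.util.Map;

    public class SessionQueues {
        // Declare a temporary, exclusive queue for one message type
        // ("event" or "notice") and attach this client as its consumer.
        public static void attach(Channel channel, String type) throws Exception {
            // durable=false, exclusive=true, autoDelete=true: the queue lives
            // only as long as this connection, i.e. one session.
            String queue = channel.queueDeclare("", false, true, true, null).getQueue();

            Map<String, Object> args = new HashMap<>();
            args.put("x-match", "all");
            args.put("type", type);
            channel.queueBind(queue, "inner.exchange", "", args);

            DeliverCallback onMessage = (tag, delivery) ->
                    System.out.println(type + ": " + new String(delivery.getBody()));
            channel.basicConsume(queue, true, onMessage, tag -> { });
        }
    }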

The sender does not need to know the exact names of these two queues; it only needs to know the proxy exchange and the routing key to the inner exchange, and then mark event or notice in the headers of the message it sends.

Code snippet:

    InnerEventEntity eventEntity = new InnerEventEntity();
    eventEntity.setIdentifier(channel);
    eventEntity.setValue(new String(data));
    eventEntity.setType("event");
    String jsonObjStr = gson.toJson(eventEntity);

    Message eventMsg = MessageFactory.createMessage(MessageType.QueueMessage);
    Map<String, Object> map = new HashMap<String, Object>(1);
    map.put("type", "event");
    eventMsg.setHeaders(map);
    eventMsg.setContent(jsonObjStr.getBytes());

    AMQP.BasicProperties properties = MessageHeaderTransfer.box(eventMsg);
    proxyProducer.produce(Constants.PROXY_EXCHANGE_NAME, mqChannel,
                          EVENT_ROUTING_KEY_NAME, eventMsg.getContent(), properties);

Removing the Pubsuber encapsulation

Before getting rid of it, I want to explain why it was encapsulated in the first place. When I originally wrapped the message bus, I knew of Redis and ZooKeeper, and they share some common features, such as:

    • both can access small amounts of data in key-value form
    • both can provide real-time pub/sub push of changes

This is exactly what the message bus client's Pubsuber requires, but to keep the choice open I put a layer of encapsulation over these two features, so that whichever configuration-change component was chosen could be adapted without modifying code. That was the original purpose of the encapsulation.
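One can imagine the encapsulation as a small interface roughly like the following; the method names are hypothetical, for illustration only:

    import java.util.function.Consumer;

    // Hypothetical abstraction over Redis/ZooKeeper as a config source:
    // key-value access plus real-time push of changes.
    public interface ConfigSource {
        String get(String key);
        void set(String key, String value);
        void watch(String key, Consumer<String> onChange);
    }

With such an interface, either backend can be plugged in without touching caller code.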

Why remove the encapsulation

First, removing the encapsulation means settling on ZooKeeper and excluding Redis. Besides discovering that a great deal of open source software already uses ZooKeeper for exactly this need, and that this is ZooKeeper's specialty while for Redis these features are only a sideline, the most critical issue is that once naming-service features are involved, Redis is no longer appropriate.

In a distributed service there is a good chance there will be more than one component, and there are logical relationships between these components and the applications, not simply flat relationships. In many cases we need to organize these relationships into a tree-like structure. For example, now that the message bus has become just one component in a larger platform, we need to reflect this relationship in the configuration, so it may change from the original flat layout to a tree-like, file-system-like form.

In such a file-system-like tree structure, Redis is powerless to implement linkage behavior such as receiving change events for child nodes. This is because Redis's pub/sub feature applies only to the key-value (string) type; in other words, its values can only express a single flat level. Of course, to represent multi-level relationships you can use "." as a separator in keys, such as "app1" and "app1.message"; you can then see the relationship between them, but technically they remain peers and cannot produce linked change notifications. So in some scenarios ZooKeeper is irreplaceable.
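For contrast, here is a minimal sketch of the kind of child-node watch that ZooKeeper supports natively (the path is illustrative):

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ChildWatch {
        // Fire a callback whenever children are added to or removed from a node.
        public static void watchChildren(ZooKeeper zk, String path) throws Exception {
            zk.getChildren(path, (WatchedEvent event) -> {
                if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
                    System.out.println("children of " + path + " changed");
                    try {
                        watchChildren(zk, path); // ZK watches fire once; re-register
                    } catch (Exception ignored) { }
                }
            });
        }
    }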

Some considerations on topology trade-offs

The original goal of the message bus leaned primarily toward message delivery. However, some extra capabilities crept into the implementation, such as the RPC functionality mentioned above. At a purely technical level the message bus just sends and receives messages. But once you include the subjects doing the sending and receiving (that is, the producers and the consumers), a new positioning emerges. If some consumers do things that are general-purpose, fundamental, needed by many, or purely technical, then the consumers handling that kind of message are in effect offering a service. For example:

    • storing data into Elasticsearch
    • sending SMS
    • sending mail
    • pushing messages to mobile clients

The message bus can provide these services directly to third-party applications. Of course, under this semantics the RPC server is also a service (just a synchronous one), and other queues may also be providing some kind of service, though in a more exclusive way. The message bus therefore also has the ability to provide services and the foundation for building them.

So I have been thinking that the overall roadmap could be built along these lines.


Message Leasing

The communication model of messaging is inherently asynchronous: the message bus itself cannot know when a message will be consumed, so a long-term backlog of messages can overwhelm the bus. It is therefore worth considering changing the permanent residence of messages into time-limited residence. It depends on the business: some business messages are time-sensitive, so a message that has not been consumed within a few weeks has almost no meaning anymore, and it occupies the bus server's memory or disk in vain; some of these messages may never be consumed at all.

So-called message leasing means changing the current permanent mode into a temporarily resident queue mode. How long a given message survives depends on the TTL (time to live) set for it, and the TTL is estimated by the queue requester according to their own business characteristics. Of course, the TTL can also be set to permanent, but that should require an audit.
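RabbitMQ already has the primitives for this kind of lease: a per-queue message TTL, or a per-message expiration. A rough sketch, with illustrative names and TTL values:

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import java.util.HashMap;
    import java.util.Map;

    public class LeasedQueue {
        // Every message in this queue is dropped after two weeks.
        public static void declare(Channel channel) throws Exception {
            Map<String, Object> args = new HashMap<>();
            args.put("x-message-ttl", 14 * 24 * 3600 * 1000); // milliseconds
            channel.queueDeclare("leased.queue", true, false, false, args);
        }

        // Or lease an individual message via its expiration property.
        public static void publishLeased(Channel channel, byte[] body) throws Exception {
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .expiration(String.valueOf(7 * 24 * 3600 * 1000L)) // one week, ms
                    .build();
            channel.basicPublish("", "leased.queue", props, body);
        }
    }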

The necessity of a proxy

There are many patterns for extending a component, for example proxy, smart client, and plugin. The message bus is a wrapper around RabbitMQ, and RabbitMQ itself officially favors the plugin extension mechanism; unfortunately, plugins are written in Erlang, a language we have no real grasp of.

Comparing the proxy and smart client models, each has its own strengths. For example, a proxy is less intrusive, and centralized control is easier to manage with a proxy than with a smart client. I will not compare them at length here.

The current extension around RabbitMQ takes the smart client form. But this approach has its limitations: in a distributed environment, resources on the server are often shared (such as a queue in RabbitMQ, which can be consumed by multiple clients at the same time). You can picture it as many branches converging into one river; no single branch can control the whole flow, so for that you can only rely on a proxy server.

I have seen that Ctrip's open source messaging system places a proxy (they call it a broker) in front of Kafka and MySQL. Whether we will build a proxy depends on how things progress, but the presence of a proxy can bring many benefits.

For more content, visit: http://vinoyang.com

Copyright notice: this is the blogger's original article; please do not reproduce it without the blogger's permission.
