MQ
MQ is a type of message middleware. The mainstream message middleware products are ActiveMQ, RabbitMQ, and Kafka; the company currently uses RabbitMQ.
The most important components of message middleware are the Producer, Consumer, Exchange, Queue, and Message.
Before a message can be pushed, MQ must be configured: the Producer declares an Exchange, the Exchange is bound to a Queue, and the Queue is bound to a Consumer.
There are two other key concepts: the routing key and the binding key.
Routing key: when the producer sends a message to an Exchange, it usually specifies a routing key that determines how the message is routed. The routing key only takes effect in combination with the Exchange type and the binding key. Since the Exchange type and binding keys are normally fixed in configuration, the producer controls where a message goes simply by choosing the routing key when publishing to the Exchange. RabbitMQ limits the routing key to 255 bytes.
Binding key: when an Exchange is bound to a Queue, a binding key is generally specified. When the producer publishes a message to the Exchange with a routing key, the message is routed to the corresponding Queue if the routing key matches the binding key. Multiple Queues bound to the same Exchange may use the same binding key. The binding key does not take effect in all cases; it depends on the Exchange type. For example, a fanout Exchange ignores the binding key and routes messages to every Queue bound to it.
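To make the two keys concrete, here is a minimal sketch using the Python pika client against a local RabbitMQ broker (the exchange, queue, and key names are illustrative assumptions, not the company's actual configuration). Note that pika passes the binding key through a parameter also named routing_key in queue_bind.

```python
import pika

# Connect to a local broker (assumed defaults).
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Configuration step: declare the exchange, declare the queue,
# and bind them with a binding key.
channel.exchange_declare(exchange='order.events', exchange_type='direct')
channel.queue_declare(queue='order.created.queue')
channel.queue_bind(exchange='order.events',
                   queue='order.created.queue',
                   routing_key='order.created')   # the binding key

# Producer side: publish with a routing key; because the exchange is
# direct, the message only reaches queues whose binding key matches it.
channel.basic_publish(exchange='order.events',
                      routing_key='order.created',
                      body=b'{"orderId": 1}')

# Consumer side: the queue delivers the message to its bound consumer.
def on_message(ch, method, properties, body):
    print('received:', body)

channel.basic_consume(queue='order.created.queue',
                      on_message_callback=on_message,
                      auto_ack=True)
# channel.start_consuming()  # blocks; uncomment to actually consume
connection.close()
```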
There are four types of Exchange: direct, topic, headers, and fanout.
direct: Route the message to the Queue(s) whose binding key exactly matches the message's routing key.
topic: The binding key may contain wildcards: * matches exactly one word and # matches zero or more words. Example bindings:
binding key = *.orange.* --> Q1
binding key = *.*.rabbit --> Q2
binding key = lazy.# --> Q2
Take the above bindings as an example: a message with routingKey="quick.orange.rabbit" is routed to both Q1 and Q2; routingKey="lazy.orange.fox" is also routed to Q1 and Q2; routingKey="lazy.brown.fox" is routed to Q2; routingKey="lazy.pink.rabbit" is routed to Q2 (delivered only once, even though it matches both of Q2's binding keys); messages with routingKey="quick.brown.fox", routingKey="orange", or routingKey="quick.orange.male.rabbit" are discarded because they match no binding key. A code sketch of these bindings appears after the summary below.
headers: A headers Exchange does not rely on routing key/binding key matching; it routes based on the headers attribute of the message. When a message is sent to the Exchange, RabbitMQ takes the message headers (also key-value pairs) and compares them with the key-value pairs specified when the Queue was bound to the Exchange; if they match completely, the message is routed to that Queue, otherwise it is not.
fanout: Route all messages sent to the Exchange to all Queues bound to it.
To summarize the flow: the producer pushes a message to the Exchange; based on the Exchange type and its Queue bindings, the Exchange routes the message to the matching Queue(s), and each Queue delivers it to the Consumer bound to it.
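As a sketch of the topic-exchange example above (the exchange name and connection details are assumptions), the bindings and two test publishes could look like this with pika:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='animal.topic', exchange_type='topic')
for q in ('Q1', 'Q2'):
    channel.queue_declare(queue=q)

# Binding keys from the example; pika passes the binding key as routing_key.
channel.queue_bind(exchange='animal.topic', queue='Q1', routing_key='*.orange.*')
channel.queue_bind(exchange='animal.topic', queue='Q2', routing_key='*.*.rabbit')
channel.queue_bind(exchange='animal.topic', queue='Q2', routing_key='lazy.#')

# Matches Q1's and Q2's binding keys, so it is routed to both queues.
channel.basic_publish(exchange='animal.topic',
                      routing_key='quick.orange.rabbit', body=b'hello')
# Matches no binding key, so the exchange drops it.
channel.basic_publish(exchange='animal.topic',
                      routing_key='quick.brown.fox', body=b'bye')

connection.close()
```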
Redis
Redis is an open-source (BSD-licensed), in-memory data structure store that can be used as a database, cache, and message broker. It supports many data types, such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and it provides high availability through Redis Sentinel and automatic partitioning with Redis Cluster.
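A minimal sketch with the redis-py client shows several of these data types in use (the host/port and key names are illustrative assumptions):

```python
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

r.set('page:home', '<html>...</html>', ex=60)             # string with a 60 s TTL (cache use case)
r.hset('user:1', mapping={'name': 'alice', 'age': '30'})  # hash
r.lpush('recent:logins', 'user:1')                        # list
r.sadd('online:users', 'user:1')                          # set
r.zadd('leaderboard', {'user:1': 42})                     # sorted set
print(r.zrangebyscore('leaderboard', 0, 100))             # range query by score
```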
Nginx
Task distribution among the servers of a cluster is handled by the Nginx service.
Nginx is generally used for load balancing, and its distribution strategies are (a configuration sketch follows the list):
1) Round robin (default): requests are allocated to the back-end servers one by one in order; if a back-end server goes down, it is automatically removed from rotation.
2) weight: specifies the polling weight; the weight is proportional to the share of requests a server receives, which is useful when back-end servers have uneven performance.
3) ip_hash: each request is allocated according to the hash of the client IP, so a given visitor always reaches the same back-end server, which helps solve session-affinity problems.
4) fair: requests are allocated according to the back-end servers' response times, with shorter response times served first (requires a third-party module).
5) url_hash: requests are distributed according to the hash of the requested URL, so the same URL always goes to the same back-end server, which is effective when the back-end servers act as caches.
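These strategies are selected in the upstream block of nginx.conf. The sketch below assumes two hypothetical back-end servers; the commented-out directives show how the other strategies would be switched on (fair comes from the third-party nginx-upstream-fair module):

```nginx
upstream backend {
    # 1) round robin is the default when no other directive is present
    server 192.168.0.11:8080 weight=3;   # 2) weight: a higher weight receives more requests
    server 192.168.0.12:8080 weight=1;

    # ip_hash;              # 3) pin each client IP to one back-end server
    # fair;                 # 4) shortest response time first (third-party module)
    # hash $request_uri;    # 5) url_hash: the same URL always hits the same back end
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # forward requests to the upstream group
    }
}
```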