Python Learning - RabbitMQ Chapter

Source: Internet
Author: User
Tags: message queue, RabbitMQ

I. Introduction

Why is RabbitMQ used? It is used to send messages - it is message queueing. How does it differ from the thread and process queues we learned about earlier in Python? In fact they do the same kind of job. Let's start from the Python queues we have been studying.

    1. Thread queue: used only to pass data between multiple threads of one process.
    2. Process queue: used only for interaction between a parent process and its child processes, or between child processes under the same parent.

If you have two separate programs - even two completely independent Python programs - they still cannot communicate through Python's thread or process queues.

So here is the problem: suppose I have two separate Python programs, or a Python and a Java program, or a PHP program, or two separate machines, all taking part in a producer/consumer model. Python's thread and process queues cannot make them communicate. What then? We need an intermediary broker, and that broker is RabbitMQ.
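For contrast, here is a minimal sketch of the thread queue mentioned above: Python's built-in queue.Queue moves data between threads of a single process, which is exactly the boundary RabbitMQ is brought in to cross.

```python
import queue
import threading

q = queue.Queue()

def producer():
    # put messages onto the in-process queue
    for i in range(3):
        q.put("msg %d" % i)

t = threading.Thread(target=producer)
t.start()
t.join()

# a consumer thread in the SAME process can read them back;
# a separate program could not see this queue at all
received = [q.get() for _ in range(3)]
print(received)  # ['msg 0', 'msg 1', 'msg 2']
```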

II. Ways of sending messages

III. Installing RabbitMQ

For Windows, download the installer directly from the official site: http://www.rabbitmq.com/install-windows-manual.html

After installation completes, a RabbitMQ service appears among the Windows services; if it is not started, it is recommended to set it to start automatically on boot.

IV. The working principle of RabbitMQ is as follows:

V. Basic RabbitMQ usage examples

1. The producer's (Producer) main working steps are as follows:

establish a socket -> declare a channel -> declare a queue -> publish content to the queue through an empty-name exchange (a message cannot be sent directly to a queue) -> close the connection

import pika

# first establish a socket connection through this instance
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
# declare a channel (pipe)
channel = connection.channel()
# declare a queue and name it "basic_1"
channel.queue_declare(queue="basic_1")
# In RabbitMQ a message can never be sent directly to the queue,
# it always needs to go through an exchange.
channel.basic_publish(exchange="",
                      routing_key="basic_1",  # name of the queue
                      body="Hello World!")    # body is the content you send
print("[x] Sent 'Hello World!'")
# close the connection directly
connection.close()

2. The consumer's (Consumer) main working steps are as follows:

create a socket -> create a channel -> declare the queue -> declare the callback function -> consume the message -> start consuming

# the consumer may well run on another machine
import pika

# establish a socket connection
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
# create a channel
channel = connection.channel()

# You may ask why we declare the queue again - we already declared it in the
# producer code. We could avoid this if we were sure the queue already exists,
# for example if send.py was run first. But we are not yet sure which program
# runs first, so in such cases it is good practice to declare the queue in
# both programs.
channel.queue_declare(queue="basic_1")

def callback(ch, method, properties, body):
    print("--->", ch, method, properties)
    print("[x] Received %r" % body)

'''
ch:         the channel object (the pipe the message arrived on)
method:     routing information - who sent it and to which queue; rarely used
properties: the properties the sending side attached to the message
body:       the message sent by the sending side
'''

channel.basic_consume(callback,          # when a message arrives, call the callback to process it
                      queue="basic_1",   # name of the queue
                      no_ack=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
# Once started, start_consuming() runs forever: it does not stop after one
# message but keeps receiving, blocking here whenever there is no message.
channel.start_consuming()

Summarize:

1. If the consumer does not declare the queue and the consumer starts first, it will raise an error; if the producer starts first and the consumer afterwards, there is no error. If the consumer does declare the queue, starting the consumer first causes no error either; and if the producer has already declared it, the repeated declaration is simply ignored.
2. Everything transmitted over the socket is of type bytes.
3. Consumers and producers do not have to be on the same machine; they can also run elsewhere.
4. Once started, a consumer keeps running and receives forever. The producer can be run many times, and each time it runs, the running consumer receives the message.

VI. Message dispatch: round-robin

1. When one producer feeds multiple consumers, messages are dispatched round-robin: each consumer fairly consumes one message in turn.
2. When one producer sends many messages to multiple consumers, the same round-robin mechanism distributes them fairly across the consumers.
3. About no_ack=True in the consumer code: normally it is not set, which guarantees that if a connection breaks, the message is re-dispatched to the next consumer. With no_ack=True, RabbitMQ deletes the message as soon as the consumer receives it, so if the consumer then fails, the data is lost. Without no_ack=True, the message is deleted only when the consumer's callback acknowledges it with:

channel.basic_ack(delivery_tag=method.delivery_tag)

4. RabbitMQ detects whether the socket is broken; once it knows the connection is down, the message is re-dispatched to the next consumer.
5. The order in which consumers start determines their position in the rotation.
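The round-robin dispatch described in points 1 and 2 can be modeled in a few lines of plain Python (a toy illustration with made-up consumer names, not pika code): itertools.cycle hands each successive message to the next consumer in turn.

```python
from itertools import cycle

# three consumers, in the order they started (point 5 above)
consumers = ["consumer-1", "consumer-2", "consumer-3"]
rr = cycle(consumers)

# six messages from one producer are dealt out fairly, one per consumer in turn
assignments = [(msg, next(rr)) for msg in ["m1", "m2", "m3", "m4", "m5", "m6"]]
for msg, consumer in assignments:
    print(msg, "->", consumer)  # m1 -> consumer-1, m2 -> consumer-2, ...
```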

VII. RabbitMQ Message Persistence

1. Queue persistence: after RabbitMQ restarts, the queue still exists, but the data in the queue is gone.

In both the producer and the consumer, the queue declaration becomes:

# declare the queue
channel.queue_declare(queue='hello', durable=True)

2. Message persistence: after RabbitMQ restarts, the queue exists and the data in the queue also survives; that is, the messages are persisted.

The queue must first be made durable; then the publish call on the producer side is changed to:

# send data
channel.basic_publish(exchange='',
                      routing_key='hello5',
                      body='hello,you is welcomed!111222',
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make the message persistent
                      ))

This achieves message persistence: after RabbitMQ restarts, messages that have not yet been consumed remain in the queue.

VIII. Fair dispatch

If RabbitMQ simply sent messages to consumers in order, regardless of each consumer's load, a consumer on a low-spec machine could pile up a large backlog while a high-spec consumer stays idle. To solve this, configure prefetch_count=1 on each consumer side, which tells RabbitMQ: do not send me a new message until I have finished processing the current one.

channel.basic_qos(prefetch_count=1)

Note that this fairness means: the more capable the consumer, the more work it gets. If your consumer processes slowly, it is dispatched less; if it processes more, and faster, it is sent more messages. Before the server sends a message to a client it checks how many messages that client still holds: if the client still has an unprocessed message, the server sends nothing more; only once the client has no outstanding message is the next one sent.

IX. Types of Exchange (broadcast modes)

The previous examples were basically 1-to-1 sending and receiving at the queue level: a message could only be sent to a specified queue. But sometimes you want a message to be received by all queues, something like a broadcast. For that you need an Exchange. First, the official description of Exchange:

An exchange is a very simple thing. On one side it receives messages from producers and on the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it get discarded? The rules for that are defined by the exchange type.

An Exchange is declared with a type, and the type determines exactly which queues match and can receive the message:

    1. fanout: every queue bound to this exchange receives the message (pure broadcast; every consumer gets it)
    2. direct: only the queue determined jointly by the routing key and the exchange receives the message
    3. topic: every bound queue whose binding key matches the routing key (the binding key may be a pattern here) receives the message
    4. headers: decides which queues receive the message based on message headers (rarely used; we normally do not use it)
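As a rough mental model (plain Python with made-up queue names, not the real broker), the first two exchange types can be pictured as lookups over a binding table:

```python
# toy model of fanout vs direct routing; queue names and keys are invented
bindings = [
    ("q1", "error"),
    ("q2", "error"),
    ("q3", "info"),
]

def fanout_route(bindings):
    # fanout: every bound queue gets the message; the routing key is ignored
    return [q for q, _ in bindings]

def direct_route(bindings, routing_key):
    # direct: only queues bound with exactly this key get the message
    return [q for q, key in bindings if key == routing_key]

print(fanout_route(bindings))           # ['q1', 'q2', 'q3']
print(direct_route(bindings, "error"))  # ['q1', 'q2']
```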
9.1 Fanout broadcast mode

Description: in fanout mode, every queue bound to the exchange receives every message. Exchange => forwarder.

Producer (fanout_publisher)

Note: unlike the earlier examples, the producer does not declare a queue. Since the producer broadcasts, there is no need to declare a queue here.

import pika

# create a socket connection
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
# create a channel
channel = connection.channel()
# declare an exchange named "logs" whose type is fanout
channel.exchange_declare(exchange='logs', exchange_type='fanout')
# the message to send
message = "info: Hello World!"
# produce a message
channel.basic_publish(exchange='logs',
                      routing_key='',
                      body=message)
print("[x] Sent {0}".format(message))
# close the connection
connection.close()

Consumer (fanout_consumer)

Description: the consumer declares a queue with a unique, server-generated name and reads that name back from the returned object.

import pika

# create a socket connection
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
# create a channel
channel = connection.channel()
# declare the exchange named "logs", type fanout (broadcast mode)
channel.exchange_declare(exchange="logs", exchange_type="fanout")
# No queue name is given, so RabbitMQ assigns a random one.
# exclusive=True (exclusive, unique) deletes the queue automatically once the
# consumer using it disconnects. result is the queue-declare response object.
result = channel.queue_declare(exclusive=True)
# get the queue name
queue_name = result.method.queue
# bind the queue to the exchange
channel.queue_bind(exchange="logs", queue=queue_name)
print(' [*] Waiting for logs. To exit press CTRL+C')

# declare the callback function
def callback(ch, method, properties, body):
    """callback function"""
    print("[x] {0}".format(body))

# the consumer consumes
channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
# start consuming
channel.start_consuming()

  

1. The producer does not declare a queue, so why must the consumer declare one?
When a producer publishes a message to an exchange, the exchange iterates over all queues bound to it and delivers the message to each; consumers then receive it from those queues. That is how the broadcast works. The exchange never sends messages directly to consumers: consumers only read messages from queues, so each consumer must hold a queue and bind it to the exchange.
2. Why is the queue generated automatically instead of named by hand?
The queue exists only for the broadcast. It is generated automatically when the consumer connects, every generated queue is different, and the queue is destroyed automatically when the consumer stops consuming.

3. Broadcasting is real-time
The broadcast is real-time only: if the consumer is not running when the message is sent, the message is missed and cannot be recovered. If the consumer is running when the producer sends, the consumer receives it. This is publish/subscribe, the broadcast mode.

9.2 Direct broadcast mode

Queues are bound with a keyword. The sender publishes data to the exchange together with a keyword, and the exchange forwards the data to the queue(s) bound with that keyword.

Direct broadcast mode logic diagram:

Producer Code:

import pika, sys

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# define a direct-type exchange
channel.exchange_declare(exchange="direct_logs", exchange_type="direct")
# define the severity, i.e. which log level to send
severity = sys.argv[1] if len(sys.argv) > 1 else "info"
message = ' '.join(sys.argv[2:]) or "Hello World!"
# send the message
channel.basic_publish(exchange="direct_logs",
                      routing_key=severity,
                      body=message)
print("[x] Sent %r:%r" % (severity, message))
connection.close()

Consumer Code:

import pika, sys

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# define a direct-type exchange
channel.exchange_declare(exchange="direct_logs", exchange_type="direct")
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
# the severity levels are entered manually on the command line
severities = sys.argv[1:]
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
    sys.exit(1)
# loop over the severities and bind the message queue for each
for severity in severities:
    channel.queue_bind(exchange="direct_logs",
                       queue=queue_name,
                       routing_key=severity)
print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    """callback function"""
    print("[x] %r:%r" % (method.routing_key, body))

# consume messages
channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()

In this mode, which broadcast data a consumer receives depends on the arguments the program was started with.

The consumer must be given one or more of the arguments info, warning, error - that is, it specifies which severities it consumes.

If the producer is started without arguments, it defaults to the info level; otherwise it sends the data with the routing_key given on the command line.

9.3 Topic: fine-grained message filtering

In direct mode we distinguished messages by binding levels such as error and warning. Coming back to logs: suppose you want a finer distinction. You are searching for error, but there are also warnings; on Linux there is the system log, which collects the logs of all applications on the system. Now suppose I want to split it up: which lines are MySQL's log and which are Apache's - and the MySQL log itself contains info, warning and error, as does Apache's. So we need finer distinctions: more granular message filtering.

Topic Broadcast Mode logic diagram:

Code implementation:

Producers:

import pika, sys

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# declare a topic-type exchange
channel.exchange_declare(exchange="topic_logs", exchange_type="topic")
routing_key = sys.argv[1] if len(sys.argv) > 1 else "anonymous.info"
message = ' '.join(sys.argv[2:]) or "Hello World!"
channel.basic_publish(exchange="topic_logs",
                      routing_key=routing_key,
                      body=message)
print("[x] Sent %r:%r" % (routing_key, message))
connection.close()

Consumers:

import pika, sys

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# declare a topic-type exchange
channel.exchange_declare(exchange="topic_logs", exchange_type="topic")
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)
# loop over the binding keys and bind the queue for each
for binding_key in binding_keys:
    channel.queue_bind(exchange="topic_logs",
                       queue=queue_name,
                       routing_key=binding_key)
print(' [*] Waiting for logs. To exit press CTRL+C')

# callback function
def callback(ch, method, properties, body):
    """callback function"""
    print("[x] %r:%r" % (method.routing_key, body))

# the consumer consumes
channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()

  

Running the producer:
python topic_sender.py mysql.info system started successfully!
python topic_sender.py app.error NullPointer error!

That is, the first producer argument names the source and the level, e.g.
for an application: app.error, app.info, app.warning
for MySQL: mysql.error, mysql.info, mysql.warning
and so on.

The remaining arguments carry the concrete information for each case: the exception, the information, the warning.

When consuming, binding keys are matched by the following rules.

To receive all logs ("#" matches every routing key):
python receive_logs_topic.py "#"

# match only keys starting with "app."
python receive_logs_topic.py "app.*"

# match only keys ending with ".error"
python receive_logs_topic.py "*.error"

You can create multiple bindings:
# bind several patterns at once
python receive_logs_topic.py "app.*" "*.error"
# match only one exact kind of message
python receive_logs_topic.py "app.info"
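The wildcard rules above ("*" substitutes exactly one word, "#" substitutes zero or more words) can be reproduced in a few lines of plain Python; this sketches the matching semantics only, not how RabbitMQ actually implements bindings:

```python
def topic_match(pattern, routing_key):
    """Return True if a topic binding pattern matches a routing key."""
    def match(pat, words):
        if not pat:
            # pattern exhausted: match only if the key is exhausted too
            return not words
        if pat[0] == "#":
            # '#' matches zero or more words
            return any(match(pat[1:], words[i:]) for i in range(len(words) + 1))
        if not words:
            return False
        if pat[0] == "*" or pat[0] == words[0]:
            # '*' matches exactly one word; literals must match exactly
            return match(pat[1:], words[1:])
        return False
    return match(pattern.split("."), routing_key.split("."))

print(topic_match("#", "mysql.info"))         # True
print(topic_match("app.*", "app.error"))      # True
print(topic_match("*.error", "mysql.error"))  # True
print(topic_match("app.*", "mysql.error"))    # False
```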

9.4 RabbitMQ RPC Implementation

So far messages have only flowed one way: one side sends a message, the other receives it. Now the problem: I send a command to a remote machine, it executes the command, and I want the result sent back to me. What is this model called? RPC => Remote Procedure Call.

How is the result returned?

Answer: both the server and the client act as consumer and producer at the same time.

RPC Mode logic diagram:

Code implementation:

RPC----CLIENT

import pika, uuid, time

class FibonacciRpcClient(object):
    """Fibonacci RPC client"""

    def __init__(self):
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host="localhost"))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(self.on_response, no_ack=True,
                                   queue=self.callback_queue)

    def on_response(self, ch, method, props, body):
        print("---->", method, props)
        # the reply is the one we asked for only if the correlation id
        # matches; this keeps the data consistent
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange="",
                                   routing_key="rpc_queue",
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.corr_id),
                                   body=str(n))
        while self.response is None:
            # non-blocking version of start_consuming()
            self.connection.process_data_events()
            print("no msg...")
            time.sleep(0.5)
        return int(self.response)

if __name__ == "__main__":
    fibonacci_rpc = FibonacciRpcClient()
    print("[x] Requesting fib(30)")
    response = fibonacci_rpc.call(30)
    print("[.] Got %r" % response)

Notes:
1. To avoid blocking yet still check for replies periodically, do not use start_consuming(); use connection.process_data_events() instead. It does not block: whether or not a message has arrived, it returns and execution continues.
2. reply_to tells the server which queue to put the result in after executing the command.
3. In the `while self.response is None` loop the client does not have to just sleep; it can keep sending messages to the server. Those messages do not necessarily arrive in order, so without the `self.corr_id == props.correlation_id` check, the data could end up wrong.

RPC----SERVER

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="rpc_queue")

def fib(n):
    """Fibonacci sequence"""
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

def on_request(ch, method, props, body):
    n = int(body)
    print("[.] fib(%s)" % n)
    response = fib(n)
    ch.basic_publish(exchange="",
                     routing_key=props.reply_to,
                     # props holds the client's message; sending its
                     # correlation_id back lets the client verify the reply
                     properties=pika.BasicProperties(
                         correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_request, queue="rpc_queue")
print("[x] Awaiting RPC requests")
channel.start_consuming()

  

Note: props.reply_to is the queue the client wants the result returned to.

If the client and the server used the same queue, a client publishing to rpc_queue would receive its own message, creating a loop in which the data is never processed.
