HelloWorld
Brief introduction
RabbitMQ accepts and forwards messages; it can be thought of as a "post office." Senders and receivers interact through queues. A queue's size can be treated as unlimited, multiple senders can publish to the same queue, and multiple receivers can consume messages from the same queue.
Code
RabbitMQ speaks the AMQP protocol; the recommended Python client is pika:
pip install pika -i https://pypi.douban.com/simple/
send.py
# coding: utf8
import pika

# establish a connection to the local RabbitMQ server
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()  # get a channel
The connection here is to the local machine; to connect to a RabbitMQ server on another host, just fill in its address or hostname.
Next we send a message. Make sure the queue that will receive the message exists first, otherwise RabbitMQ will discard the message.
channel.queue_declare(queue='hello')          # create the queue 'hello' in RabbitMQ
channel.basic_publish(exchange='',            # use the default exchange to send the message to the queue
                      routing_key='hello',    # send to the queue 'hello'
                      body='Hello World!')    # message content
connection.close()                            # close the connection, flushing buffers at the same time
By default RabbitMQ requires 1 GB of free disk space; otherwise sending will fail.
A message is now stored in the local queue hello. You can check with rabbitmqctl list_queues:
hello    1
This shows there is a hello queue with one message stored in it.
receive.py
# coding: utf8
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
Here we also connect to the server first, the same as in send.py.
channel.queue_declare(queue='hello')  # declare the queue again to make sure 'hello' exists; declaring it
                                      # multiple times is harmless, and this prevents an error if the
                                      # receiver is started before the sender

def callback(ch, method, properties, body):  # callback invoked when a message is received
    print("[x] Received %r" % body)

channel.basic_consume(callback,
                      queue='hello',   # consume from the queue 'hello'
                      no_ack=True)     # do not send an ack back to the server after handling a message
channel.start_consuming()  # start receiving messages; this enters an endless loop
Work Queue (Task queue)
Work queues are used to distribute time-consuming tasks among multiple worker processes. Instead of performing a resource-intensive task immediately and waiting for it to complete, we schedule the task to be executed later: we send the task as a message to the queue, and a worker process picks it up and eventually executes it; several worker processes can work on the same queue. This fits web applications, where complex tasks should not be done within the short processing window of an HTTP request.
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make the message persistent
                      ))
Messages are dispatched in round-robin fashion, so each worker process receives the same number of messages.
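To make this concrete, here is a minimal worker sketch (not part of the original text; it assumes the same local broker and pika 0.x API as receive.py) that simulates a time-consuming task by sleeping one second per dot in the message body:

# coding: utf8
import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)  # durable, as in the persistence section below

def callback(ch, method, properties, body):
    print("[x] Received %r" % body)
    time.sleep(body.count(b'.'))  # pretend the task takes one second per dot
    print("[x] Done")

channel.basic_consume(callback, queue='task_queue', no_ack=True)  # auto-ack for now; acks are discussed next
channel.start_consuming()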
Message ack
If a message is delivered to a worker process but the worker crashes before it finishes processing, the message may be lost, because RabbitMQ deletes a message as soon as it has been delivered to a worker.
To prevent message loss, RabbitMQ provides acknowledgements: after a worker receives and processes a message, it sends an ack back to RabbitMQ, telling it that the message can be removed from the queue. If the worker dies and RabbitMQ receives no ack, the message is redelivered to another worker. No timeout needs to be configured, even if the task takes a long time to process.
Acks are enabled by default; earlier our worker explicitly disabled them with no_ack=True.
channel.basic_consume(callback, queue='hello')  # acks enabled (no_ack defaults to False)
Callback with ACK:
def callback(ch, method, properties, body):
    print("[x] Received %r" % body)
    time.sleep(body.count(b'.'))  # simulate a time-consuming task
    print("[x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # send the ack
Message persistence
However, if RabbitMQ itself is restarted, messages are still lost. You can make the queue durable when declaring it (a queue's properties cannot be changed once it has been declared):
channel.queue_declare(queue='task_queue', durable=True)
The message itself must also be marked persistent when it is published:
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make the message persistent
                      ))
However, if RabbitMQ has just received a message and has not yet had a chance to store it, the message can still be lost; RabbitMQ also does not sync every message to disk the moment it arrives. If you need a stronger guarantee, use publisher confirms.
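As a rough illustration only (not in the original text): with pika 0.x, confirm_delivery() enables publisher confirms on a BlockingConnection channel, and basic_publish then reports whether the broker confirmed the message; newer pika versions signal this differently (by raising exceptions), so treat this as a sketch:

# coding: utf8
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
channel.confirm_delivery()  # enable publisher confirms on this channel

# in pika 0.x, basic_publish returns True only once the broker has acked the message
confirmed = channel.basic_publish(exchange='',
                                  routing_key='task_queue',
                                  body='A task...',
                                  properties=pika.BasicProperties(delivery_mode=2))
print("confirmed" if confirmed else "not confirmed")
connection.close()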
Fair dispatch of messages
Round-robin dispatch may not be fair. For example, if every odd-numbered message is a heavy task, some workers keep getting the heavy work. Even when a worker has a backlog of unprocessed messages, many of them still unacked, RabbitMQ will keep sending it messages in turn. You can add a setting on the consuming side:
channel.basic_qos(prefetch_count=1)
This tells RabbitMQ not to give a new message to a worker until that worker has acknowledged the previous one.
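Putting acks, persistence, and prefetch together, a complete fair-dispatch worker might look like this (an illustrative sketch under the same assumptions as above, not code from the original text):

# coding: utf8
import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)

def callback(ch, method, properties, body):
    print("[x] Received %r" % body)
    time.sleep(body.count(b'.'))  # simulate work: one second per dot
    print("[x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after the work is done

channel.basic_qos(prefetch_count=1)                   # no new message until the previous one is acked
channel.basic_consume(callback, queue='task_queue')   # acks enabled (no_ack defaults to False)
channel.start_consuming()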
Broadcast (Publish/Subscribe)
Normally a message is sent to one worker process, which handles it and is done; sometimes you want to deliver a message to multiple processes at the same time:
Exchange
The producer never sends a message directly to a queue; in fact the producer usually does not even know which queue the message will end up in. The producer can only send messages to an exchange. An exchange receives messages from producers on one side and pushes them to queues on the other. So the exchange needs to know what to do with a message when it receives one: append it to one particular queue, to many queues, or discard it. Exchanges come in several types, such as direct, topic, headers, and fanout; broadcasting uses fanout. Earlier, when publishing a message, the exchange value '' meant "use the default exchange."
channel.exchange_declare(exchange='logs', type='fanout')  # this exchange sends messages to all queues it knows about
Temporary queue
result = channel.queue_declare()                 # create a queue with a random name
result = channel.queue_declare(exclusive=True)   # also make it exclusive: the queue is deleted when the consumer disconnects
queue_name = result.method.queue
result.method.queue holds the generated queue name, which can be used when sending or receiving.
Binding exchanges and queues
channel.queue_bind(exchange='logs', queue='hello')
The logs exchange will now send a copy of each message to the hello queue.
When sending a message, use the logs exchange we just created:
channel.basic_publish(exchange='logs', routing_key='', body=message)
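Putting the broadcast pieces together, an illustrative receiver sketch (a hypothetical receive_logs.py, under the same pika 0.x assumptions as above):

# coding: utf8
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs', type='fanout')
result = channel.queue_declare(exclusive=True)   # temporary queue, deleted when we disconnect
queue_name = result.method.queue
channel.queue_bind(exchange='logs', queue=queue_name)

def callback(ch, method, properties, body):
    print("[x] %r" % body)

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()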
Routing
We already used bind above. A binding is the relationship between an exchange and a queue (the queue is interested in messages from that exchange), and you can also specify a routing_key when binding.
Using Direct exchange
A direct exchange sends a message to the queues whose binding routing key matches the message's routing key:
channel.exchange_declare(exchange='direct_logs', type='direct')
The sending side publishes messages with the severity as the routing key:
channel.basic_publish(exchange='direct_logs', routing_key=severity, body=message)
The receiving side binds its queue to the severities it is interested in:
channel.queue_bind(exchange='direct_logs', queue=queue_name, routing_key=severity)
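For example, an illustrative receiver that binds one temporary queue to several severities taken from the command line (the script name and the 'info' default are assumptions made for this sketch):

# coding: utf8
import sys
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='direct_logs', type='direct')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:] or ['info']  # e.g. python receive_logs_direct.py warning error
for severity in severities:
    channel.queue_bind(exchange='direct_logs', queue=queue_name, routing_key=severity)

def callback(ch, method, properties, body):
    print("[x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()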
Using topic Exchange
The direct exchange used above can only bind an exact routing key. A topic exchange instead works on routing keys made of dot-separated words, for example:
"Stock.usd.nyse" "NYSE.VMW"
As with a direct exchange, a message is delivered to queues whose binding key matches the routing key given at send time, but two characters have special meaning:
* matches exactly one word
# matches zero or more words
Suppose the sender uses routing keys with three parts, of the form celerity.colour.species, and two queues are bound as follows:
Q1: *.orange.* matches messages whose middle part (the colour) is orange.
Q2: *.*.rabbit matches messages whose last part (the species) is rabbit, and lazy.# matches messages whose first part is lazy.
quick.orange.rabbit is received by both Q1 and Q2; quick.orange.fox is received only by Q1. lazy.pink.rabbit matches Q2's bindings twice but is delivered only once. A queue bound with just # receives everything.
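A minimal topic sketch to illustrate the matching (illustrative only; the exchange name topic_logs is an assumption, not from the text, and the same pika 0.x API is assumed):

# coding: utf8
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='topic_logs', type='topic')

# receiver side: bind a temporary queue with topic patterns
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
for binding_key in ['*.orange.*', '*.*.rabbit', 'lazy.#']:
    channel.queue_bind(exchange='topic_logs', queue=queue_name, routing_key=binding_key)

# sender side: the routing key decides which bindings match
channel.basic_publish(exchange='topic_logs',
                      routing_key='quick.orange.rabbit',
                      body='A quick orange rabbit')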
RPC
Run a function on the remote machine and get the result.
1. The client starts by creating a temporary queue to receive callbacks and consuming from it:
self.connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
self.channel = self.connection.channel()
result = self.channel.queue_declare(exclusive=True)
self.callback_queue = result.method.queue
self.channel.basic_consume(self.on_response, no_ack=True, queue=self.callback_queue)
2. The client sends the RPC request with reply_to set to the callback queue and correlation_id set to a unique id for each request (it would be possible to create a callback queue per RPC request, but that is inefficient; if a client uses a single callback queue, correlation_id is needed to match a reply to its request). It then blocks on the callback queue until a reply is received.
Note: a reply with an unknown correlation_id is simply discarded. This covers the case where the server has already replied but dies before sending the ack; after restarting, the server will process the task again and reply again, but by then the request has already been handled.
self.channel.basic_publish(exchange='',
                           routing_key='rpc_queue',
                           properties=pika.BasicProperties(
                               reply_to=self.callback_queue,
                               correlation_id=self.corr_id,
                           ),
                           body=str(n))        # make the call
while self.response is None:                   # this effectively blocks
    self.connection.process_data_events()      # check the callback queue
return int(self.response)
3. The request is sent to the rpc_queue queue.
4. The RPC server takes the request from rpc_queue, executes it, and sends the reply:
channel.basic_consume(on_request, queue='rpc_queue')  # wait for requests

# after processing, inside on_request:
ch.basic_publish(exchange='',
                 routing_key=props.reply_to,           # send the reply to the callback queue
                 properties=pika.BasicProperties(correlation_id=props.correlation_id),
                 body=str(response))
ch.basic_ack(delivery_tag=method.delivery_tag)         # send the ack
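For context, an illustrative on_request handler on the server side (the fib function and the surrounding setup are assumptions in the spirit of the usual RPC example, not taken from the text above):

# coding: utf8
import pika

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def on_request(ch, method, props, body):
    n = int(body)
    response = fib(n)  # do the actual work
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,  # reply to the client's callback queue
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='rpc_queue')
channel.basic_qos(prefetch_count=1)  # spread requests fairly across servers
channel.basic_consume(on_request, queue='rpc_queue')
channel.start_consuming()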
5. The client reads the reply from the callback queue, checks the correlation_id, and acts on it:
if self.corr_id == props.correlation_id:
    self.response = body