The Tenth Week of the Python Automation Development Study: RabbitMQ

Source: Internet
Author: User
Tags: message queue, rabbitmq

Delivering messages with RabbitMQ message queues

Installation: http://www.rabbitmq.com/install-standalone-mac.html

If you are installing on Windows, you also need to install Erlang first.

Install the Python RabbitMQ client

pip install pika
or: easy_install pika
or install from source: https://pypi.python.org/pypi/pika

(The code in this post uses the pika 0.x API; pika 1.0 later changed the argument order of basic_consume and renamed no_ack to auto_ack.)

The simplest queue communication

http://www.rabbitmq.com/getstarted.html

Producer

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))  # establish a socket connection
channel = connection.channel()  # open a channel
channel.queue_declare(queue="hello")  # declare the queue
# In RabbitMQ a message can never be sent directly to the queue;
# it always needs to go through an exchange.
channel.basic_publish(exchange="",
                      routing_key="hello",
                      body="Hello World")
print("produce send to consume")
connection.close()  # close the connection

Consumer

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# You may ask why we declare the queue again - we already declared it in the
# producer. We could avoid that if we were sure the queue already exists, for
# example if the producer was run first. But we are not yet sure which program
# runs first, so it is good practice to declare the queue in both programs.
channel.queue_declare(queue="hello")

def callback(ch, method, properties, body):
    print(ch, method, properties)
    print(body)

channel.basic_consume(callback, queue="hello", no_ack=True)
print("Waiting for messages. To exit press CTRL+C")
channel.start_consuming()

Round-robin message dispatching

This is a one-to-many situation: one producer with multiple consumers.

In this mode, RabbitMQ by default distributes the messages sent by the producer (P) to the individual consumers (C) in turn, similar to load balancing.
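The in-turn dispatching can be sketched with a toy model (plain Python, not pika; the function name is illustrative): each published message simply goes to the next consumer in rotation.

```python
from itertools import cycle

def round_robin_dispatch(messages, consumers):
    """Toy model of RabbitMQ's default dispatch: assign each
    message to the next consumer in rotation."""
    assignments = {c: [] for c in consumers}
    rotation = cycle(consumers)
    for msg in messages:
        assignments[next(rotation)].append(msg)
    return assignments

# Five messages across three consumers: m1->c1, m2->c2, m3->c3, m4->c1, m5->c2
dispatched = round_robin_dispatch(["m1", "m2", "m3", "m4", "m5"],
                                  ["c1", "c2", "c3"])
print(dispatched)  # {'c1': ['m1', 'm4'], 'c2': ['m2', 'm5'], 'c3': ['m3']}
```

Note the broker does not consider how busy each consumer is; that limitation is addressed by fair dispatch below.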

Producer

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))  # establish a socket connection
channel = connection.channel()  # open a channel
channel.queue_declare(queue="hello")  # declare the queue
# In RabbitMQ a message can never be sent directly to the queue;
# it always needs to go through an exchange.
channel.basic_publish(exchange="",
                      routing_key="hello",
                      body="Hello World")
print("produce send to consume")
connection.close()  # close the connection

Consumer

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="hello")

def callback(ch, method, properties, body):
    print(ch, method, properties)
    print(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # manually acknowledge the message

channel.basic_consume(callback, queue="hello")  # note: no_ack is removed here
print("Waiting for messages. To exit press CTRL+C")
channel.start_consuming()

Start the producer first, then start three consumers. When the producer sends several messages, you will see the messages received by the consumers in turn.

If a consumer suddenly disconnects while the producer is sending data, how do we protect the data from being lost?

Remove no_ack from the consumer. Now, if a consumer disconnects before finishing a message, the message is redelivered: if the 1st consumer dies without acknowledging, the message goes to the 2nd consumer; if that one also disconnects, it goes to the 3rd, and so on.
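The redelivery behavior can be sketched with a toy model (plain Python, not pika; the function name and the will_ack flag are illustrative): a message leaves the queue for good only once a consumer acknowledges it.

```python
from collections import deque

def deliver_with_acks(message, consumers):
    """Toy model of ack-based redelivery. consumers is a list of
    (name, will_ack) pairs tried in order; returns the names of the
    consumers the message was delivered to."""
    attempts = []
    pending = deque([message])
    for name, will_ack in consumers:
        if not pending:
            break
        msg = pending.popleft()   # delivered, but still unacknowledged
        attempts.append(name)
        if not will_ack:
            pending.append(msg)   # consumer died: the broker requeues it
        # if will_ack: basic_ack was sent, the message is gone for good
    return attempts

# c1 and c2 disconnect before acking, so the message reaches c3 as well:
print(deliver_with_acks("task", [("c1", False), ("c2", False), ("c3", True)]))
# ['c1', 'c2', 'c3']
```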

Message persistence

Acknowledgements only protect against consumer failure. If the RabbitMQ server itself goes down, the queue and its messages would still be lost. How can we guarantee that neither the queue nor the messages are lost in that case?

channel.queue_declare(queue="hello", durable=True)  # make the queue durable so the queue itself is not lost

channel.basic_publish(exchange="",
                      routing_key="hello",
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make the message persistent so it is not lost
                      ))

Fair distribution of messages

If RabbitMQ just sends messages to consumers in order, regardless of consumer load, it is likely that a consumer on a low-spec machine piles up a backlog of unprocessed messages while a high-spec consumer stays idle. To solve this, configure prefetch_count=1 on each consumer side, which tells RabbitMQ not to send this consumer a new message until it has finished processing the current one.
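The effect can be sketched with a toy time-step model (plain Python, not pika; the function name and speeds are illustrative): when the broker only hands out a message once the previous one is done, work ends up following capacity instead of being split blindly.

```python
def fair_dispatch(n_messages, speeds):
    """Toy model of prefetch_count=1: speeds[i] is how many messages
    consumer i can finish per time step, and a consumer is only given
    as many messages as it can finish. Assumes at least one speed > 0.
    Returns how many messages each consumer ends up processing."""
    done = [0] * len(speeds)
    remaining = n_messages
    while remaining > 0:
        for i, speed in enumerate(speeds):
            take = min(speed, remaining)  # only take what you can finish now
            done[i] += take
            remaining -= take
    return done

# A fast consumer (4 msgs/step) and a slow one (1 msg/step) share 10 messages:
print(fair_dispatch(10, [4, 1]))  # [8, 2] - no backlog piles up on the slow one
```

Compare with blind round-robin, which would give each consumer 5 messages and leave the slow one with a long backlog.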

Full code: message persistence + fair dispatch

Producer

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))  # establish a socket connection
channel = connection.channel()  # open a channel
channel.queue_declare(queue="hello1",
                      durable=True)  # declare a durable queue
channel.basic_publish(exchange="",
                      routing_key="hello1",
                      body="Hello World",
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make the message persistent
                      ))
print("produce send to consume")
connection.close()  # close the connection

Consumer

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="hello1",
                      durable=True)

def callback(ch, method, properties, body):
    print(ch, method, properties)
    print(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)  # fair dispatch: just add prefetch_count=1
channel.basic_consume(callback,
                      queue="hello1")
print("Waiting for messages. To exit press CTRL+C")
channel.start_consuming()

Message publish and subscribe

The previous examples were essentially one-to-one sending and receiving: a message could only be delivered to a specified queue. Sometimes, however, you want a message to be received by all queues, like a broadcast. For that you need an exchange.

An exchange is a very simple thing. On one side it receives messages from producers, and on the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it get discarded? The rules for that are defined by the exchange type.

An exchange is declared with a type, and the type determines exactly which queues match and can receive the message:


fanout: all queues bound to this exchange receive the message


direct: only the queues whose binding key exactly matches the message's routing key receive the message


topic: all queues bound with a pattern (which may contain wildcards) that matches the message's routing key receive the message

Wildcard symbols: "#" matches zero or more words, "*" matches exactly one word (words in a routing key are separated by dots).
Example: "#.a" matches "a.a", "aa.a", "aaa.a", etc.
"*.a" matches "a.a", "b.a", "c.a", etc.
Note: with a binding key of "#", a topic exchange behaves the same as fanout.
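The wildcard rules can be made concrete with a small matcher (a toy sketch in plain Python; RabbitMQ's actual matcher is a more efficient trie, and the function name here is illustrative):

```python
def topic_matches(binding_key, routing_key):
    """Toy topic matcher: '*' matches exactly one dot-separated word,
    '#' matches zero or more words."""
    def match(pattern, key):
        if not pattern:
            return not key            # both exhausted -> match
        head, rest = pattern[0], pattern[1:]
        if head == "#":
            # '#' may consume zero words, or one word and stay in place
            return match(rest, key) or (bool(key) and match(pattern, key[1:]))
        if not key:
            return False
        if head == "*" or head == key[0]:
            return match(rest, key[1:])
        return False
    return match(binding_key.split("."), routing_key.split("."))

print(topic_matches("#.a", "aa.a"))        # True
print(topic_matches("*.a", "b.a"))         # True
print(topic_matches("*.a", "aa.bb.a"))     # False - '*' is exactly one word
print(topic_matches("#", "kern.critical")) # True - '#' alone acts like fanout
```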

headers: decide which queues receive the message based on the message headers

fanout: broadcast messages

fanout_producer

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="logs",
                         exchange_type="fanout")
channel.basic_publish(exchange="logs",
                      routing_key="",
                      body="hello world!555")
print("[x] Sent hello world")
connection.close()

fanout_consumer

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="logs",
                         exchange_type="fanout")

# No queue name is given, so RabbitMQ assigns a random one; exclusive=True
# deletes the queue automatically once the consumer using it disconnects.
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange="logs", queue=queue_name)

print("[*] Waiting for logs. To exit press CTRL+C")

def callback(ch, method, properties, body):
    print(body)

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
channel.start_consuming()
direct: selective message receiving

RabbitMQ also supports routing by keyword: queues are bound to the exchange with a binding key, the sender publishes each message with a routing key, and the direct exchange uses that key to decide which queues the message should be delivered to.
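The routing decision amounts to an exact-key lookup, as this toy model shows (plain Python, not pika; the class and queue names are illustrative):

```python
from collections import defaultdict

class DirectExchange:
    """Toy model of a direct exchange: a message is copied to every
    queue bound with exactly the message's routing key."""
    def __init__(self):
        self.bindings = defaultdict(list)  # routing_key -> [queue name, ...]
        self.queues = defaultdict(list)    # queue name  -> [message, ...]

    def queue_bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def basic_publish(self, routing_key, body):
        # messages with an unbound key are simply dropped
        for queue in self.bindings.get(routing_key, []):
            self.queues[queue].append(body)

ex = DirectExchange()
ex.queue_bind("q_errors", "error")   # this queue only wants errors
ex.queue_bind("q_all", "error")      # this queue wants both severities
ex.queue_bind("q_all", "info")
ex.basic_publish("error", "disk failed")
ex.basic_publish("info", "started")
print(ex.queues["q_errors"])  # ['disk failed']
print(ex.queues["q_all"])     # ['disk failed', 'started']
```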

direct_producer

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="direct_logs",
                         exchange_type="direct")

severity = sys.argv[1] if len(sys.argv) > 1 else "info"
message = " ".join(sys.argv[2:]) or "Hello World!"
channel.basic_publish(exchange="direct_logs",
                      routing_key=severity,
                      body=message)
print("[x] Sent %r:%r" % (severity, message))
connection.close()

direct_consumer

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="direct_logs",
                         exchange_type="direct")

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
    sys.exit(1)

for severity in severities:
    channel.queue_bind(exchange="direct_logs",
                       queue=queue_name,
                       routing_key=severity)

print("[*] Waiting for logs. To exit press CTRL+C")

def callback(ch, method, properties, body):
    print("[x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
channel.start_consuming()
topic: finer-grained message selection

To receive all the logs, bind with:

"#"

To receive all logs from the facility "kern":

"kern.*"

Or, if you want to hear only about "critical" logs:

"*.critical"

You can create multiple bindings:

"kern.*" and "*.critical"

And to emit a log with routing key "kern.critical":

"A critical kernel error"

topic_producer

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="topic_logs",
                         exchange_type="topic")

severity = sys.argv[1] if len(sys.argv) > 1 else "info"
message = " ".join(sys.argv[2:]) or "Hello World!"
channel.basic_publish(exchange="topic_logs",
                      routing_key=severity,
                      body=message)
print("[x] Sent %r:%r" % (severity, message))
connection.close()

topic_consumer

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="topic_logs",
                         exchange_type="topic")

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]
if not severities:
    sys.stderr.write("Usage: %s [INFO] [WARNING] [ERROR]\n" % sys.argv[0])
    sys.exit(1)

for severity in severities:
    channel.queue_bind(exchange="topic_logs",
                       queue=queue_name,
                       routing_key=severity)

print("[*] Waiting for logs. To exit press CTRL+C")

def callback(ch, method, properties, body):
    print("[x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
channel.start_consuming()

Remote Procedure Call (RPC)

RPC means calling a procedure on a remote machine: given two servers A and B, an application deployed on A needs to invoke a function or method provided by an application on B. Because they do not share an address space, the call cannot be made directly; the call semantics and the call data have to be conveyed over the network.

Why RPC? It addresses needs that cannot be met by a local call within one process, or even within one computer, such as communication between different systems or even different organizations. And because computing power needs to scale out, applications have to be deployed on clusters of multiple machines.

RPC has many protocols: the early CORBA, Java RMI, the RPC style of web services, Hessian, Thrift, and even REST APIs.

RPC processing flow:

    1. When the client starts, it creates an anonymous callback queue.
    2. The client sets two properties on the RPC request: reply_to, the name of the callback queue, and correlation_id, a unique id marking the request.
    3. The request is sent to the rpc_queue queue.
    4. The RPC server listens on the rpc_queue queue. When a request arrives, the server processes it and sends a message carrying the result back to the client; the receiving queue is the callback queue named in reply_to.
    5. The client listens on the callback queue. When a message arrives, it checks the correlation_id property; if it matches the request, the message is the result.
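The steps above can be sketched as an in-process toy model (plain Python queues, not pika; the function names and the doubling "work" are illustrative stand-ins):

```python
import uuid
import queue

# Step 1: the client's anonymous callback queue, plus the shared rpc_queue.
rpc_queue = queue.Queue()
callback_queue = queue.Queue()

def rpc_server_step():
    """Step 4: take one request, do the work, reply to reply_to
    with the same correlation_id."""
    props, body = rpc_queue.get()
    result = int(body) * 2  # stand-in for the real work (e.g. fib)
    props["reply_to"].put((props["correlation_id"], str(result)))

def rpc_call(n):
    corr_id = str(uuid.uuid4())  # step 2: mark the request
    rpc_queue.put(({"reply_to": callback_queue,
                    "correlation_id": corr_id}, str(n)))  # step 3
    rpc_server_step()  # in reality the server runs in another process
    while True:        # step 5: keep only the reply whose id matches
        reply_id, body = callback_queue.get()
        if reply_id == corr_id:
            return int(body)

print(rpc_call(21))  # 42
```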

rpc_server

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="rpc_queue")

def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

def on_request(ch, method, props, body):
    n = int(body)
    print("[.] fib(%s)" % n)
    response = fib(n)
    ch.basic_publish(exchange="",
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(
                         correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_request, queue="rpc_queue")
print("[x] Awaiting RPC requests")
channel.start_consuming()

rpc_client

import pika
import uuid

class FibonacciRpcClient(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host="localhost"))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(self.on_response,
                                   no_ack=True,
                                   queue=self.callback_queue)

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange="",
                                   routing_key="rpc_queue",
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.corr_id,
                                   ),
                                   body=str(n))
        while self.response is None:
            self.connection.process_data_events()
        return int(self.response)

fibonacci_rpc = FibonacciRpcClient()
print("[x] Requesting fib(30)")
response = fibonacci_rpc.call(30)
print("[.] Got %r" % response)

