RabbitMQ in Python

Source: Internet
Author: User
Tags: message queue, string format, rabbitmq

Introduction

RabbitMQ is a complete, reusable enterprise messaging system based on AMQP, released under the Mozilla Public License. MQ stands for Message Queue: a message queue is an application-to-application communication method. Applications communicate by writing messages (application data) to a queue and reading them from it, without needing a dedicated connection between them. Message passing means that programs communicate by sending data in messages, rather than by invoking each other directly with techniques such as remote procedure calls. Queuing means that applications communicate through a queue, which removes the requirement that the sending and receiving applications run at the same time.

RabbitMQ Installation
    • Linux installation (CentOS 7)
Install and configure the EPEL source, Erlang, and RabbitMQ:
# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# yum -y install erlang
# yum -y install rabbitmq-server
# service rabbitmq-server start/stop
Note: you need to turn off the firewall and SELinux:
# systemctl stop firewalld.service
# setenforce 0
    • Ubuntu installation
# apt-get install build-essential
# apt-get install libncurses5-dev
# apt-get install libssl-dev
# apt-get install erlang
Edit /etc/apt/sources.list and add:
deb http://www.rabbitmq.com/debian/ testing main
# wget -O - https://www.rabbitmq.com/rabbitmq-release-signing-key.asc | apt-key add -
# apt-get update
# apt-get install rabbitmq-server
    • Windows installation
First, download Erlang from http://www.erlang.org/download.html and install it. Then set the environment variables for Erlang: add ERL_HOME = (Erlang installation directory), and append %ERL_HOME%\bin to PATH. Next, download rabbitmq-server-3.1.5.exe from http://www.rabbitmq.com/releases/rabbitmq-server/v3.1.5/ and install it. Finally, find the RabbitMQ start entry in the Start menu and run it; rabbitmq-server starts right away.
    • Installing the Python API
pip install pika
or
easy_install pika
Producer-consumer model based on queue.Queue

#!/usr/bin/env python3
# coding: utf-8
import queue
import threading

message = queue.Queue(maxsize=10)  # maxsize=10 is an assumption; the original value was garbled

def producer(i):
    """Chef: produce a bun and put it into the queue."""
    while True:
        message.put(i)
        print("%s put %s into the queue" % (threading.current_thread().name, i))

def consumer(i):
    """Consumer: take a bun from the queue."""
    while True:
        msg = message.get()
        print("%s took %s out of the queue" % (threading.current_thread().name, msg))

if __name__ == '__main__':
    for i in range(12):  # chef threads making buns
        t = threading.Thread(target=producer, args=(i,))
        t.start()
    for i in range(10):  # consumer threads eating buns
        t = threading.Thread(target=consumer, args=(i,))
        t.start()


RabbitMQ usage

With RabbitMQ, producers and consumers no longer target a queue object in memory, but a message queue implemented by a RabbitMQ server. Official documentation: http://www.rabbitmq.com/getstarted.html

The most basic producer-consumer model
    • Producer Code
#!/usr/bin/env python3
import pika

######### producer #########
# Connect to the RabbitMQ server (localhost is the local machine;
# for another server, change this to its IP address)
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
# Create a channel
channel = connection.channel()
# Create a queue named "test"
channel.queue_declare(queue='test')

# channel.basic_publish sends a message to the queue
# exchange    - lets us specify exactly which queue the message should go to
# routing_key - the queue to which the message is sent
# body        - the content to insert, a string
while True:  # loop sending messages to the queue; "quit" exits the program
    inp = input(">>>").strip()
    if inp == 'quit':
        break
    channel.basic_publish(exchange='',
                          routing_key='test',
                          body=inp)
    print("producer sent message %s to the queue" % inp)

# The buffer has been flushed and the message confirmed as delivered to
# RabbitMQ; close the connection
connection.close()

# Output:
# >>>python
# producer sent message python to the queue
# >>>quit
    • Consumer Code
#!/usr/bin/env python3
import pika

######### consumer #########
# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
# Create a channel
channel = connection.channel()

# If the producer has not run and created the queue, the consumer might not
# find it. To avoid this, the consumer also declares the queue; if the queue
# already exists, this declaration has no effect.
channel.queue_declare(queue='test')

# Receiving messages uses this callback function; it is called by the pika
# library, and the data received is of type bytes
def callback(ch, method, properties, body):
    """
    ch:         the channel
    method:     delivery information, including the routing key
    properties: properties set when connecting to RabbitMQ
    body:       content fetched from the queue, of type bytes
    """
    print("[x] Received %r" % body)

# channel.basic_consume fetches data from the queue and runs the callback on
# each message; no_ack=True means the consumer does not actively report
# completion after consuming a message
channel.basic_consume(callback,
                      queue='test',
                      no_ack=True)

print('[*] Waiting for messages. To exit press CTRL+C')
# Loop forever waiting for data; start_consuming blocks and dispatches
# messages to the callback
channel.start_consuming()

# Output: waits for messages in the queue and never terminates unless
# interrupted with CTRL+C
# [*] Waiting for messages. To exit press CTRL+C
# [x] Received b'python'


Note: both the producer and the consumer connect to the RabbitMQ server and declare a queue with the same name; the producer sends messages to the queue, and the consumer fetches them from it. If the producer starts first, its messages go into the queue, and the consumer reads them as soon as it starts. If the consumer starts first, it blocks and waits for the producer to send messages to the queue. Once a message is sent the producer's task ends, while the consumer keeps waiting for new messages.

Acknowledgments: preventing message loss with no_ack=False

If a consumer dies (its channel is closed, its connection is closed, or the TCP connection is lost), RabbitMQ re-queues the task. This is configured on the consumer side.
    • Producer, code ditto, unchanged
    • Consumer Code
import pika
import time

# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
# Create a channel
channel = connection.channel()

# If the producer has not run and created the queue, the consumer creates
# it; if the queue already exists, the declaration is ignored
channel.queue_declare(queue='hello')

# Callback function
def callback(ch, method, properties, body):
    print("[x] Received %r" % body)
    time.sleep(10)  # simulate 10 seconds of processing
    print('ok')
    # When processing is finished, notify RabbitMQ that the message is done
    # so it is not sent again
    ch.basic_ack(delivery_tag=method.delivery_tag)

# no_ack=False means the consumer actively reports completion after
# consuming a message; without the acknowledgment, RabbitMQ puts the message
# back into the queue so it is not lost
channel.basic_consume(callback,
                      queue='hello',
                      no_ack=False)

print('[*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

When the producer sends a message and the consumer that received it is interrupted within the 10 seconds of processing (before the acknowledgment is sent), the message remains in the queue and is redelivered on reconnection. Once the 10 seconds elapse and the acknowledgment is sent, the message is removed from the queue. The consumer then waits for the next message.

Durable messages are not lost (message persistence)

queue_declare needs to be set with durable=True in both the producer and consumer code.

    • Producer Code
#!/usr/bin/env python
import pika

# Connect to the RabbitMQ server
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
# Create a channel
channel = connection.channel()

# Create the queue with the durable flag; if you want the queue to survive
# a broker restart, add durable=True
channel.queue_declare(queue='test', durable=True)

channel.basic_publish(exchange='',
                      routing_key='test',
                      body='Hello World!',
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # mark the message as persistent
                      ))
# The exchange parameter is the name of the exchange. An empty string
# identifies the default (anonymous) exchange: the message is routed to the
# queue named by routing_key, if it exists.
print("[x] Sent to queue")
connection.close()
    • Consumer Code
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
# Create a channel
channel = connection.channel()

# Create the queue with durable=True to enable persistence
channel.queue_declare(queue='test', durable=True)

def callback(ch, method, properties, body):
    print("[x] Received %r" % body)
    time.sleep(10)  # simulate slow processing
    print('ok')
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(callback,
                      queue='test',
                      no_ack=False)

print('[*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

Note: marking a message as persistent does not fully guarantee that it will not be lost. Although it tells RabbitMQ to save the message to disk, there is still a short time window after RabbitMQ has accepted the message and before it has saved it, and RabbitMQ does not fsync(2) every message. The persistence guarantee is therefore not very strong, but it is much better than a simple task queue. If you need very strong guarantees against message loss, use publisher confirms.

Message dispatch order

By default, messages are dispatched to consumers in round-robin order: for example, consumer 1 gets the odd-numbered tasks (1, 3, 5, 7) and consumer 2 gets the even-numbered tasks (2, 4, 6, 8). If consumer 1 works quickly and has finished tasks 1 and 3 while consumer 2 is still processing task 2, consumer 1 will nevertheless take task 5 next rather than task 4, and then task 7 after that. To change this default order so that the next message goes to whichever consumer is free, rather than strictly alternating, set channel.basic_qos(prefetch_count=1) on the consumer.
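The round-robin behavior described above can be sketched with a tiny model (illustrative only; this is plain Python, not the pika API, and `round_robin_dispatch` is a hypothetical name):

```python
# Illustrative model of RabbitMQ's default dispatch (not the pika API):
# messages are assigned to consumers in strict rotation, regardless of
# how busy each consumer currently is.

def round_robin_dispatch(messages, num_consumers):
    """Assign each message to a consumer index in fixed rotation."""
    assignments = {c: [] for c in range(num_consumers)}
    for i, msg in enumerate(messages):
        assignments[i % num_consumers].append(msg)
    return assignments

# Tasks 1..8 with two consumers: consumer 0 always gets 1, 3, 5, 7 and
# consumer 1 gets 2, 4, 6, 8, even if consumer 1 is much slower.
print(round_robin_dispatch(list(range(1, 9)), 2))
# → {0: [1, 3, 5, 7], 1: [2, 4, 6, 8]}
```

With basic_qos(prefetch_count=1), RabbitMQ instead holds back the next message until the consumer has acknowledged its current one, so a fast consumer is not forced to wait its "turn".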

    • Producer Code
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Declare the queue as durable
channel.queue_declare(queue='task_queue', durable=True)

message = ' '.join(sys.argv[1:]) or "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make the message persistent
                      ))
print("[x] Sent %r" % message)
connection.close()
    • Consumer Code
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='task_queue', durable=True)  # durable queue

def callback(ch, method, properties, body):
    print("[x] Received %r" % body)
    time.sleep(1)  # simulate processing
    print('ok')
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Give the next message to whichever consumer is free, instead of
# dispatching in fixed round-robin order
channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback,
                      queue='task_queue',
                      no_ack=False)

print('[*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Publish/subscribe. The available exchange types are: direct, topic, headers, and fanout.
    • fanout: all queues bound to the exchange receive the message
    • direct: only the queue(s) whose binding key exactly matches the routing key receive the message
    • topic: all queues whose binding key pattern (an expression) matches the routing key receive the message
When we send a message, it is not actually placed directly into a queue; it is handed to an exchange, and the exchange puts it into the specified queue(s). Imagine sending a message to several queues: without an exchange we would have to send it separately to each queue, but with an exchange we first establish a binding between the exchange and the target queues, and when we then send a message to the exchange, it forwards the message to every queue bound to it.

The difference between publish/subscribe and a simple message queue is that publish/subscribe delivers a message to every subscriber, whereas a message in a queue disappears once it is consumed. RabbitMQ therefore implements publish/subscribe by creating one queue per subscriber; when the publisher publishes a message, it is placed into all the relevant queues. In essence, the publisher sends the message to an exchange, and the exchange sends it to every queue bound to it.
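That binding relationship can be sketched with a minimal in-memory model (plain Python, not the pika API; the class and method names are made up for illustration):

```python
from collections import deque

class FanoutExchange:
    """Toy fanout exchange: every bound queue gets a copy of each message."""
    def __init__(self):
        self.queues = []

    def bind(self):
        q = deque()              # one private queue per subscriber
        self.queues.append(q)
        return q

    def publish(self, message):
        for q in self.queues:    # deliver a copy to every bound queue
            q.append(message)

exchange = FanoutExchange()
sub1, sub2 = exchange.bind(), exchange.bind()
exchange.publish("hello")
print(sub1.popleft(), sub2.popleft())  # → hello hello
```

Each subscriber drains only its own queue, so consuming a message in one queue does not remove it from the others.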

With exchange type = fanout, every queue bound to the exchange receives the message.

    • Publisher code
#!/usr/bin/env python3
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Create an exchange named "logs" with type=fanout; with an exchange,
# the publisher does not need to create a queue
channel.exchange_declare(exchange='logs',
                         type='fanout')

message = ' '.join(sys.argv[1:]) or "info: Hello World!"
# After specifying the exchange, no queue is needed, and routing_key=''
channel.basic_publish(exchange='logs',
                      routing_key='',
                      body=message)
print("[x] Sent %r" % message)
connection.close()
    • Subscriber Code
#!/usr/bin/env python3
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Declare the exchange
channel.exchange_declare(exchange='logs',
                         type='fanout')

# With no queue name given, a unique, randomly named queue is created;
# exclusive=True makes this temporary queue be deleted automatically when
# the consumer disconnects
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue  # the server-assigned temporary queue name

# Bind the temporary queue to the exchange
channel.queue_bind(exchange='logs', queue=queue_name)

print('[*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):  # callback
    print("[x] %r" % body)

channel.basic_consume(callback, queue=queue_name, no_ack=True)  # receive messages
channel.start_consuming()  # keep listening


Keyword send

Exchange type = direct. In the previous examples, the sender explicitly specified a queue and sent messages to it. RabbitMQ also supports sending by keyword: queues are bound to the exchange with a keyword, the sender sends data to the exchange along with a keyword, and the exchange forwards the data to the queue(s) bound with that keyword.
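The exchange's matching step can be sketched as a small function (plain Python, not the pika API; `direct_route` and the queue names are made up for illustration): a message is delivered only to queues whose binding key exactly equals the routing key.

```python
# Toy model of a direct exchange's routing decision (illustrative only).

def direct_route(bindings, routing_key):
    """bindings maps queue name -> binding key; return the matching queues."""
    return [queue for queue, key in bindings.items() if key == routing_key]

bindings = {"error_queue": "error", "info_queue": "info"}
print(direct_route(bindings, "error"))  # → ['error_queue']
print(direct_route(bindings, "debug"))  # → []
```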
    • Producer Code
#!/usr/bin/env python3
# coding: utf-8
import pika
import sys

###################### producer ######################
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',
                         type='direct')

severity = sys.argv[1] if len(sys.argv) > 1 else 'info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='direct_logs',
                      routing_key=severity,
                      body=message)
print("[x] Sent %r:%r" % (severity, message))
connection.close()
    • Consumer Code
#!/usr/bin/env python3
# coding: utf-8
import pika
import sys

###################### consumer ######################
# Connect
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',
                         type='direct')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

# severities is a list holding the keywords, taken from sys.argv
severities = sys.argv[1:]
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
    sys.exit(1)

# Bind each keyword to the exchange
for severity in severities:
    channel.queue_bind(exchange='direct_logs',
                       queue=queue_name,
                       routing_key=severity)

print('[*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print("[x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()

Fuzzy matching

Exchange type = topic. With a topic exchange, a queue can be bound with one or more fuzzy (pattern) keywords. The sender sends data to the exchange, the exchange matches the incoming routing key against the binding key patterns, and on a successful match the data is sent to the corresponding queue(s).
    1. # indicates that 0 or more words can be matched
    2. * matches exactly one word
Sender routing key    Queue binding key    Result
old.boy.python        old.*                no match
old.boy.python        old.#                match
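The matching rules in the table above can be checked with a small helper (illustrative only; `topic_matches` is a made-up name, not part of pika) that translates a binding key into a regular expression:

```python
import re

def topic_matches(binding_key, routing_key):
    """Translate an AMQP-style topic binding key into a regex and test it."""
    pattern = re.escape(binding_key)
    pattern = pattern.replace(r'\*', r'[^.]+')              # '*' = exactly one word
    pattern = pattern.replace(r'\.\#', r'(?:\.[^.]+)*')     # '.#' = zero or more words
    pattern = pattern.replace(r'\#', r'[^.]+(?:\.[^.]+)*')  # leading '#'
    return re.fullmatch(pattern, routing_key) is not None

print(topic_matches('old.*', 'old.boy.python'))  # → False ('*' is one word only)
print(topic_matches('old.#', 'old.boy.python'))  # → True  ('#' spans many words)
```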
    • Consumer Code
#!/usr/bin/env python3
# coding: utf-8
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs',
                         type='topic')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)

for binding_key in binding_keys:
    channel.queue_bind(exchange='topic_logs',
                       queue=queue_name,
                       routing_key=binding_key)

print('[*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print("[x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
channel.start_consuming()
    • Producer Code
#!/usr/bin/env python3
# coding: utf-8
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs',
                         type='topic')

routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='topic_logs',
                      routing_key=routing_key,
                      body=message)
print("[x] Sent %r:%r" % (routing_key, message))
connection.close()
