Python + Pika + RabbitMQ environment deployment and implementation of work queues

Source: Internet
Author: User
Tags: rabbitmq
RabbitMQ is a message queue server. In this article we will learn how to deploy a Python + Pika + RabbitMQ environment and how to implement a work queue; readers who need this can use it as a reference. The key part of the name is the letters MQ, which stand for Message Queue. The word rabbit in front simply means rabbit, much as Python is named after the snake, so the naming has a touch of humor. The rabbitmq service is similar to the mysql or apache services, but it provides a different function: RabbitMQ is a service for delivering messages, and it can be used for communication between different applications.

Install rabbitmq
First install rabbitmq. On Ubuntu 12.04 it can be installed directly through apt-get:

sudo apt-get install rabbitmq-server

After the installation, the rabbitmq service starts automatically. Next, let's write a Hello World in Python. The example sends the string "Hello World!" from send.py to rabbitmq, and receive.py then fetches the message that send.py sent.

P stands for producer, also called the sender, represented in the example by send.py; C stands for consumer, also called the receiver, represented by receive.py; the queue in the middle is, in this example, a queue named hello.
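In other words, the message flows like this:

P (send.py)  -->  [hello queue]  -->  C (receive.py)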

Python can use the rabbitmq service through ready-made client libraries such as pika, txAMQP, or py-amqplib; pika is used here.

Install pika

Pika can be installed with pip, Python's package manager. If pip itself is not installed, install it with apt-get:

sudo apt-get install python-pip

Install pika using pip:

sudo pip install pika

send.py code

First, connect to the rabbitmq server. Because the test runs locally, localhost can be used:

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

Next, declare the message queue that the messages will travel through. If a message is sent to a queue that does not exist, rabbitmq simply discards it:

channel.queue_declare(queue='hello')

Send a message to the hello queue just declared. The exchange parameter names the exchange, which determines exactly which queue the message is routed to; routing_key is set to the queue name, and body is the content to send. The lower-level delivery details can be ignored for now:

channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')

Finally, close the connection:

connection.close()

Complete Code

#!/usr/bin/env python
#coding=utf8
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
print " [x] Sent 'Hello World!'"
connection.close()

Run this program first. If it executes successfully, a hello queue should now exist in rabbitmq and there should be one message in it. Use the rabbitmqctl command to check:

rabbitmqctl list_queues

On the author's computer the command printed output along these lines (the exact wording depends on the rabbitmq version):
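Listing queues ...
hello   1
...done.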

There is indeed a hello queue, and it contains one message. Next, use receive.py to fetch the message from the queue.

receive.py code

As with the first two steps of send.py, you must first connect to the server and then declare the message queue; that code is identical and is not repeated here.

Receiving messages is a little more involved: you need to define a callback function to process them. The callback here simply prints the message:

def callback(ch, method, properties, body):
    print "Received %r" % (body,)

Tell rabbitmq to deliver messages to this callback:

channel.basic_consume(callback, queue='hello', no_ack=True)

Start consuming. The program blocks, and whenever a message arrives in the queue the callback is invoked to process it. Press ctrl + c to exit.

channel.start_consuming()

Complete Code

#!/usr/bin/env python
#coding=utf8
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)

channel.basic_consume(callback, queue='hello', no_ack=True)

print ' [*] Waiting for messages. To exit press CTRL+C'
channel.start_consuming()

Run receive.py: it fetches the 'Hello World!' message from the hello queue and prints it on the screen. Switch to another terminal and run send.py again, and receive.py will pick up the new message as well.
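Going by the print statements in the two scripts, a run across two terminals should look roughly like this:

Terminal 1:
$ python receive.py
 [*] Waiting for messages. To exit press CTRL+C
 [x] Received 'Hello World!'

Terminal 2:
$ python send.py
 [x] Sent 'Hello World!'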

Work queue example

1. Preparations

In the example programs, new_task.py simulates the task dispatcher and worker.py simulates the workers.

Modify send.py so that it reads the message from the command-line arguments and sends it:

import sys

message = ' '.join(sys.argv[1:]) or "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body=message)
print " [x] Sent %r" % (message,)

Modify the callback function of receive.py so that every '.' in the message body counts as one second of simulated work:

import time

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
    time.sleep(body.count('.'))
    print " [x] Done"

Open two terminals and run worker.py in each; both workers are now listening. Open a third terminal and run new_task.py several times:

$ python new_task.py First message.
$ python new_task.py Second message..
$ python new_task.py Third message...
$ python new_task.py Fourth message....
$ python new_task.py Fifth message.....

Observe how worker.py receives the tasks. One worker receives the following three tasks:

$ python worker.py
 [*] Waiting for messages. To exit press CTRL+C
 [x] Received 'First message.'
 [x] Received 'Third message...'
 [x] Received 'Fifth message.....'

Another worker receives two tasks:

$ python worker.py
 [*] Waiting for messages. To exit press CTRL+C
 [x] Received 'Second message..'
 [x] Received 'Fourth message....'

As shown above, the workers are assigned tasks in turn. But if a worker fails while processing a task, that task is simply lost instead of being handed to another worker. There should therefore be a mechanism by which a worker reports back once it has completed a task.

2. Message acknowledgment

Message acknowledgment means that when a worker completes a task, it sends an acknowledgment back to rabbitmq. Modify the callback function in worker.py:

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
    time.sleep(5)
    print " [x] Done"
    ch.basic_ack(delivery_tag=method.delivery_tag)

The sleep is set to 5 seconds so that there is enough time to press ctrl + c and kill a worker while it is still processing a task.

Also remove the no_ack=True argument from basic_consume, or set it to False, so that messages stay in the queue until they are acknowledged:

channel.basic_consume(callback, queue='hello', no_ack=False)

Run this code: even if one of the workers is killed with ctrl + c while processing a task, the task in progress is not lost; rabbitmq re-delivers it to another worker.
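To make this concrete, here is a hypothetical session with two workers and a made-up task message; the first worker is killed before it sends the acknowledgment, and rabbitmq re-delivers the task to the second worker:

Terminal 1 (killed while the task is still running):
$ python worker.py
 [*] Waiting for messages. To exit press CTRL+C
 [x] Received 'A hard task..'
^C

Terminal 2 (receives the re-delivered task and acknowledges it):
$ python worker.py
 [*] Waiting for messages. To exit press CTRL+C
 [x] Received 'A hard task..'
 [x] Done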

3. Message durability

Even with the acknowledgment mechanism, tasks are still lost if rabbitmq itself goes down, so the tasks also need to be stored persistently. Declare the queue as durable:

channel.queue_declare(queue='hello', durable=True)

However, this line will raise an error, because the hello queue already exists as a non-durable queue, and rabbitmq does not allow an existing queue to be redeclared with different parameters. Declare a new queue instead:

channel.queue_declare(queue='task_queue', durable=True)

When sending a task, use delivery_mode = 2 to mark the message itself as persistent:

channel.basic_publish(exchange='',
                      routing_key="task_queue",
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))

4. Fair scheduling (Fair dispatch)

In the example above, each worker is assigned tasks in turn, but the tasks are not necessarily equal: some are heavy and take a long time to execute, while others are light and finish quickly. To dispatch fairly, use basic_qos to set prefetch_count=1, so that rabbitmq never gives a worker more than one task at a time; a worker only receives the next task after it has finished the previous one.

channel.basic_qos(prefetch_count=1)

new_task.py complete code

#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='task_queue', durable=True)

message = ' '.join(sys.argv[1:]) or "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print " [x] Sent %r" % (message,)
connection.close()

worker.py complete code

#!/usr/bin/env python
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='task_queue', durable=True)
print ' [*] Waiting for messages. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
    time.sleep(body.count('.'))
    print " [x] Done"
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='task_queue')

channel.start_consuming()

