Implementing a Thread-Based Message Queue (MQ) in Python, and an Analysis of the Advantages of Message Queues

Source: Internet
Author: User


A message queue is a container that holds messages while they are in transit. The message queue manager acts as a middleman while a message travels from its source to its target. The main purpose of a queue is to provide routing and to guarantee delivery: if the receiver is unavailable when a message is sent, the queue retains the message until it can be delivered successfully. A message queue is a crucial component in many architectures and applications. The ten reasons below explain why.

Example of a message queue in Python:

1. Implementing a thread queue with threading + Queue

```python
#!/usr/bin/env python3
import queue
import threading
import time

q = queue.Queue()

class ThreadNum(threading.Thread):
    """How long does it take to print 10 numbers with 10 concurrent threads?"""
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            # The consumer takes a number from the queue
            num = self.queue.get()
            print("i'm num %s" % num)
            time.sleep(1)
            # Signal the queue that this task is complete
            self.queue.task_done()

start = time.time()

def main():
    # Spawn a pool of threads and pass them the queue instance;
    # here 10 threads run concurrently
    for i in range(10):
        t = ThreadNum(q)
        t.daemon = True
        t.start()
    # Populate the queue with data
    for num in range(10):
        q.put(num)
    # Wait on the queue until everything has been processed
    q.join()

main()
print("Elapsed Time: %s" % (time.time() - start))
```

Running result:

```
i'm num 0
i'm num 1
i'm num 2
i'm num 3
i'm num 4
i'm num 5
i'm num 6
i'm num 7
i'm num 8
i'm num 9
Elapsed Time: 1.01399993896
```

Explanation:
The procedure works as follows:
1. Create a Queue instance and fill it with data.
2. Pass the instance to a worker class created by inheriting from threading.Thread.
3. Spawn a pool of daemon threads.
4. Each thread pulls an item from the queue and does the corresponding work on that data inside its run method.
5. When a task is finished, call queue.task_done() to signal the queue that the item is complete.
6. Calling join() on the queue blocks the main program until the queue has been fully processed, after which the main program exits.
When using this pattern, note that marking the threads as daemons lets the program exit automatically once the main thread finishes; the join() on the queue is what makes the main thread wait until the queue is empty before exiting.
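The accounting behind steps 5 and 6 can be shown in a minimal sketch (the worker function and the results list are illustrative names, not part of the example above): join() only returns once task_done() has been called for every item that was put().

```python
import queue
import threading

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()
        results.append(item * 2)   # do the work
        q.task_done()              # exactly one task_done() per get()

threading.Thread(target=worker, daemon=True).start()

for n in range(5):
    q.put(n)

q.join()  # returns only once task_done() was called for every put()
print(sorted(results))  # -> [0, 2, 4, 6, 8]
```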


2. Multiple queues
With multiple queues, the output of one queue can serve as the input of another.

```python
#!/usr/bin/env python3
import queue
import threading
import time

in_queue = queue.Queue()
out_queue = queue.Queue()

class ThreadNum(threading.Thread):
    """Moves numbers from the first queue to the second."""
    def __init__(self, queue, out_queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.out_queue = out_queue

    def run(self):
        while True:
            # Take a message from the first queue
            num = self.queue.get()
            bkeep = num
            # Put bkeep into the second queue
            self.out_queue.put(bkeep)
            # Signal the first queue that this task is complete
            self.queue.task_done()

class PrintLove(threading.Thread):
    """Prints the messages taken from the second queue."""
    def __init__(self, out_queue):
        threading.Thread.__init__(self)
        self.out_queue = out_queue

    def run(self):
        while True:
            # Take a message from the second queue
            bkeep = self.out_queue.get()
            keke = "I love " + str(bkeep)
            print(keke, self.name)
            time.sleep(1)
            # Signal the second queue that this task is complete
            self.out_queue.task_done()

start = time.time()

def main():
    # Populate the first queue with data
    for num in range(10):
        in_queue.put(num)
    # Spawn a pool of threads and pass them the queue instances
    for i in range(5):
        t = ThreadNum(in_queue, out_queue)
        t.daemon = True
        t.start()
    for i in range(5):
        pl = PrintLove(out_queue)
        pl.daemon = True
        pl.start()
    # Wait on both queues until everything has been processed
    in_queue.join()
    out_queue.join()

main()
print("Elapsed Time: %s" % (time.time() - start))
```

Running result:

```
I love 0 Thread-6
I love 1 Thread-7
I love 2 Thread-8
I love 3 Thread-9
I love 4 Thread-10
I love 5 Thread-7
I love 6 Thread-6
I love 7 Thread-9
I love 8 Thread-8
I love 9 Thread-10
Elapsed Time: 2.00300002098
```

 
Explanation:
ThreadNum workflow:
define queue ---> inherit threading.Thread ---> initialize with the queue ---> define the run function ---> get data from the queue ---> process the data ---> put the result into the other queue ---> signal the queue that the item has been processed
 
Main function workflow:
---> Put data into the first queue
---> A for loop determines how many threads to start ---> instantiate the ThreadNum class ---> set the daemon flag and start each thread
---> A for loop determines how many threads to start ---> instantiate the PrintLove class ---> set the daemon flag and start each thread
---> join() returns once every message in both queues has been processed, and the main program exits.

Now that we have seen how an MQ can be implemented, let's summarize the advantages of message queues:
1. Decoupling

It is extremely difficult to predict a project's future needs at the outset. A message queue inserts an implicit, data-based interface layer between two processing stages, and both stages only need to implement that interface. This lets you independently extend or modify either side, as long as both continue to honor the same interface contract.
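As an illustration of this implicit interface, here is a minimal sketch (producer, consumer, and handle are hypothetical names): the producer depends only on put() and the consumer only on get(), so either side can be replaced independently.

```python
import queue

def producer(q):
    # The producer knows only the queue's put() interface
    for order_id in range(3):
        q.put({"order_id": order_id})

def consumer(q, handle):
    # The consumer knows only get(); the handler can be swapped freely
    processed = []
    while not q.empty():
        processed.append(handle(q.get()))
    return processed

q = queue.Queue()
producer(q)
out = consumer(q, lambda msg: "emailed order %d" % msg["order_id"])
print(out)  # -> ['emailed order 0', 'emailed order 1', 'emailed order 2']
```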

2. Redundancy

Sometimes a processing step fails, and unless the data has been persisted, it is lost forever. Message queues persist data until it has been fully processed, which avoids the risk of loss. In the "insert-get-delete" paradigm adopted by many message queues, a message must be explicitly marked as processed before it is removed from the queue, ensuring your data is kept safely until you are done with it.
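The "insert-get-delete" paradigm can be sketched with a toy class (AckQueue is a made-up name, not a real library): a message handed to a consumer is held in flight until it is explicitly acknowledged, so a crash before the acknowledgment never loses data.

```python
import collections

class AckQueue:
    """Toy sketch of the insert-get-delete paradigm: a message handed to a
    consumer is only removed for good after an explicit acknowledgment."""
    def __init__(self):
        self._ready = collections.deque()
        self._in_flight = {}
        self._next_tag = 0

    def put(self, msg):
        self._ready.append(msg)

    def get(self):
        tag, msg = self._next_tag, self._ready.popleft()
        self._next_tag += 1
        self._in_flight[tag] = msg   # held, not lost, until acknowledged
        return tag, msg

    def ack(self, tag):
        del self._in_flight[tag]     # now the message is truly deleted

    def requeue(self, tag):
        self._ready.append(self._in_flight.pop(tag))

q = AckQueue()
q.put("payment#1")
tag, msg = q.get()
q.requeue(tag)           # consumer crashed before acking: message survives
tag2, msg2 = q.get()
print(msg2)  # -> payment#1
```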

3. Scalability

Because the message queue decouples your processing pipeline, it is easy to increase the rate at which messages are enqueued and processed: just add more consumer processes. There is no need to change code or tune parameters. Scaling out is as simple as turning up a dial.
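A minimal sketch of "scaling = adding consumers" (run_pool is a hypothetical helper): the worker code is untouched; the only thing that changes is how many threads are started.

```python
import queue
import threading

def run_pool(num_workers, jobs):
    q = queue.Queue()
    done = []
    lock = threading.Lock()

    def worker():
        while True:
            item = q.get()
            with lock:
                done.append(item)
            q.task_done()

    # "Scaling out" is just this number; the worker code never changes
    for _ in range(num_workers):
        threading.Thread(target=worker, daemon=True).start()
    for job in jobs:
        q.put(job)
    q.join()
    return len(done)

a = run_pool(2, range(10))
b = run_pool(8, range(10))
print(a, b)  # -> 10 10
```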

4. Flexibility & peak processing capability

When your application hits the Hacker News front page, traffic can climb to unusual levels. Your application must keep working during such a surge, but sudden bursts like this are rare, and it would be wasteful to keep standby capacity sized for peak load. Using a message queue lets key components withstand growing pressure instead of collapsing under requests that exceed their capacity.
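One way to sketch this shock-absorber role (the sizes here are arbitrary): a bounded queue.Queue lets a component accept what it can handle and shed the rest of a sudden burst instead of crashing.

```python
import queue

# A bounded queue as a shock absorber: the component drains the queue at
# its own pace, and excess burst traffic is rejected instead of taking
# the whole component down. The capacity of 100 is arbitrary.
work_q = queue.Queue(maxsize=100)

accepted = rejected = 0
for request in range(250):      # a sudden traffic spike of 250 requests
    try:
        work_q.put_nowait(request)
        accepted += 1
    except queue.Full:
        rejected += 1           # shed load gracefully

print(accepted, rejected)  # -> 100 150
```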

5. Recoverability

When some components of the system fail, the whole system is not brought down. Message queues reduce the coupling between processes, so even if a process that handles messages crashes, messages added to the queue can still be processed once the system recovers. This ability to retry or defer a request is often the difference between a slightly inconvenienced user and a frustrated one.
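A sketch of retry-after-failure (flaky_worker is a contrived stand-in for a crashing component): because a failed message is put back on the queue rather than discarded, it is processed successfully once the component recovers.

```python
import queue

q = queue.Queue()
q.put("resize-image-42")

crashed_once = [False]

def flaky_worker(job):
    # Contrived component that fails on its first attempt, then recovers
    if not crashed_once[0]:
        crashed_once[0] = True
        raise RuntimeError("component failed")
    return "done:" + job

processed = []
while not q.empty():
    job = q.get()
    try:
        processed.append(flaky_worker(job))
    except RuntimeError:
        q.put(job)   # the failure does not lose the message; it is retried

print(processed)  # -> ['done:resize-image-42']
```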

6. Delivery guarantee

The redundancy provided by a message queue guarantees that a message will actually be processed, as long as some process reads the queue. On top of this, IronMQ offers an "only delivered once" guarantee: no matter how many processes pull from the queue, each message is processed only once. This is possible because getting a message "reserves" it, temporarily removing it from the queue. Unless the client explicitly confirms that it has processed the message, the message is put back on the queue after a configurable period and can be processed again.
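The reservation-with-timeout behavior described here can be sketched with a toy class (ReservingQueue is invented for illustration and is not IronMQ's actual API):

```python
import collections
import time

class ReservingQueue:
    """Toy sketch (not IronMQ's real API) of reservation with a timeout:
    a reserved message is hidden from other consumers; if it is not
    deleted before its deadline, it becomes visible again."""
    def __init__(self, timeout):
        self.timeout = timeout
        self._ready = collections.deque()
        self._reserved = {}
        self._tag = 0

    def put(self, msg):
        self._ready.append(msg)

    def reserve(self):
        now = time.monotonic()
        for tag in list(self._reserved):
            msg, deadline = self._reserved[tag]
            if now >= deadline:              # reservation expired:
                self._ready.append(msg)      # the message is visible again
                del self._reserved[tag]
        msg = self._ready.popleft()
        self._tag += 1
        self._reserved[self._tag] = (msg, now + self.timeout)
        return self._tag, msg

    def delete(self, tag):
        del self._reserved[tag]              # the explicit "processed" signal

q = ReservingQueue(timeout=0.05)
q.put("charge-card")
q.reserve()              # reserved, but the consumer never calls delete()
time.sleep(0.1)          # ...and the reservation times out
tag, again = q.reserve()
print(again)  # -> charge-card
```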

7. Sorting guarantee

In many cases, the order of data processing matters. Message queues are inherently ordered, so data can be processed in a well-defined sequence. IronMQ guarantees that messages are processed in FIFO (first-in, first-out) order, so the position at which a message sits in the queue is the position at which it will be retrieved.
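FIFO ordering is easy to observe with Python's own queue.Queue (the step names here are arbitrary):

```python
import queue

q = queue.Queue()
for step in ["create order", "charge card", "ship goods"]:
    q.put(step)

# Messages come back in exactly the order they were inserted
order = [q.get() for _ in range(3)]
print(order)  # -> ['create order', 'charge card', 'ship goods']
```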

8. Buffer

In any nontrivial system there are stages with different processing times: loading an image, for example, takes less time than applying a filter to it. A message queue acts as a buffer layer that lets each task run at its most efficient pace: writes to the queue proceed as fast as possible, unconstrained by how quickly readers drain it. This buffer helps control and optimize the rate at which data flows through the system.
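A small sketch of the buffering effect (slow_filter and the timings are contrived): the producer finishes enqueuing almost immediately, while the slow consumer drains the backlog at its own pace.

```python
import queue
import threading
import time

q = queue.Queue()

def slow_filter():
    while True:
        q.get()
        time.sleep(0.01)   # e.g. applying a filter is slow
        q.task_done()

threading.Thread(target=slow_filter, daemon=True).start()

t0 = time.monotonic()
for i in range(50):
    q.put(i)               # e.g. loading images is fast
enqueue_time = time.monotonic() - t0

backlog = q.qsize()        # the buffer absorbs the speed difference
q.join()
print(enqueue_time < 0.05, backlog > 0)  # -> True True
```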

9. Understanding data streams

In a distributed system, getting an overall picture of how long a user operation takes, and why, is a huge challenge. Message queues make it easy to spot underperforming stages: the rate at which messages are processed reveals where the data flow is too slow or not yet optimized.
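One simple way to gain this visibility (the timestamp scheme here is an illustration, not a standard): stamp each message when it is enqueued, and have the consumer measure how long it waited in the queue.

```python
import queue
import time

q = queue.Queue()

# Stamp each message on enqueue so consumers can report queue latency
for i in range(3):
    q.put((i, time.monotonic()))

time.sleep(0.02)   # simulate messages sitting in the queue for a while

waits = []
while not q.empty():
    item, enqueued_at = q.get()
    waits.append(time.monotonic() - enqueued_at)

print(all(w >= 0.02 for w in waits))  # -> True
```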

10. Asynchronous Communication

Often you do not need to process a message immediately. Message queues provide an asynchronous processing mechanism: you can put a message on the queue without processing it right away, enqueue as many messages as you like, and process them whenever you choose.
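A minimal sketch of "enqueue now, process later" (send_welcome_email is a hypothetical task): the producer returns immediately, and the messages simply wait until someone decides to process them.

```python
import queue

inbox = queue.Queue()

def send_welcome_email(user):
    # Hypothetical slow task; here it just returns a string
    return "welcome sent to %s" % user

# Producers return immediately; nothing is processed yet
for user in ["ann", "bob"]:
    inbox.put(user)

# ...much later, process the backlog whenever it suits us
sent = [send_welcome_email(inbox.get()) for _ in range(2)]
print(sent)  # -> ['welcome sent to ann', 'welcome sent to bob']
```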
