RabbitMQ Website Tutorial---Publish/Subscribe

Source: Internet
Author: User
Tags: rabbitmq, amq

(using the Python client Pika 0.9.8)

In the previous tutorial we created a work queue. The assumption behind a work queue is that each task is delivered to exactly one worker. In this part we'll do something completely different: we'll deliver a message to multiple consumers. This pattern is known as "Publish/Subscribe".


To illustrate this pattern, we're going to build a simple logging system. It will consist of two programs: the first will emit log messages and the second will receive and print them.


In our logging system every running copy of the receiver program will get the messages. That way we'll be able to run one receiver and direct the logs to disk, and at the same time run another receiver and see the logs on the screen.


Essentially, the messages that are published are broadcast to all receivers.


Exchange

In the previous parts of this tutorial we sent and received messages to and from a queue. Now it's time to introduce the full messaging model in Rabbit.


Let's quickly go over what we covered in the previous tutorials:

• A producer is a user application that sends messages.

• A queue is a buffer that stores messages.

• A consumer is a user application that receives messages.


The core idea in the messaging model in RabbitMQ is that the producer never sends any messages directly to a queue. In fact, the producer often doesn't even know whether a message will be delivered to any queue at all.


Instead, the producer can only send messages to an exchange. An exchange is a very simple thing. On one side it receives messages from producers, and on the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it be discarded? The rules for that are defined by the exchange type.

There are a few exchange types available: direct, topic, headers and fanout. We'll focus on the last one, fanout. Let's create an exchange of this type and call it logs:

channel.exchange_declare(exchange='logs', type='fanout')

The fanout exchange is very simple. As you can probably guess from the name, it just broadcasts every message it receives to all the queues it knows about. And that's exactly what we need for our logger.
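
To see the broadcast behaviour concretely, here is a minimal sketch (an addition, not part of the original tutorial) that binds two throwaway queues to the logs exchange and publishes a single message; the broker puts a copy of that message into both queues. It assumes a RabbitMQ server on localhost and the same Pika 0.9.x API used throughout this tutorial, and it borrows the temporary queues and bindings that the next sections explain.

#!/usr/bin/env python
import pika

# Assumption: a RabbitMQ broker is running on localhost.
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Declare the fanout exchange used in this tutorial.
channel.exchange_declare(exchange='logs', type='fanout')

# Two server-named, exclusive queues bound to the same exchange.
queue_a = channel.queue_declare(exclusive=True).method.queue
queue_b = channel.queue_declare(exclusive=True).method.queue
channel.queue_bind(exchange='logs', queue=queue_a)
channel.queue_bind(exchange='logs', queue=queue_b)

# A single publish: the fanout exchange copies the message into both queues.
channel.basic_publish(exchange='logs', routing_key='', body='broadcast test')

connection.close()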

Listing exchanges

To list the exchanges on the server you can run rabbitmqctl:

$ sudo rabbitmqctl list_exchanges
Listing exchanges ...
logs      fanout
amq.direct      direct
amq.topic       topic
amq.fanout      fanout
amq.headers     headers
...done.

In this list there are some amq.* exchanges and the default (unnamed) exchange. These are created by default, but it is unlikely you'll need to use them at the moment.

The nameless exchange

In the previous parts of the tutorial we knew nothing about exchanges, but were still able to send messages to queues. That was possible because we were using a default exchange, which is identified by the empty string ("").

Recall how we published a message before:

channel.basic_publish(exchange='', routing_key='hello', body=message)

The exchange parameter is the name of the exchange. The empty string denotes the default or nameless exchange: messages are routed to the queue with the name specified by routing_key, if it exists.
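
To make that routing rule concrete, here is a small sketch (an addition, not from the original text) that publishes through the default exchange. Because the routing_key is 'hello', the message lands in the queue named hello, which the sketch declares first so it is guaranteed to exist; a local RabbitMQ server is assumed.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Make sure the target queue exists; the default exchange routes by queue name.
channel.queue_declare(queue='hello')

# Empty exchange name = default exchange; routing_key selects the queue.
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')

connection.close()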


Now, we can publish to our named exchange instead:

channel.basic_publish(exchange='logs', routing_key='', body=message)


Temporary Queue

As you may remember, previously we were using queues that had specific names (remember hello and task_queue?). Being able to name a queue was crucial for us: we needed to point the workers to the same queue. Giving a queue a name is important when you want to share that queue between producers and consumers.


But that's not the case for our logger. We want to hear about all log messages, not just a subset of them. We're also only interested in the messages currently flowing, not in old ones. To solve that we need two things.


Firstly, whenever we connect to Rabbit we need a fresh, empty queue. To do this we could create a queue with a random name, or, even better, let the server choose a random queue name for us. We can do this by not supplying the queue parameter to queue_declare:

result = channel.queue_declare()


At this point result.method.queue contains a random queue name. For example it may look like amq.gen-JzTY20BRgKO-HjmUJj0wLg.


Secondly, once the consumer connection is closed, the queue should be deleted. There's an exclusive flag for that:

result = channel.queue_declare(exclusive=True)


Binding

We've already created a fanout exchange and a queue. Now we need to tell the exchange to send messages to our queue. The relationship between an exchange and a queue is called a binding.

channel.queue_bind(exchange='logs', queue=result.method.queue)

From now on the logs exchange will append messages to our queue.

Listing bindings

You can list existing bindings using, you guessed it, rabbitmqctl list_bindings.
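
As a rough illustration (the generated queue name and exact columns will differ on your machine and RabbitMQ version), once a receiver has bound its temporary queue to the logs exchange the output looks something like this:

$ sudo rabbitmqctl list_bindings
Listing bindings ...
logs    exchange    amq.gen-JzTY20BRgKO-HjmUJj0wLg    queue    []
...done.

You are looking for a line that pairs the logs exchange with the server-generated queue name.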


Putting it all together

The producer program, which emits log messages, doesn't look much different from the previous tutorials. The most important change is that we now want to publish messages to our logs exchange instead of the nameless one. We need to supply a routing_key when sending, but its value is ignored for fanout exchanges. Here is the code for the emit_log.py script:

#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs',
                         type='fanout')

message = ' '.join(sys.argv[1:]) or "info: Hello World!"
channel.basic_publish(exchange='logs',
                      routing_key='',
                      body=message)
print " [x] Sent %r" % (message,)
connection.close()


As you can see, after establishing the connection we declare the exchange. This step is necessary, as publishing to a non-existent exchange is forbidden.


If no queue is bound to the exchange yet, the messages will simply be lost, but that's fine for us; if no consumer is listening yet we can safely discard the message.


The code for receive_logs.py:

#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs',
                         type='fanout')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

channel.queue_bind(exchange='logs',
                   queue=queue_name)

print ' [*] Waiting for logs. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] %r" % (body,)

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()
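
A typical way to try this out (an assumed usage pattern, with each command in its own terminal and both scripts saved in the current directory) is to start one receiver that saves logs to a file, another that prints them to the screen, and then emit a few messages:

$ python receive_logs.py > logs_from_rabbit.log
$ python receive_logs.py
$ python emit_log.py info: Hello World!

Every running receiver should get a copy of each emitted message.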

