1, producer
new_task.py
import pika

if __name__ == '__main__':
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='Kadima')
    message = 'you are awesome!'
    for i in range(0, 100):  # loop 100 times, sending one message per iteration
        channel.basic_publish(exchange='', routing_key='Kadima', body=message + ' ' + str(i))
        print 'Sending', message
2, multiple consumers
Consumer 1: work.py
# -*- coding: utf-8 -*-
import time
import pika
import sys

__author__ = 'Yue'

var = 0

def callback(ch, method, properties, body):
    # temp = var + 1
    # Interestingly, this cannot be written as var += 1 or var = var + 1;
    # to see why, look up "Python global variables vs. local variables"
    # global var
    # var += 1
    # if var == 20:
    #     sys.exit()
    print "1 received %r" % (body,)
    time.sleep(body.count('.'))
    print "Done"

if __name__ == '__main__':
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='Kadima')
    channel.basic_consume(callback, queue='Kadima', no_ack=True)
    print '[1] waiting for messages'
    channel.start_consuming()
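The commented-out lines touch a classic Python gotcha that the comment alludes to: an assignment anywhere inside a function makes that name local to the whole function, so `var += 1` without a `global` declaration fails even though `var` exists at module level. A minimal demonstration (standalone, not part of the worker code):

```python
counter = 0

def bump_wrong():
    # The augmented assignment makes `counter` local to this function,
    # so it is read before it is ever bound -> UnboundLocalError at call time.
    counter += 1

def bump_right():
    global counter  # explicitly bind the name to the module-level variable
    counter += 1

try:
    bump_wrong()
    error = None
except UnboundLocalError as exc:
    error = exc

bump_right()
print(counter)  # 1
print(error)    # the UnboundLocalError raised by bump_wrong
```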
Consumer 2: work2.py
import time
import pika

__author__ = 'Yue'

def callback(ch, method, properties, body):
    print "2 received %r" % (body,)
    time.sleep(body.count('.'))
    print "Done"

if __name__ == '__main__':
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='Kadima')
    channel.basic_consume(callback, queue='Kadima', no_ack=True)
    print '[2] waiting for messages'
    channel.start_consuming()
3, run work.py, work2.py, then new_task.py
I start work first, then work2. As the output shows, RabbitMQ distributes tasks to the workers round-robin, in the order they registered:
given task1, task2, task3, task4, it hands task1 and task3 to work, and task2 and task4 to work2.
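This round-robin dispatch can be modeled without a broker; the sketch below is plain Python (not pika) that simply deals tasks to consumers in registration order:

```python
from collections import defaultdict
from itertools import cycle

def round_robin_dispatch(tasks, workers):
    """Model RabbitMQ's default behavior: each successive task goes
    to the next consumer in registration order, wrapping around."""
    assignment = defaultdict(list)
    turn = cycle(workers)
    for task in tasks:
        assignment[next(turn)].append(task)
    return dict(assignment)

assignment = round_robin_dispatch(['task1', 'task2', 'task3', 'task4'],
                                  ['work', 'work2'])
print(assignment)
# {'work': ['task1', 'task3'], 'work2': ['task2', 'task4']}
```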
Now, something interesting happens:
When the commented-out lines in work.py's callback are enabled (so that work processes 19 tasks and then exits), MQ does not hand the tasks that were meant for work over to work2. My tentative guess is that work exited without telling MQ it was quitting (it died abnormally), so MQ keeps dispatching tasks to it.
4, what about the tasks that are not done?
Message acknowledgment: acks are turned on by default (the examples above explicitly turned them off with no_ack=True).
Modify work.py as follows:
# -*- coding: utf-8 -*-
import time
import pika
import sys

__author__ = 'Yue'

var = 0

def callback(ch, method, properties, body):
    # this cannot be written as var += 1 without the global declaration;
    # see "Python global variables vs. local variables" for why
    global var
    var += 1
    if var == 20:
        sys.exit()
    print "1 received %r" % (body,)
    time.sleep(body.count('.'))
    print "Done"
    ch.basic_ack(delivery_tag=method.delivery_tag)

if __name__ == '__main__':
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='Kadima')
    # caution: no_ack=True means auto-acknowledge, so the basic_ack above
    # has no effect; for real manual acks this should be no_ack=False
    channel.basic_consume(callback, queue='Kadima', no_ack=True)
    print '[1] waiting for messages'
    channel.start_consuming()
work only runs up to task 20, but work2 does not continue from task 22; instead it picks up again at 37. Frankly, this leaves me a little confused; later I want to look into how RabbitMQ handles consumers that die mid-task.
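As a way to reason about what acknowledgments change, here is a toy queue model (my own sketch, not how RabbitMQ or pika is implemented): with auto-ack (no_ack=True) an in-flight message is simply lost when its consumer dies, while with manual acks the broker requeues everything that was delivered but never acknowledged:

```python
class ToyQueue:
    """Toy model of a broker queue, illustrating auto-ack vs. manual ack."""
    def __init__(self, messages):
        self.pending = list(messages)
        self.unacked = {}   # delivery_tag -> message (manual-ack mode only)
        self._tag = 0

    def deliver(self, no_ack=False):
        self._tag += 1
        msg = self.pending.pop(0)
        if not no_ack:
            self.unacked[self._tag] = msg  # broker tracks in-flight messages
        return self._tag, msg

    def ack(self, tag):
        del self.unacked[tag]

    def consumer_died(self):
        # manual-ack mode: unacked messages are requeued in delivery order;
        # auto-ack mode: nothing was tracked, so in-flight messages are gone
        requeued = [self.unacked[t] for t in sorted(self.unacked)]
        self.unacked.clear()
        self.pending = requeued + self.pending


# auto-ack: a message handed to a consumer that then dies is lost
q_auto = ToyQueue(['task1', 'task2', 'task3'])
q_auto.deliver(no_ack=True)
q_auto.consumer_died()
print(q_auto.pending)    # ['task2', 'task3'] -- task1 is lost

# manual ack: the in-flight message survives the crash
q_manual = ToyQueue(['task1', 'task2', 'task3'])
tag, msg = q_manual.deliver()
q_manual.consumer_died()
print(q_manual.pending)  # ['task1', 'task2', 'task3'] -- task1 requeued
```

The point of the model: the broker only requeues what it still remembers as unacknowledged, which is exactly what no_ack=True throws away.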