Python learning notes 3: celery distributed task processor (python3-celery)
Celery is an asynchronous task framework written in Python. It is very powerful; for details, refer to the official website. Here we mainly provide a demo so you can start using the framework quickly.
1. By default we use redis as the carrier of task messages, so install both packages: `pip install celery redis`. Then create tasks.py:
```python
import sys
reload(sys)
sys.setdefaultencoding('utf-8')  # without this line, printing Chinese in the log raises an error (Python 2)

from celery import Celery
from celery.utils.log import get_task_logger  # celery's log module

# use local redis db 0 as the message carrier; the first argument is the task name
celery = Celery('task', broker='redis://127.0.0.1:6379/0')
logger = get_task_logger(__name__)

# bind enables self, max_retries is the number of retries,
# default_retry_delay is the default retry interval
@celery.task(bind=True, max_retries=10, default_retry_delay=1 * 6)
def exec_task_order_overtime(self, order_id):
    # executed after the order expires; BaseHandler and Return come from the author's project code
    try:
        logger.info('==========> exec_task_order_overtime order_id = %s' % order_id)
        success = BaseHandler.context_services.order_overtime_task_service.process_over_time(order_id)
        if success is False:
            logger.error('<========== order_overtime_task_service.process_over_time Failed, order_id = %s' % order_id)
            raise Return(False)
        else:
            logger.info('<========== order_overtime_task_service.process_over_time Success, order_id = %s' % order_id)
    except Exception as exc:
        logger.info('exec_task_order_overtime retry, order_id = %s' % order_id)
        # retry after 3 seconds; countdown here takes priority over default_retry_delay in the decorator
        raise self.retry(exc=exc, countdown=3)
```
Run the following command in the directory that contains the file; after that, celery starts consuming the tasks: `$ celery worker -A tasks --loglevel=info`. What about the producer? After executing the statement below, the `celery` key in redis db 0 is a list that stores the pending tasks. If the worker is not running, you can see them clearly (see the small redis sketch after the producer code); if the worker is running, they may already have been consumed.
```python
from celery import Celery

celery = Celery('task', broker='redis://127.0.0.1:6379/0')  # the same message carrier as the worker

push_task_id = celery.send_task(
    'tasks.exec_task_order_overtime',
    [order_id],    # positional arguments, must be a list (see the send_task source); a third argument can be a dict of kwargs, not used here
    countdown=10)  # how many seconds to delay before the pushed message is executed
```
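To see what actually sits in the queue, you can look at redis directly while the worker is stopped. This is only a quick sketch, assuming the redis-py client is installed; `celery` is the default queue/list name used above:

```python
import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=0)  # same db as the broker
print(r.llen('celery'))           # number of task messages waiting in the 'celery' list
print(r.lrange('celery', 0, 0))   # raw body of the first queued message
```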
Question 1:
Some people may wonder why exec_task_order_overtime has a self parameter, while other tasks they have seen do not. The difference lies in how the task decorator is applied: with a plain @celery.task (the decorator without arguments), there is no self; passing bind=True to the task decorator gives the function access to self (the task type instance).
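A minimal sketch of the two forms side by side (the app name and task bodies here are made up; only the decorators matter):

```python
from celery import Celery

app = Celery('demo', broker='redis://127.0.0.1:6379/0')

@app.task
def plain_task(x):
    # no bind: the function only receives its own arguments, there is no self
    return x * 2

@app.task(bind=True)
def bound_task(self, x):
    # bind=True: the first argument is the task instance, so things like
    # self.request and self.retry() are available inside the task body
    return x * 2
```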
Question 2:
The arguments of the Celery constructor: the first is the module name, i.e. the name of this file, and the second is the broker (redis) address. As the official docs put it: the first argument to Celery is the name of the current module; this is only needed so that names can be automatically generated when the tasks are defined in the __main__ module. The second argument is the broker keyword argument, specifying the URL of the message broker you want to use (the docs example uses RabbitMQ, which is also the default option; here we use redis).
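A small check of the name generation described above (a hypothetical interactive session, assuming tasks.py is importable from the current directory):

```python
from tasks import exec_task_order_overtime

# the task name is derived from the module the task is defined in, and it is
# exactly the string we pass to send_task() on the producer side
print(exec_task_order_overtime.name)   # -> 'tasks.exec_task_order_overtime'
```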
Question 3:
Celery has no feedback mechanism when memory runs low. We know that in socket network transmission, when the receiver cannot keep up, the sender is blocked; celery has no such mechanism at all. We do not want to block the sending side, and of course we cannot, but when our worker cannot keep up, the messages are simply cached in redis. That delays message processing, but it avoids running out of memory. You may still need to do some handling yourself: for example, keep the number of retries as small as possible, record the result after a task finally fails, and, according to business needs, throw it back into the redis message queue again later (for example, 10 minutes later).
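A rough sketch of the "record the failure and re-enqueue it later" idea; the ten-minute delay and the helper name are illustrative, not from the original code:

```python
from celery import Celery

celery = Celery('task', broker='redis://127.0.0.1:6379/0')

def requeue_failed_order(order_id):
    # record the failure somewhere durable (database, log, ...) for later auditing,
    # then push the task back into the redis queue to run again 10 minutes later
    celery.send_task('tasks.exec_task_order_overtime', [order_id], countdown=600)
```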