Celery Practice Guide

Celery principle: Celery implements a typical producer-consumer model of message processing and task scheduling. Consumers (workers) and producers (clients) communicate through a message system (the broker). The typical scenario is:
- The client process (producer): when a user operation takes a long time or occurs very frequently, consider connecting to the message system and sending a task to the broker.
- A worker process (consumer) runs in the background; when it finds a task in the broker that is due for execution, it takes the task and executes it according to its type and parameters.
Typical scenarios in practice:
- Simple scheduled tasks:
- The Celery way to replace a crontab entry:
from celery import Celery
from celery.schedules import crontab

app = Celery("tasks", backend="redis://localhost", broker="redis://localhost")
app.conf.update(CELERYBEAT_SCHEDULE={
    "add": {
        "task": "celery_demo.add",
        "schedule": crontab(minute="*"),
        "args": (16, 16),
    },
})

@app.task
def add(x, y):
    return x + y
- Run the celery worker so it acts as the consumer, automatically fetching tasks from the broker and executing them.
- 'celery -A celery_demo worker'
- Run the celery beat client to produce task messages according to the schedule and publish them to the broker.
- 'celery -A celery_demo beat'
- Install and run flower for easy monitoring of task status
- 'celery flower -A celery_demo'
- or set a login password: 'celery flower -A celery_demo --basic_auth=user1:password1,user2:password2'
- Multiple asynchronous tasks - chaining tasks (a minimal sketch follows):
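A minimal sketch of chaining, assuming the add task defined above; each subtask's result is passed as the first argument of the next:

from celery import chain

# ((4 + 4) + 8) + 16: each subtask's result feeds the next one
res = chain(add.s(4, 4), add.s(8), add.s(16))()
print(res.get())  # 32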
- Tasks that automatically retry on failure
- Retry method: add self as the first parameter of the task function, and set bind=True at the same time.
- Demo Code:
@app.task(bind=True, default_retry_delay=300, max_retries=5)
def my_task_a(self):
    try:
        print("doing stuff here...")
    except SomeNetworkException as e:
        print("maybe do some cleanup...")
        self.retry(exc=e)
- Whether a retried task is re-queued immediately or waits for a specified time can be controlled through the parameters of self.retry() (see the sketch below).
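A hedged sketch of both options, reusing the placeholder SomeNetworkException from above; my_task_b and the delays are illustrative. countdown waits a relative number of seconds, eta waits until an absolute time:

from datetime import datetime, timedelta

@app.task(bind=True, max_retries=3)
def my_task_b(self):
    try:
        print("doing stuff here...")
    except SomeNetworkException as e:
        # re-queue after a fixed 60-second delay...
        raise self.retry(exc=e, countdown=60)
        # ...or at an absolute time:
        # raise self.retry(exc=e, eta=datetime.utcnow() + timedelta(minutes=5))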
- Dispatching tasks to different queues
- A task is mapped to one or more queues by configuring matching routing_key naming patterns for the task and the queues.
- For example: configure each queue's exchange and routing_key with a common pattern, and define the task's routing_key to match it (see the sketch after the list below).
- The available exchange strategies:
- Direct: routes by an exact match on the routing_key.
- Topic: the exchange pushes a message to one or more queues based on wildcard matching of the routing_key.
- Fanout: broadcasts each message to all bound queues; typically used for very large or time-consuming tasks.
- Reference: http://celery.readthedocs.org/en/latest/userguide/routing.html#routers
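A minimal sketch using the old-style (Celery 3.x) setting names this guide uses elsewhere; the queue names, the "tasks" exchange, and the "task.#" patterns are illustrative assumptions:

from kombu import Exchange, Queue

app.conf.update(
    CELERY_QUEUES=(
        # both queues share one topic exchange; routing_key patterns decide delivery
        Queue("default", Exchange("tasks", type="topic"), routing_key="task.default"),
        Queue("feeds", Exchange("tasks", type="topic"), routing_key="task.feed.#"),
    ),
    CELERY_DEFAULT_EXCHANGE="tasks",
    CELERY_DEFAULT_EXCHANGE_TYPE="topic",
    CELERY_DEFAULT_ROUTING_KEY="task.default",
    # map the task to a routing_key that matches the "task.feed.#" pattern
    CELERY_ROUTES={"celery_demo.add": {"routing_key": "task.feed.import"}},
)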
- Advanced Configuration
- Whether results are saved:
- Email notification on task failure:
- Disabling rate limits (a combined sketch of these three settings follows):
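A minimal sketch, assuming the old-style (Celery 3.x) setting names and placeholder email addresses:

app.conf.update(
    CELERY_IGNORE_RESULT=True,           # do not store task results
    CELERY_SEND_TASK_ERROR_EMAILS=True,  # email the admins when a task fails
    ADMINS=(("admin", "admin@example.com"),),  # hypothetical recipient
    SERVER_EMAIL="celery@example.com",         # hypothetical sender address
    CELERY_DISABLE_RATE_LIMITS=True,     # turn rate limiting off entirely
)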
- Autoreload (*nix systems):
- Celery automatically reloads by monitoring the source code directory for changes.
- How it works: depends on inotify (Linux) or kqueue (OS X/BSD).
- Install the dependency: $ pip install pyinotify
- Optionally, force the stat-based fsnotify implementation: $ env CELERYD_FSNOTIFY=stat celery worker -l info --autoreload
- Start: celery -A appname worker --autoreload
- Auto-scale methods:
- Enable autoscale: 'celery -A proj worker --autoscale=10,3' (at most 10, at least 3 pool processes)
- Temporarily have workers consume from an additional queue (add a consumer): $ celery -A proj control add_consumer foo -d worker1.local
- Temporarily stop consuming from a queue (cancel a consumer): $ celery -A proj control cancel_consumer foo -d worker1.local
- To move the scheduled-task configuration from app.conf into a DB:
- Specify the custom scheduler class name at startup; the default is celery.beat.PersistentScheduler:
- 'celery -A proj beat -S djcelery.schedulers.DatabaseScheduler'
- Starting and stopping workers:
- Start as a daemon: http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#daemonizing
- The root user can use the celeryd init scripts
- Non-privileged users: celery multi start worker1 -A appname --autoreload --pidfile="/home/run/celery/%n.pid" --logfile="/home/log/celery/%n.log"
- or 'celery worker --detach'
- Stopping:
- ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
- Integrating with Flask
- After integration, Flask acts as the producer, creating tasks and sending them to the broker; the standalone worker process started by celery gets the tasks from the broker, executes them, and returns the results.
- Calling a task asynchronously from Flask: add.delay(x, y); when extra options or named parameters are needed, use add.apply_async(args=(x, y), countdown=30)
- Fetching the task result in Flask: the call returns an AsyncResult, whose .ready() and .get() methods retrieve the outcome (a sketch follows).
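A minimal sketch of the integration, assuming a celery_demo module with the add task from earlier; the route paths and view names are illustrative:

from flask import Flask, jsonify
from celery_demo import add  # the Celery task defined earlier (assumed importable)

flask_app = Flask(__name__)

@flask_app.route("/add/<int:x>/<int:y>")
def enqueue_add(x, y):
    result = add.delay(x, y)           # Flask acts as the producer
    return jsonify(task_id=result.id)  # hand the task id back to the client

@flask_app.route("/result/<task_id>")
def fetch_result(task_id):
    result = add.AsyncResult(task_id)  # look the task up by id
    if result.ready():
        return jsonify(value=result.get())
    return jsonify(state=result.state)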
- Startup issues after integrating with Flask
- Because celery's default task names (routing keys) are derived from the import path on the producer side, start the worker from the project's top-level directory so the names match; otherwise a KeyError will appear (one way to avoid this is sketched below).
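One way to sidestep the import-path mismatch is to name tasks explicitly, a sketch:

# an explicit name is independent of how the module is imported,
# so the producer and the worker always agree on the task name
@app.task(name="celery_demo.add")
def add(x, y):
    return x + y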
- Performance improvements: the eventlet and gevent (greenlet-based) execution pools, e.g. 'celery -A proj worker -P eventlet -c 1000'
Official reference: http://docs.celeryproject.org/en/latest/userguide/index.html