Celery Common Configuration Summary: configuring the number of workers and the maximum number of tasks performed by a single worker


I use the following configuration:

#!/usr/bin/env python
import random
from kombu import serialization
from kombu import Exchange, Queue
import Ansibleservice

# Drop pickle decoding ('application/x-python-serialize') from the registry
serialization.registry._decoders.pop("application/x-python-serialize")

broker_url = Ansibleservice.getconfig('/etc/ansible/rabbitmq.cfg', 'rabbit', 'broker_url')
celerymq = Ansibleservice.getconfig('/etc/ansible/rabbitmq.cfg', 'celerymq', 'celerymq')

SECRET_KEY = 'top-secrity'
CELERY_BROKER_URL = broker_url
CELERY_RESULT_BACKEND = broker_url
CELERY_TASK_RESULT_EXPIRES = 3600  # value lost in the original post; 3600 s is an assumed placeholder
CELERYD_PREFETCH_MULTIPLIER = 4
CELERYD_CONCURRENCY = 1
CELERYD_MAX_TASKS_PER_CHILD = 1
CELERY_TIMEZONE = 'CST'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_QUEUES = (
    Queue(celerymq, Exchange(celerymq), routing_key=celerymq),
)
CELERY_IGNORE_RESULT = True
CELERY_SEND_EVENTS = False
CELERY_EVENT_QUEUE_EXPIRES = 60
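
For context, here is a minimal sketch of how a settings module like the one above is typically attached to a Celery application. The app name, the module path 'celeryconfig', and the run_playbook task are my own illustrative assumptions, not part of the original post:

from celery import Celery

app = Celery('ansible_tasks')           # hypothetical app name
app.config_from_object('celeryconfig')  # the settings module shown above

@app.task
def run_playbook(playbook):
    # hypothetical task body; stands in for whatever work the workers run
    pass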

RabbitMQ is used as the message queue.

Number of concurrent workers: 25.

Each worker is destroyed after executing at most one task (on completing a task, the process is destroyed and rebuilt, which frees its memory).
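
To illustrate how the queue declared in CELERY_QUEUES is used, a call can be routed to it explicitly. This sketch reuses the hypothetical run_playbook task from above:

# celerymq is the queue name read from rabbitmq.cfg in the config above
result = run_playbook.apply_async(args=('site.yml',), queue=celerymq)
print(result.id)  # CELERY_IGNORE_RESULT = True, so no return value is stored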


# -*- coding: utf-8 -*-
from datetime import timedelta
from settings import Redis_host, Redis_port, Redis_password, Redis_db_num

# Queues referenced by the program that do not exist in the broker are created immediately
CELERY_CREATE_MISSING_QUEUES = True

CELERY_IMPORTS = ("async_task.tasks", "async_task.notify")

# Use Redis as the task queue
BROKER_URL = 'redis://:' + Redis_password + '@' + Redis_host + ':' + str(Redis_port) + '/' + str(Redis_db_num)
#CELERY_RESULT_BACKEND = 'redis://:' + Redis_password + '@' + Redis_host + ':' + str(Redis_port) + '/10'

CELERYD_CONCURRENCY = 20  # number of concurrent workers
CELERY_TIMEZONE = 'Asia/Shanghai'
CELERYD_FORCE_EXECV = True  # very important; in some cases this prevents deadlocks
CELERYD_PREFETCH_MULTIPLIER = 1
CELERYD_MAX_TASKS_PER_CHILD = 100  # each worker executes at most 100 tasks and is then destroyed, to prevent memory leaks
# CELERYD_TASK_TIME_LIMIT = 60  # a single task may not run longer than this value, otherwise it is killed with SIGKILL
# BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 90}  # if a task is not acknowledged within this period after being issued, it is handed to another worker to execute

CELERY_DISABLE_RATE_LIMITS = True

# Periodic tasks
CELERYBEAT_SCHEDULE = {
    'msg_notify': {
        'task': 'async_task.notify.msg_notify',
        'schedule': timedelta(seconds=10),
        # 'args': (redis_db,),
        'options': {'queue': 'my_period_task'},
    },
    'report_result': {
        'task': 'async_task.tasks.report_result',
        'schedule': timedelta(seconds=10),
        # 'args': (redis_db,),
        'options': {'queue': 'my_period_task'},
    },
    # 'report_retry': {
    #     'task': 'async_task.tasks.report_retry',
    #     'schedule': timedelta(seconds=60),
    #     'options': {'queue': 'my_period_task'},
    # },
}

################################################
# Commands used to start beat and the worker:
# nohup celery beat -s /var/log/boas/celerybeat-schedule --logfile=/var/log/boas/celerybeat.log -l info &
# nohup celery worker -f /var/log/boas/boas_celery.log -l info &
################################################
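
The post does not show the task modules named in CELERY_IMPORTS. Here is a minimal sketch of what async_task/notify.py might contain; all names here are my own reconstructions, not from the original:

# async_task/notify.py (hypothetical reconstruction)
from celery import Celery

app = Celery('async_task')              # assumed app; the original does not show it
app.config_from_object('celeryconfig')  # assumed name of the config module above

@app.task
def msg_notify():
    # run every 10 seconds on queue 'my_period_task' per CELERYBEAT_SCHEDULE
    print('sending notification')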

The above is a summary of my work.

In addition, two settings need further explanation:

CELERYD_TASK_TIME_LIMIT

BROKER_TRANSPORT_OPTIONS

Use these very cautiously: if CELERYD_TASK_TIME_LIMIT is set too small, a task may still be running when its worker is killed; if the visibility_timeout in BROKER_TRANSPORT_OPTIONS is set too small, a task may be executed repeatedly, multiple times.
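
One way to avoid the hard SIGKILL is Celery's soft time limit, which raises SoftTimeLimitExceeded inside the task instead of killing the process outright, giving the task a chance to clean up. A minimal sketch with illustrative limits and a stand-in workload:

from celery import Celery
from celery.exceptions import SoftTimeLimitExceeded
import time

app = Celery('demo')

@app.task(soft_time_limit=50, time_limit=60)  # soft limit raises; hard limit kills
def report_result():
    try:
        time.sleep(120)  # stands in for long-running work
    except SoftTimeLimitExceeded:
        # clean up before the hard limit would SIGKILL the process
        pass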
