Getting started with Celery and Redis


Celery is a task-processing system widely used in web applications.

It can be used in the following situations:

make network calls outside the request-response cycle . A web server should respond to incoming requests immediately; if a slow network call is needed while handling a request, that call should be completed outside the cycle. For example, when a user registers on a website, an activation message needs to be sent. Sending the message is a network call that takes 2-3 seconds, and the user should not be kept waiting for it. Sending the activation message should therefore happen outside the request-response cycle, which Celery makes possible.

divide a large task consisting of several independent parts into smaller tasks . Suppose you want to build a user's Facebook timeline. Facebook provides different endpoints for different kinds of data: one endpoint returns the photos in a user's timeline, another returns the posts, another returns the likes, and so on. If your function has to call 5 Facebook endpoints one after another and each network call takes 2 seconds on average, the function takes 10 seconds to finish. However, you can split the work into 5 separate tasks (which, as you will soon see, is easy to do) and let Celery handle them. Celery can talk to the 5 endpoints in parallel, so all the responses come back in about 2 seconds. A minimal sketch of this fan-out pattern follows below.
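To make the fan-out idea concrete, here is a minimal sketch, not the article's own code: it assumes a local Redis instance used as both broker and result backend, and the task and function names (fetch_endpoint, fetch_timeline) are hypothetical. It uses Celery's group primitive to dispatch all the calls at once.

from celery import Celery, group
import requests

# Hypothetical app; assumes Redis on localhost as broker and result backend.
app = Celery('timeline',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def fetch_endpoint(url):
    # One network call per task; with several worker processes available,
    # many of these can run at the same time.
    return requests.get(url).status_code

def fetch_timeline(endpoint_urls):
    # group() puts all tasks on the queue in one go, so with enough worker
    # concurrency the calls overlap instead of running one after another.
    job = group(fetch_endpoint.s(url) for url in endpoint_urls)
    # .get() blocks until every sub-task has finished and returns a list of
    # their return values (this requires the result backend configured above).
    return job.apply_async().get(timeout=10)

With 5 workers available, the whole group finishes in roughly the time of the slowest single call rather than the sum of all of them.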

A simple Celery example

Suppose we have a function to which we pass a list of URLs. The function needs to fetch the response for each of these URLs.

Without Celery

Create a file celery_blog.py :

import requests
import time

def func(urls):
    start = time.time()
    for url in urls:
        resp = requests.get(url)
        print(resp.status_code)
    print("It took", time.time() - start, "seconds")

if __name__ == "__main__":
    func([
        "http://oneapm.com",
        "http://jd.com",
        "https://taobao.com",
        "http://baidu.com",
        "http://news.oneapm.com",
    ])

Run:

python celery_blog.py

Output:

Using Celery

The most important component of a program that uses Celery is the Celery worker.

In the web-application registration example, the Celery worker is used to send the activation message.

In the Facebook case, the Celery worker is used to fetch the different endpoints.

In our celery_blog.py example, the Celery worker is used to fetch the URLs.
The Celery worker and your application/script are separate processes that run independently of each other, so they need some way to communicate.

The application code needs to put the task somewhere the Celery worker can pick it up and execute it. For example, the application code places the task on a message queue, and the Celery worker picks the task up from the queue and runs it. We will use Redis as the message queue.

Make sure you have Redis installed and that you can start it with redis-server .
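A quick way to confirm the server is up is to ping it with the bundled redis-cli client; it should answer PONG:

redis-cli ping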

Make sure you have Celery installed.
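If either package is missing, both of them (plus requests, which the examples use) can typically be installed with pip:

pip install celery redis requests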

Modify the celery_blog.py file as follows:

from celery import Celery
import requests

app = Celery('celery_blog', broker='redis://localhost:6379/1')

@app.task
def fetch_url(url):
    resp = requests.get(url)
    print(resp.status_code)

def func(urls):
    for url in urls:
        fetch_url.delay(url)

if __name__ == "__main__":
    func([
        "http://oneapm.com",
        "http://jd.com",
        "https://taobao.com",
        "http://baidu.com",
        "http://news.oneapm.com",
    ])

Code explanation: we need a Celery application instance to register tasks with, so we create one named app. The broker argument tells Celery to use Redis (database 1 on localhost) as the message queue, the @app.task decorator turns fetch_url into a Celery task, and calling fetch_url.delay(url) puts a task message on the queue instead of running the function immediately.
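If you are curious, you can watch the messages land on the queue before any worker is started. This is only a sketch using the redis-py client; it assumes Celery's default queue name ('celery') and the broker database (1) used above:

import redis

# Connect to the same Redis database the broker URL above points at (db 1).
r = redis.Redis(host='localhost', port=6379, db=1)

# With the Redis broker, pending task messages sit in a list whose default
# name is 'celery'; its length is the number of tasks waiting for a worker.
print(r.llen('celery'))

# Each entry is a JSON message describing one task invocation.
print(r.lrange('celery', 0, 0))

Once a worker is running, the list drains almost immediately, so run the script with the worker stopped if you want to see the messages pile up.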

Start three terminals:

In the first terminal, run redis-server

In the second terminal, run celery worker -A celery_blog -l info -c 5 . Here -A names the module that contains the Celery app, -l info sets the log level, and -c 5 starts 5 concurrent worker processes. The output shows that Celery is running successfully.

In the third terminal, run the script: python celery_blog.py

The output in the second terminal looks like the following:

Keeping the Celery code and configuration in separate files

In the example above, we wrote only one Celery task. A real project, however, may have multiple modules, and you may want different tasks in different modules. So let's move the Celery configuration into a separate file.

Create celery_config.py :

from celery import Celery

app = Celery('celery_config', broker='redis://localhost:6379/0', include=['celery_blog'])

Modify the celery_blog.py code as follows:

import requests
from celery_config import app

@app.task
def fetch_url(url):
    resp = requests.get(url)
    print(resp.status_code)

def func(urls):
    for url in urls:
        fetch_url.delay(url)

if __name__ == "__main__":
    func([
        "http://oneapm.com",
        "http://jd.com",
        "https://taobao.com",
        "http://baidu.com",
        "http://news.oneapm.com",
    ])

Stop the previous celery worker and run:

celery worker -A celery_config -l info -c 5

Open IPython and run the following commands:

In [1]: from celery_blog import func
In [2]: func(["http://oneapm.com", "http://jd.com", "https://taobao.com", "http://baidu.com", "http://news.oneapm.com"])

The output is as follows:

Add a new task to a different file

You can add a new module and define tasks in it. Create a module celery_add.py with the following content:

from celery_config import app

@app.task
def add(a, b):
    return a + b

Change celery_config.py to include the new module celery_add , as follows:

from celery import Celery

app = Celery('celery_config', broker='redis://localhost:6379/0', include=['celery_blog', 'celery_add'])

In IPython, enter:

In [1]: from celery_add import add
In [2]: add.delay(4, 5)

The output is as follows:
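One thing to keep in mind: add.delay(4, 5) returns an AsyncResult handle immediately rather than the sum itself. To read the computed value back in Python you need a result backend configured; a minimal sketch, assuming Redis is reused as the backend:

# In celery_config.py, add a result backend (here: the same local Redis).
from celery import Celery

app = Celery('celery_config',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0',
             include=['celery_blog', 'celery_add'])

After restarting the worker, result = add.delay(4, 5) followed by result.get(timeout=10) returns 9.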

Running Redis and Celery on different machines

So far, our script, the Celery worker, and Redis have all been running on the same machine. There is no need for this; all three can run on different machines.

Our Celery tasks involve network requests, so running the Celery worker on a network-optimized machine can speed the tasks up. Redis is an in-memory database, so it runs more efficiently on a memory-optimized machine.

In this example, I will run the script and the Celery worker on the local system, and Redis on a separate server.

Modify celery_config.py to:

from celery import Celery

app = Celery('celery_config', broker='redis://192.168.118.148:6379/0', include=['celery_blog'])

Now when I run a task, the script puts it on the Redis server running at 192.168.118.148.

The Celery worker also talks to 192.168.118.148, fetches tasks from that Redis server, and executes them.

Note: you must use the address of a server that is actually running redis-server. Redis has since been stopped on my server, so you will not be able to connect to that address.

Reference article: Getting started with celery and Redis

This article was compiled by a OneAPM engineer. OneAPM is an emerging leader in application performance management, helping enterprise users and developers easily locate slow code and capture SQL statements in real time. To read more technical articles, please visit the OneAPM official blog.
